WorldWideScience

Sample records for research speech spokespersons

  1. 30 March 2009 - Representatives of the Danish Council for Independent Research Natural Sciences visiting the LHC tunnel at Point 1 with Collaboration Spokesperson F. Gianotti, Former Spokesperson P. Jenni and Transition Radiation Tracker Project Leader C. Rembser.

    CERN Document Server

    Maximilien Brice

    2009-01-01

    30 March 2009 - Representatives of the Danish Council for Independent Research Natural Sciences visiting the LHC tunnel at Point 1 with Collaboration Spokesperson F. Gianotti, Former Spokesperson P. Jenni and Transition Radiation Tracker Project Leader C. Rembser.

  2. Speech Research

    Science.gov (United States)

    Several articles addressing topics in speech research are presented. The topics include: exploring the functional significance of physiological tremor: a biospectroscopic approach; differences between experienced and inexperienced listeners to deaf speech; a language-oriented view of reading and its disabilities; phonetic factors in letter detection; categorical perception; short-term recall by deaf signers of American Sign Language; a common basis for auditory sensory storage in perception and immediate memory; phonological awareness and verbal short-term memory; initiation versus execution time during manual and oral counting by stutterers; trading relations in the perception of speech by five-year-old children; the role of the strap muscles in pitch lowering; phonetic validation of distinctive features; consonants and syllable boundaries; and vowel information in postvocalic frictions.

  3. National Science Foundation Assistant Director for Mathematics and Physical Sciences Tony Chan (USA) visiting CMS experiment on 23rd May 2007 with Spokesperson T. Virdee, Deputy Spokesperson R. Cousins, Advisor to CERN Director-General J. Ellis, US CMS Research Program Deputy Manager D. Marlow and FNAL D. Green

    CERN Multimedia

    Maximilien Brice

    2007-01-01

    National Science Foundation Assistant Director for Mathematics and Physical Sciences Tony Chan (USA) visiting CMS experiment on 23rd May 2007 with Spokesperson T. Virdee, Deputy Spokesperson R. Cousins, Advisor to CERN Director-General J. Ellis, US CMS Research Program Deputy Manager D. Marlow and FNAL D. Green

  4. 17 September 2013 - Estonian Minister of Education and Research J. Aaviksoo signing the guest book with CERN Director-General R. Heuer; visiting the TOTEM facility with TOTEM Collaboration Spokesperson S. Giani; in the LHC tunnel at Point 5 with International Relations Adviser T. Kurtyka and visiting the CMS cavern with CMS Collaboration Spokesperson J. Incandela. International Relations Adviser R. Voss present.

    CERN Multimedia

    Anna Pantelia

    2013-01-01

    17 September 2013 - Estonian Minister of Education and Research J. Aaviksoo signing the guest book with CERN Director-General R. Heuer; visiting the TOTEM facility with TOTEM Collaboration Spokesperson S. Giani; in the LHC tunnel at Point 5 with International Relations Adviser T. Kurtyka and visiting the CMS cavern with CMS Collaboration Spokesperson J. Incandela. International Relations Adviser R. Voss present.

  5. 15th March 2011 - Singapore National Research Foundation Permanent Secretary (National Research and Development) T. M. Kian signing the guest book with Head of International Relations F. Pauss and visiting the CMS control centre with Collaboration Spokesperson G. Tonelli.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    15th March 2011 - Singapore National Research Foundation Permanent Secretary (National Research and Development) T. M. Kian signing the guest book with Head of International Relations F. Pauss and visiting the CMS control centre with Collaboration Spokesperson G. Tonelli.

  6. 18 March 2008 - Director, Basic and Generic Research Division, Research Promotion Bureau, Japanese Ministry of Education, Culture, Sports, Science and Technology Prof. Ohtake visiting the ATLAS cavern with Spokesperson P. Jenni.

    CERN Multimedia

    Maximilien Brice

    2008-01-01

    18 March 2008 - Director, Basic and Generic Research Division, Research Promotion Bureau, Japanese Ministry of Education, Culture, Sports, Science and Technology Prof. Ohtake visiting the ATLAS cavern with Spokesperson P. Jenni.

  7. 25th May 2011 - Egyptian Minister for Scientific Research, Science and Technology A. Ezzat Salama signing the guest book with CERN Director-General R. Heuer and visiting CMS control centre with Collaboration Spokesperson G. Tonelli.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    He visited the CMS control room on the Meyrin site with, from left, CMS Spokesperson Guido Tonelli; Alaa Awad, Fayum University; Hisham Badr, Ambassador to the UN in Geneva; and Maged Elsherbiny, President of the Scientific Research Academy.

  8. 28 October 2013 - Former US Vice President A. Gore signing the guest book with Technology Department Head F. Bordry, Head of International Relations R. Voss, Director for Research and Scientific Computing S. Bertolucci and CMS Collaboration Spokesperson J. Incandela.

    CERN Multimedia

    Maximilien Brice

    2013-01-01

    28 October 2013 - Former US Vice President A. Gore signing the guest book with Technology Department Head F. Bordry, Head of International Relations R. Voss, Director for Research and Scientific Computing S. Bertolucci and CMS Collaboration Spokesperson J. Incandela.

  9. Dr Mauro Dell’Ambrogio, State Secretary for Education and Research of the Swiss Confederation, visiting the ATLAS cavern and the LHC machine with Collaboration Spokesperson P. Jenni and Technical Coordinator M. Nessi.

    CERN Multimedia

    Maximilien Brice

    2008-01-01

    Dr Mauro Dell’Ambrogio, State Secretary for Education and Research of the Swiss Confederation, visiting the ATLAS cavern and the LHC machine with Collaboration Spokesperson P. Jenni and Technical Coordinator M. Nessi.

  10. Represented Speech in Qualitative Health Research

    DEFF Research Database (Denmark)

    Musaeus, Peter

    2017-01-01

    Represented speech refers to speech where we reference somebody. Represented speech is an important phenomenon in everyday conversation, health care communication, and qualitative research. This case will draw first from a case study on physicians’ workplace learning and second from a case study on nurses’ apprenticeship learning. The aim of the case is to guide the qualitative researcher to use own and others’ voices in the interview and to be sensitive to represented speech in everyday conversation. Moreover, reported speech matters to health professionals who aim to represent the voice of their patients. Qualitative researchers and students might learn to encourage interviewees to elaborate different voices or perspectives. Qualitative researchers working with natural speech might pay attention to how people talk and use represented speech. Finally, represented speech might be relevant...

  11. 26th February 2009 - US Google Vice President and Chief Internet Evangelist V. Cerf signing the guest book with Director for Research and Computing S. Bertolucci; visiting the ATLAS control room and experimental area with Collaboration Spokesperson P. Jenni.

    CERN Multimedia

    Maximilien Brice

    2009-01-01

    HI-0902038 05: IT Department Head, F. Hemmer; US Google Vice President and Chief Internet Evangelist V. Cerf; Computing Security Officer and Colloquium Convenor D. R. Myers; Member of the Internet Society Advisory Council F. Flückiger; Director for Research and Scientific Computing, S. Bertolucci ; Honorary Staff Member, B. Segal. HI-0902038 16: Computing Security Officer and Colloquium Convenor D. R. Myers; UC Irvine, ATLAS Deputy Spokesperson elect A. J. Lankford; US Google Vice President and Chief Internet Evangelist V. Cerf; ATLAS Collaboration Spokesperson P. Jenni; IT Department Head, F. Hemmer.

  12. 7th April 2011 - Romanian President of the National Authority for Scientific Research, State Secretary, Ministry for Education, Research, Youth and Sport, D. M. Ciuparu signing the guest book with Director for Research S. Bertolucci and visiting the ALICE surface building with Collaboration Spokesperson P. Giubellino.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    7th April 2011 - Romanian President of the National Authority for Scientific Research, State Secretary, Ministry for Education, Research, Youth and Sport, D. M. Ciuparu signing the guest book with Director for Research S. Bertolucci and visiting the ALICE surface building with Collaboration Spokesperson P. Giubellino.

  13. 11 July 2011 - Carleton University Ottawa, Canada Vice President (Research and International) K. Matheson in the ATLAS visitor centre with Collaboration Spokesperson F. Gianotti, accompanied by Adviser J. Ellis and signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci.

    CERN Multimedia

    Jean-Claude Gadmer

    2011-01-01

    11 July 2011 - Carleton University Ottawa, Canada Vice President (Research and International) K. Matheson in the ATLAS visitor centre with Collaboration Spokesperson F. Gianotti, accompanied by Adviser J. Ellis and signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci.

  14. Tuesday 28 January 2014 - K. E. Huthmacher, Ministerialdirektor, Provision for the Future - Basic and Sustainability Research, Federal Ministry of Education and Research (BMBF), visiting the stands with R. Heuer, CERN Director-General, on the occasion of the Inauguration of the Industrial Exhibition Germany@CERN and visiting the ATLAS cavern with D. Charlton, ATLAS Collaboration Spokesperson, and R. Voss, Head of International Relations.

    CERN Multimedia

    Pantelia, Anna

    2014-01-01

    Tuesday 28 January 2014 - K. E. Huthmacher, Ministerialdirektor, Provision for the Future - Basic and Sustainability Research, Federal Ministry of Education and Research (BMBF), visiting the stands with R. Heuer, CERN Director-General, on the occasion of the Inauguration of the Industrial Exhibition Germany@CERN and visiting the ATLAS cavern with D. Charlton, ATLAS Collaboration Spokesperson, and R. Voss, Head of International Relations.

  15. 28 March 2014 - Italian Minister of Education, University and Research S. Giannini welcomed by CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci in the ATLAS experimental cavern with Former Collaboration Spokesperson F. Gianotti. Signature of the guest book with Belgian State Secretary for the Scientific Policy P. Courard.

    CERN Multimedia

    Gadmer, Jean-Claude

    2014-01-01

    28 March 2014 - Italian Minister of Education, University and Research S. Giannini welcomed by CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci in the ATLAS experimental cavern with Former Collaboration Spokesperson F. Gianotti. Signature of the guest book with Belgian State Secretary for the Scientific Policy P. Courard.

  16. Perspectives of Metaphor Research in Business Speech Communication

    OpenAIRE

    清水,利宏

    2009-01-01

    This paper explores metaphor research, especially that of business speeches. By reviewing the research background of Conceptual Metaphor Theory and Blending Theory, the characteristics of business speeches--as the metaphor research target--are explained. The 'mental distance' concept between a source domain and a target domain is examined, and, with some illustrations, this paper explains that metaphorical expressions in business speeches should be analyzed not as a single and individual disc...

  17. 27 August 2013 - Signature of an Agreement between KTO Karatay University in Turkey, represented by the Dean of Engineering, Professor Ali Okatan; CERN, represented by Director for Research and Computing Dr Sergio Bertolucci; and the ALICE Collaboration, represented by ALICE Collaboration Spokesperson Dr Paolo Giubellino.

    CERN Multimedia

    Maximilien Brice

    2013-01-01

    27 August 2013 - Signature of an Agreement between KTO Karatay University in Turkey, represented by the Dean of Engineering, Professor Ali Okatan; CERN, represented by Director for Research and Computing Dr Sergio Bertolucci; and the ALICE Collaboration, represented by ALICE Collaboration Spokesperson Dr Paolo Giubellino.

  18. 28th February 2011 - Turkish Minister of Foreign Affairs A. Davutoğlu signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci and Head of International Relations F. Pauss; meeting the CERN Turkish Community at Point 1; visiting the ATLAS control room with Former Collaboration Spokesperson P. Jenni.

    CERN Document Server

    Maximilien Brice

    2011-01-01

    28th February 2011 - Turkish Minister of Foreign Affairs A. Davutoğlu signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci and Head of International Relations F. Pauss; meeting the CERN Turkish Community at Point 1; visiting the ATLAS control room with Former Collaboration Spokesperson P. Jenni.

  19. 19 July 2013 - Chairman of the Policy Committee, European Cancer Organisation, President, European Association for Cancer Research E. Celis visiting the ATLAS experimental cavern with ATLAS Collaboration Deputy Spokesperson, B. Heinemann and signing the Guest Book with Director for Accelerators and Technology S. Myers. Life Sciences Adviser M. Dosanjh present.

    CERN Multimedia

    Anna Pantelia

    2013-01-01

    19 July 2013 - Chairman of the Policy Committee, European Cancer Organisation, President, European Association for Cancer Research E. Celis visiting the ATLAS experimental cavern with ATLAS Collaboration Deputy Spokesperson, B. Heinemann and signing the Guest Book with Director for Accelerators and Technology S. Myers. Life Sciences Adviser M. Dosanjh present.

  20. 28 January 2011 - German State Secretary Ministry for Innovation, Science and Research of North Rhine-Westphalia H. Dockter in the ATLAS experimental cavern at LHC Point 1 with Former Spokesperson P. Jenni; signing the guest book with Adviser R. Voss.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    28 January 2011 - German State Secretary Ministry for Innovation, Science and Research of North Rhine-Westphalia H. Dockter in the ATLAS experimental cavern at LHC Point 1 with Former Spokesperson P. Jenni; signing the guest book with Adviser R. Voss.

  1. 14th March 2011 - Australian Senator the Hon. K. Carr, Minister for Innovation, Industry, Science and Research, in the ATLAS Visitor Centre with Collaboration Spokesperson F. Gianotti, visiting the SM18 area with G. De Rijk and the Computing Centre with Department Head F. Hemmer, and signing the guest book with Director-General R. Heuer and Head of International Relations F. Pauss

    CERN Multimedia

    Jean-claude Gadmer

    2011-01-01

    14th March 2011 - Australian Senator the Hon. K. Carr, Minister for Innovation, Industry, Science and Research, in the ATLAS Visitor Centre with Collaboration Spokesperson F. Gianotti, visiting the SM18 area with G. De Rijk and the Computing Centre with Department Head F. Hemmer, and signing the guest book with Director-General R. Heuer and Head of International Relations F. Pauss

  2. Mr Lars Leijonborg, Minister for Higher Education and Research of Sweden, visiting the ATLAS cavern, the ATLAS control room and the LHC machine at Point 1 with Collaboration Spokesperson P. Jenni and Dr. Jos Engelen, Chief Scientific Officer of CERN.

    CERN Multimedia

    Maximilien Brice

    2008-01-01

    Mr Lars Leijonborg, Minister for Higher Education and Research of Sweden, visiting the ATLAS cavern, the ATLAS control room and the LHC machine at Point 1 with Collaboration Spokesperson P. Jenni and Dr. Jos Engelen, Chief Scientific Officer of CERN.

  3. 27 February 2012 - Director of the Health Directorate at the Research DG European Commission R. Draghia-Akli in the ATLAS visitor centre with ATLAS Former Collaboration Spokesperson P. Jenni and Head of CERN EU Projects Office S. Stavrev; in the LHC superconducting magnet test hall with E. Todesco; and signing the guest book with CERN Director-General R. Heuer.

    CERN Multimedia

    Michel Blanc

    2012-01-01

    27 February 2012 - Director of the Health Directorate at the Research DG European Commission R. Draghia-Akli in the ATLAS visitor centre with ATLAS Former Collaboration Spokesperson P. Jenni and Head of CERN EU Projects Office S. Stavrev; in the LHC superconducting magnet test hall with E. Todesco; and signing the guest book with CERN Director-General R. Heuer.

  4. Philosophy of Research in Motor Speech Disorders

    Science.gov (United States)

    Weismer, Gary

    2006-01-01

    The primary objective of this position paper is to assess the theoretical and empirical support that exists for the Mayo Clinic view of motor speech disorders in general, and for oromotor, nonverbal tasks as a window to speech production processes in particular. Literature both in support of and against the Mayo Clinic view and the associated use…

  5. Spokespersons in media campaigns of non-profit organizations

    Directory of Open Access Journals (Sweden)

    Milovanović Dragana

    2014-01-01

    The subject of this research is how spokespersons can be used in campaigns of non-profit organizations, with the goal of increasing their visibility and gaining public support. Namely, many companies employ celebrities for their media campaigns as protagonists and promoters of brand values. With their appearance and engagement, celebrities transfer part of their image and credibility to the brand, which widens and enriches the field of associations which brands trigger in consumers' consciousness. Non-profit organizations could get similar benefits out of these campaigns. In a society where there is a certain level of fascination with celebrities, i.e. celebrity culture, their influence can be used not only to attract attention to the goods, but also to ideas. The goal of the paper is to show how spokespersons can influence behavior and attitudes of the public by participating in media campaigns, and also the important aspects of choosing a spokesperson. The paper is intended as a starting point for practitioners, so they can design creative ideas based on this technique in the non-profit organizations market, especially in Serbia.

  6. Jim Virdee, the new spokesperson of CMS

    CERN Multimedia

    2006-01-01

    Jim Virdee and Michel Della Negra. On 21 June Tejinder 'Jim' Virdee was elected by the CMS collaboration as its new spokesperson, his 3-year term of office beginning in January 2007. He will take over from Michel Della Negra, who has been CMS spokesperson since its formalization in 1992. Three distinguished physicists stood as candidates for this election: Dan Green from Fermilab, programme manager of the US-CMS collaboration and coordinator of the CMS Hadron Calorimeter project; Jim Virdee from Imperial College London and CERN, deputy spokesperson of CMS since 1993; Gigi Rolandi from the University of Trieste and CERN, ex-Aleph spokesperson and currently involved in the preparations of the physics analyses to be done with CMS. On the early evening of 21 June, 141 of the 142 members of the CMS collaboration board, some represented by proxies, took part in a secret ballot. After two rounds of voting Jim Virdee was elected as spokesperson with a clear majority. Jim thanked the CMS collaboration 'for putting conf...

  7. Guido Tonelli elected next CMS spokesperson

    CERN Multimedia

    2009-01-01

    Guido Tonelli has been elected as the next CMS spokesperson. He will take over from Jim Virdee on January 1, 2010, and will head the collaboration through the first crucial year of data-taking. Guido Tonelli, CMS spokesperson-elect, in the CMS cavern. "It will be very tough and there will be enormous pressure," explains Guido Tonelli, CMS spokesperson-elect. "It will be the first time that CMS will run for a whole year so it is important to go through the checklist to be able to take good quality data." Tonelli, who is currently CMS Deputy spokesperson, will take over from Jim Virdee on January 1, 2010 – only a few months into CMS’s first full year of data-taking. "The collisions will probably be different to our expectations. So it’s going to take the effort of the entire collaboration worldwide to be ready for this new phase." Born in Italy, Tonelli originally studied at the University of Pisa, where he is now a Professo...

  8. Status Report on Speech Research, 1 April-30 June 1982.

    Science.gov (United States)

    1982-01-01

    speech researchers to take a closer look at psychophysics. Thus, Macmillan, Kaplan, and Creelman (1977) attempted to fit categorical perception into the... published by Kaplan, Macmillan, and Creelman (1978), which make a fair comparison between the two tasks possible. Crowder also made the interstimulus... comparison of different paradigms for nonspeech discrimination (pure tone frequency or phase relationships) was conducted by Creelman and Macmillan (1979

  9. Analysis of speech: a reflection on health research

    Directory of Open Access Journals (Sweden)

    Laura Christina Macedo

    2008-01-01

    In this study, we take speech and writing as discursive construction, indicating the reasons for making it the object of analysis and introducing different instruments to achieve this. We highlight the importance of discourse analysis for the development of health research, since this method enables the interpretation of reality from a text or texts, revealing the subjects of production and their interpretation, as well as the context of their production. The historical construction of contradictions, continuities and ruptures that make discourse a social practice is unveiled. Discourse analysis is considered a means of eliciting the implied meaning in speech and writing and, thus, as another approach to the health-disease process. Therefore, this reflection aims to incorporate Discourse Analysis into the health area, emphasizing this method as a significant contribution to Social Sciences.

  10. 28 November 2013 - N. N. Kudryavtsev, Rector of the Moscow Institute of Physics and Technology, Russia, signing an Agreement and the Guest Book with CERN Director-General R. Heuer; visiting the ATLAS cavern with ATLAS Deputy Spokesperson B. Heinemann and visiting the LHC tunnel at Point 1 with A. Erokhin of the AGH University of Science and Technology. M. Savino, Physics Department, Joint Institute for Nuclear Research, also present.

    CERN Multimedia

    Jean-Claude Gadmer

    2013-01-01

    28 November 2013 - N. N. Kudryavtsev, Rector of the Moscow Institute of Physics and Technology, Russia, signing an Agreement and the Guest Book with CERN Director-General R. Heuer; visiting the ATLAS cavern with ATLAS Deputy Spokesperson B. Heinemann and visiting the LHC tunnel at Point 1 with A. Erokhin of the AGH University of Science and Technology. M. Savino, Physics Department, Joint Institute for Nuclear Research, also present.

  11. 10 September 2013 - Italian Minister for Economic Development F. Zanonato visiting the ATLAS cavern with Collaboration Spokesperson D. Charlton and Italian scientists F. Gianotti and A. Di Ciaccio; signing the guest book with CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci; in the LHC tunnel with S. Bertolucci, Technology Deputy Department Head L. Rossi and Engineering Department Head R. Saban; visiting CMS cavern with Scientists G. Rolandi and P. Checchia.

    CERN Multimedia

    Jean-Claude Gadmer

    2013-01-01

    10 September 2013 - Italian Minister for Economic Development F. Zanonato visiting the ATLAS cavern with Collaboration Spokesperson D. Charlton and Italian scientists F. Gianotti and A. Di Ciaccio; signing the guest book with CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci; in the LHC tunnel with S. Bertolucci, Technology Deputy Department Head L. Rossi and Engineering Department Head R. Saban; visiting CMS cavern with Scientists G. Rolandi and P. Checchia.

  12. 29 January 2009 - Italian Minister for Foreign Affairs F. Frattini, visiting the ATLAS experimental area with Director-General R. Heuer and Collaboration Spokesperson P. Jenni.

    CERN Multimedia

    Maximilien Brice

    2009-01-01

    Present during the ATLAS underground visit: Dr Fabiola Gianotti, ATLAS Collaboration Deputy Spokesperson and Spokesperson Designate; Dr Monica Pepe-Altarelli, LHCb Collaboration CERN Team Leader; Prof. Guido Tonelli, CMS Collaboration Deputy Spokesperson; Prof. Roberto Petronzio, INFN President. CERN participants present in the audience during the presentations by the Director-General R. Heuer and by Prof. Antonino Zichichi, ALICE Collaboration, University of Bologna: Prof. Sergio Bertolucci, Director for Research and Scientific Computing; Prof. Felicitas Pauss, Coordinator for External Relations; Prof. Carlo Rubbia, CERN Former Director-General, Nobel Prize in Physics 1984; Dr Jurgen Schukraft, ALICE Collaboration Spokesperson. Members of the delegation in the audience: Ambassador to the UN, H. Exc. Mr Caracciolo di Vetri; Ambassador Alain G.M. Economides, Head of Cabinet; Prof. Antonio Bettanini, Adviser to the Hon. Minister for Institutional Relations; On. Mario Pescante and Min. Plen Maurizio Mas...

  13. Speech Research: A Report on the Status and Progress of Studies on the Nature of Speech , Instrumentation for Its Investigation, and Practical Applications, 1 October-31 December 1971.

    Science.gov (United States)

    Turney, Michael T.; And Others

    This report on speech research contains papers describing experiments involving both information processing and speech production. The papers concerned with information processing cover such topics as peripheral and central processes in vision, separate speech and nonspeech processing in dichotic listening, and dichotic fusion along an acoustic…

  14. New spokesperson for the LHCb collaboration

    CERN Multimedia

    Katarina Anthony

    2011-01-01

    Pierluigi Campana begins his 3-year tenure as LHCb spokesperson this June. As the new voice for the collaboration, Campana will lead the experiment through what should prove to be a very exciting phase.   Pierluigi Campana, from the Istituto Nazionale di Fisica Nucleare in Frascati, has been with the collaboration since 2000 and was heavily involved in the construction of the muon chamber of the LHCb detector. He replaces Andrei Golutvin, from Imperial College London and Russia’s Institute for Theoretical and Experimental Physics. “Leading such a large collaboration is not an easy task,” says Campana. While he will rely heavily upon the work of his predecessor, he plans on leaving his own mark on the position: “One of the main goals of my job will be to enhance the spirit of collaboration between the different institutes within our experiment.” LHCb plays a key role in the search for new physics. The experiment is conducting a very precise search...

  15. Speech and other modalities in the office environment: Some research results

    NARCIS (Netherlands)

    Nes, van F.L.; Bullinger, H.-J.

    1991-01-01

    Research was carried out on the application of speech in three areas of man-computer communication: instruction, voice commands for system control and annotation of documents. As to instruction, learning was found to proceed equally fast with speech and written text; a number of subjects preferred

  16. Speech Compression

    Directory of Open Access Journals (Sweden)

    Jerry D. Gibson

    2016-06-01

    Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.
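
    The abstract above centres on the linear prediction model of speech coding. As an illustrative aside (not taken from the article), the following Python sketch estimates linear-prediction coefficients for a single frame with the autocorrelation method; the synthetic frame, model order and sampling rate are arbitrary choices for the example.

      import numpy as np
      from scipy.linalg import solve_toeplitz

      def lpc_autocorrelation(frame, order):
          """Estimate coefficients a[1..order] so that x[n] ~= sum_k a[k] * x[n-k]."""
          # Autocorrelation of the frame up to lag `order`
          r = np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(order + 1)])
          # Solve the symmetric Toeplitz normal equations R a = r[1:]
          return solve_toeplitz((r[:-1], r[:-1]), r[1:])

      # Synthetic voiced-like frame: two sinusoids plus a little noise, Hamming-windowed
      fs = 8000
      t = np.arange(0, 0.032, 1 / fs)
      frame = np.sin(2 * np.pi * 100 * t) + 0.3 * np.sin(2 * np.pi * 700 * t)
      frame = (frame + 0.01 * np.random.randn(len(t))) * np.hamming(len(t))

      print("LPC coefficients:", np.round(lpc_autocorrelation(frame, order=10), 3))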

  17. Research and development of a versatile portable speech prosthesis

    Science.gov (United States)

    1981-01-01

    The Versatile Portable Speech Prosthesis (VPSP), a synthetic speech output communication aid for non-speaking people, is described. It was intended initially for severely physically limited people with cerebral palsy who are in electric wheelchairs. Hence, it was designed to be placed on a wheelchair and powered from a wheelchair battery. It can easily be separated from the wheelchair. The VPSP is versatile because it is designed to accept any means of single switch, multiple switch, or keyboard control which physically limited people have the ability to use. It is portable because it is mounted on and can go with the electric wheelchair. It is a speech prosthesis, obviously, because it speaks with a synthetic voice for people unable to speak with their own voices. Both hardware and software are described.

  18. [The speech therapist in geriatrics: caregiver, technician-researcher, or both?].

    Science.gov (United States)

    Orellana, Blandine

    2015-01-01

    Geriatric care mostly consists not in curing the patient, but in supporting them to the end of their life, giving meaning to care procedures and actions through speech, touch or look, and maintaining a connection. The helping relationship is omnipresent and the role of the speech therapist is therefore essential in helping to maintain or re-establish elderly patients' ability to communicate. However, today this role is struggling to define itself between that of the technician-researcher and that of caregiver.

  19. Evaluation of speech recognizers for use in advanced combat helicopter crew station research and development

    Science.gov (United States)

    Simpson, Carol A.

    1990-01-01

    The U.S. Army Crew Station Research and Development Facility uses vintage 1984 speech recognizers. An evaluation was performed of newer off-the-shelf speech recognition devices to determine whether newer technology performance and capabilities are substantially better than that of the Army's current speech recognizers. The Phonetic Discrimination (PD-100) Test was used to compare recognizer performance in two ambient noise conditions: quiet office and helicopter noise. Test tokens were spoken by males and females and in isolated-word and connected-word mode. Better overall recognition accuracy was obtained from the newer recognizers. Recognizer capabilities needed to support the development of human factors design requirements for speech command systems in advanced combat helicopters are listed.

  20. Collaborative learning for public relations: Frame analysis in training for spokespersons

    Directory of Open Access Journals (Sweden)

    Sergio Álvarez Sánchez

    2018-05-01

    The collaborative model for learning implies students forming teams in order to reach a common goal. The objectives of this research are both exploring the impact of the collaborative model on the performance of those learners who study contents related to the formation of spokespersons for organizations; and evaluating the potential of frame analysis as a content for training in public relations. To delve into those issues, a case study exercise was administered to six groups of students of the “Training for Spokespersons” subject, consisting of analyzing the audiovisual intervention of a spokesperson talking on behalf of a strike committee, and answering questions about target publics and frames of reference. The exercise succeeded in helping the students understand the role of emotional communication; however, they still got slightly confused about frame analysis and its link with the concept of social norm. For future research, it becomes necessary to focus on moving further away from classic master classes, as well as using cases that students feel are closer to their interests. With respect to frame analysis, the results encourage the teaching of more precise classifications in terms of general frames about a certain topic, and specific frames about particular situations.

  1. Fabiola Gianotti, the newly elected Spokesperson of ATLAS

    CERN Multimedia

    2008-01-01

    On 11 July Fabiola Gianotti was elected by the ATLAS Collaboration as its future Spokesperson. Her term of office will start on 1 March 2009 and will last for two years. She will take over from Peter Jenni who has been ATLAS Spokesperson since its formalization in 1992. Three distinguished physicists stood as candidates for this election: Fabiola Gianotti (CERN), Marzio Nessi (CERN), and Leonardo Rossi (INFN Genova, Italy). The nomination process started on 30 October 2007, with a general email sent to the ATLAS collaboration calling for nominations, and closed on 25 January 2008. Any ATLAS physicist could nominate a candidate, and 24 nominees were proposed before the ATLAS search committee narrowed them to the final three. After the voting process, which concluded the ATLAS general meeting in Bern, the Collaboration Board greeted the result with warm applause.

  2. Narrative Exemplars and the Celebrity Spokesperson in Lebanese Anti-Domestic Violence Public Service Announcements.

    Science.gov (United States)

    El-Khoury, Jessica R; Shafer, Autumn

    2016-08-01

    Domestic violence is a worldwide epidemic. This study examines the effects of narrative exemplars and a celebrity spokesperson in anti-domestic violence ads on Lebanese college students' attitudes and beliefs towards domestic violence and whether these effects are impacted by personal experience. The practical significance is derived from the high prevalence of domestic violence internationally, making it important to find ways to effectively use media to address this health-related issue that has huge consequences for the individual and society. This study adds to the theoretical understanding of narrative persuasion and media effects. Results indicated that narrative exemplars in anti-domestic violence ads promoting bystander awareness and intervention were more beneficial for people without relevant experience compared to people who know someone affected by domestic violence. Anti-domestic violence ads without narrative exemplars, but that also featured an emotional self-efficacy appeal targeting bystanders, were more effective for participants who know someone who had experienced domestic violence compared to participants without relevant experience. The presence of a celebrity spokesperson elicited more positive attitudes about the ad than a noncelebrity, but failed to directly affect relevant anti-domestic violence attitudes or beliefs. These results highlight the significance of formative audience research in health communication message design.

  3. From persuasive to authoritative speech genres: Writing accounting research for a practitioner audience

    NARCIS (Netherlands)

    Norreklit, Hanne; Scapens, Robert W.

    2014-01-01

    Purpose - The purpose of this paper is to contrast the speech genres in the original and the published versions of an article written by academic researchers and published in the US practitioner-oriented journal, Strategic Finance. The original version, submitted by the researchers, was rewritten by

  4. Introductory speeches

    International Nuclear Information System (INIS)

    2001-01-01

    This CD is a multimedia presentation of the safety upgrading programme of the Bohunice V1 NPP. This chapter consists of an introductory commentary and 4 introductory speeches (video records): (1) Introductory speech of Vincent Pillar, Board chairman and director general of Slovak electric, Plc. (SE); (2) Introductory speech of Stefan Schmidt, director of SE - Bohunice Nuclear power plants; (3) Introductory speech of Jan Korec, Board chairman and director general of VUJE Trnava, Inc. - Engineering, Design and Research Organisation, Trnava; (4) Introductory speech of Dietrich Kuschel, Senior vice-president of FRAMATOME ANP Project and Engineering

  5. [Potential analysis of research on speech therapy-led communication training in aphasia following stroke].

    Science.gov (United States)

    Kempf, Sabrina; Lauer, Norina; Corsten, Sabine; Voigt-Radloff, Sebastian

    2014-01-01

    In Germany, about 100,000 people currently suffer from aphasia. This speech disorder occurs as a result of neurologic events such as stroke or traumatic brain injury. Aphasia causes major limitations in social participation and quality of life and can be associated with unemployability and social isolation. For affected persons, it is essential to regain and maintain autonomy in daily life, both at work and with family and friends. The loss of autonomy is perceived much more dramatically than the loss of speech. Clients wish to minimise this loss of autonomy in daily life. As full recovery is not achievable in chronic aphasia, treatment must focus on improved compensatory approaches and on supporting the clients' coping strategies. Based on eight randomised comparisons including 347 participants, a recent Cochrane review (Brady et al., 2012) revealed that speech therapy - as compared with no treatment - had positive effects on functional communication in clients suffering from aphasia (0.30 SMD; 95% CI[0.08 to 0.52]). There was no evidence suggesting that one type of training was superior to the others. However, quality of life and social participation were not evaluated as outcomes. Recent studies found that speech therapy-led training for communication and self-efficacy and the integration of communication partners may have a positive impact on these client-centred outcomes. Speech therapy-led training for communication within a group setting should be manualised and pilot-tested with respect to feasibility and acceptance in a German sample of people with aphasia and their communication partners. Instruments measuring quality of life and social participation can be validated within the scope of this feasibility study. These research efforts are necessary to prepare a large-scale comparative effectiveness research trial comparing the effects of both usual speech therapy and speech therapy-led group communication training on quality of life and social participation

  6. Speech perception under adverse conditions: Insights from behavioral, computational and neuroscience research

    Directory of Open Access Journals (Sweden)

    Sara eGuediche

    2014-01-01

    Adult speech perception reflects the long-term regularities of the native language, but it is also flexible such that it accommodates and adapts to adverse listening conditions and short-term deviations from native-language norms. The purpose of this review article is to examine how the broader neuroscience literature can inform and advance research efforts in understanding the neural basis of flexibility and adaptive plasticity in speech perception. In particular, we consider several domains of neuroscience research that offer insight into how perception can be adaptively tuned to short-term deviations while maintaining the long-term learned regularities for mapping sensory input. We review several literatures to highlight the potential role of learning algorithms that rely on prediction error signals and discuss specific neural structures that are likely to contribute to such learning. Already, a few studies have alluded to a potential role of these mechanisms in adaptive plasticity in speech perception. Better understanding the application and limitations of these algorithms for the challenges of flexible speech perception under adverse conditions promises to inform theoretical models of speech.

  7. Paradigms, pragmatism and possibilities: mixed-methods research in speech and language therapy.

    Science.gov (United States)

    Glogowska, Margaret

    2011-01-01

    After the decades of the so-called 'paradigm wars' in social science research methodology and the controversy about the relative place and value of quantitative and qualitative research methodologies, 'paradigm peace' appears to have now been declared. This has come about as many researchers have begun to take a 'pragmatic' approach in the selection of research methodology, choosing the methodology best suited to answering the research question rather than conforming to a methodological orthodoxy. With the differences in the philosophical underpinnings of the two traditions set to one side, an increasing awareness, and valuing, of the 'mixed-methods' approach to research is now present in the fields of social, educational and health research. To explore what is meant by mixed-methods research and the ways in which quantitative and qualitative methodologies and methods can be combined and integrated, particularly in the broad field of health services research and the narrower one of speech and language therapy. The paper discusses the ways in which methodological approaches have already been combined and integrated in health services research and speech and language therapy, highlighting the suitability of mixed-methods research for answering the typically multifaceted questions arising from the provision of complex interventions. The challenges of combining and integrating quantitative and qualitative methods and the barriers to the adoption of mixed-methods approaches are also considered. The questions about healthcare, as it is being provided in the 21st century, call for a range of methodological approaches. This is particularly the case for human communication and its disorders, where mixed-methods research offers a wealth of possibilities. In turn, speech and language therapy research should be able to contribute substantively to the future development of mixed-methods research. © 2010 Royal College of Speech & Language Therapists.

  8. Five Decades of Research in Speech Motor Control: What Have We Learned, and Where Should We Go from Here?

    Science.gov (United States)

    Perkell, Joseph S.

    2013-01-01

    Purpose: The author presents a view of research in speech motor control over the past 5 decades, as observed from within Ken Stevens's Speech Communication Group (SCG) in the Research Laboratory of Electronics at MIT. Method: The author presents a limited overview of some important developments and discoveries. The perspective is based…

  9. The Experimental Social Scientific Model in Speech Communication Research: Influences and Consequences.

    Science.gov (United States)

    Ferris, Sharmila Pixy

    A substantial number of published articles in speech communication research today are experimental/social scientific in nature. It is only in the past decade that scholars have begun to put the history of communication under the lens. Early advocates of the adoption of the method of social scientific inquiry were J. A. Winans, J. M. O'Neill, and C.…

  10. Report of the Research Priorities Division of the Speech Communication Association.

    Science.gov (United States)

    Bitzer, Lloyd F.; And Others

    A wide variety of topics are discussed in relation to research needs and classified in relation to problem areas, decision-making areas, and recommendations. Areas under discussion include an examination of the decision-making structure of the Speech Communication Association, criteria by which decisions can be evaluated, conceptualizing the…

  11. Grounded Theory as a Method for Research in Speech and Language Therapy

    Science.gov (United States)

    Skeat, J.; Perry, A.

    2008-01-01

    Background: The use of qualitative methodologies in speech and language therapy has grown over the past two decades, and there is now a body of literature, both generally describing qualitative research, and detailing its applicability to health practice(s). However, there has been only limited profession-specific discussion of qualitative…

  12. Effects of apologies and crisis responsibility on corporate and spokesperson reputation

    NARCIS (Netherlands)

    Verhoeven, Joost W.M.; van Hoof, Joris Jasper; ter Keurs, Han; van Vuuren, Hubrecht A.

    2012-01-01

    This study examines the effects of making apologies in a crisis situation, and of attributed crisis responsibility, on corporate and spokesperson reputation. In a 2 × 2 scenario experiment (spokesperson making apologies versus no apologies; and accidental versus preventable crisis), 84 respondents

  13. When in Rome? The Effects of Spokesperson Ethnicity on Audience Evaluation of Crisis Communication.

    Science.gov (United States)

    Arpan, Laura M.

    2002-01-01

    Examines the effects of using organizational spokespersons of ethnic backgrounds similar to or different from possible stakeholders of a multinational organization. Finds that the degree to which undergraduate students identified with their own ethnic group affected spokesperson similarity ratings. Discusses implications for multinational…

  14. Federico Antinori elected as the new ALICE Spokesperson

    CERN Multimedia

    Iva Raynova

    2016-01-01

    On 8 April 2016 the ALICE Collaboration Board elected Federico Antinori from INFN Padova (Italy) as the new ALICE Spokesperson.   During his three-year mandate, starting in January 2017, he will lead a collaboration of more than 1500 people from 154 physics institutes across the globe. Antinori has been a member of the collaboration ever since it was created and he has already held many senior leadership positions. Currently he is the experiment’s Physics Coordinator and as such he has the responsibility to overview the whole sector of physics analysis. During his mandate ALICE has produced many of its most prominent results. Before that he was the Coordinator of the Heavy Ion First Physics Task Force, charged with the analysis of the first Pb-Pb data samples. In 2007 and 2008 Federico served as ALICE Deputy Spokesperson. He was also the first ALICE Trigger Coordinator, having a central role in defining the experiment’s trigger menus from the first run in 2009 until the end of...

  15. Population Health in Pediatric Speech and Language Disorders: Available Data Sources and a Research Agenda for the Field.

    Science.gov (United States)

    Raghavan, Ramesh; Camarata, Stephen; White, Karl; Barbaresi, William; Parish, Susan; Krahn, Gloria

    2018-05-17

    The aim of the study was to provide an overview of population science as applied to speech and language disorders, illustrate data sources, and advance a research agenda on the epidemiology of these conditions. Computer-aided database searches were performed to identify key national surveys and other sources of data necessary to establish the incidence, prevalence, and course and outcome of speech and language disorders. This article also summarizes a research agenda that could enhance our understanding of the epidemiology of these disorders. Although the data yielded estimates of prevalence and incidence for speech and language disorders, existing sources of data are inadequate to establish reliable rates of incidence, prevalence, and outcomes for speech and language disorders at the population level. Greater support for inclusion of speech and language disorder-relevant questions is necessary in national health surveys to build the population science in the field.

  16. Status report on speech research. A report on the status and progress of studies on the nature of speech, instrumentation for its investigation, and practical applications

    Science.gov (United States)

    Liberman, A. M.

    1985-10-01

    This interim status report on speech research discusses the following topics: On Vagueness and Fictions as Cornerstones of a Theory of Perceiving and Acting: A Comment on Walter (1983); The Informational Support for Upright Stance; Determining the Extent of Coarticulation: Effects of Experimental Design; The Roles of Phoneme Frequency, Similarity, and Availability in the Experimental Elicitation of Speech Errors; On Learning to Speak; The Motor Theory of Speech Perception Revised; Linguistic and Acoustic Correlates of the Perceptual Structure Found in an Individual Differences Scaling Study of Vowels; Perceptual Coherence of Speech: Stability of Silence-cued Stop Consonants; Development of the Speech Perceptuomotor System; Dependence of Reading on Orthography: Investigations in Serbo-Croatian; The Relationship between Knowledge of Derivational Morphology and Spelling Ability in Fourth, Sixth, and Eighth Graders; Relations among Regular and Irregular, Morphologically-Related Words in the Lexicon as Revealed by Repetition Priming; Grammatical Priming of Inflected Nouns by the Gender of Possessive Adjectives; Grammatical Priming of Inflected Nouns by Inflected Adjectives; Deaf Signers and Serial Recall in the Visual Modality: Memory for Signs, Fingerspelling, and Print; Did Orthographies Evolve?; The Development of Children's Sensitivity to Factors Influencing Vowel Reading.

  17. Status Report on Speech Research, July-December 1981.

    Science.gov (United States)

    1981-01-01

    there is a difference in the intercept of the best straight-line fit for /i-u/ and /u-u/ cases; that is, rounding for the second vowel begins earlier... [figure residue removed: schematic of a duplex-producing binaural presentation, with the base signal to one ear and the isolated transitions to the other] ...frequency modulation, particularly as they relate to classical auditory phenomena such as beats and periodicity pitch. In general, however, research on

  18. The applicability of normalisation process theory to speech and language therapy: a review of qualitative research on a speech and language intervention.

    Science.gov (United States)

    James, Deborah M

    2011-08-12

    The Bercow review found a high level of public dissatisfaction with speech and language services for children. Children with speech, language, and communication needs (SLCN) often have chronic complex conditions that require provision from health, education, and community services. Speech and language therapists are a small group of Allied Health Professionals with a specialist skill-set that equips them to work with children with SLCN. They work within and across the diverse range of public service providers. The aim of this review was to explore the applicability of Normalisation Process Theory (NPT) to the case of speech and language therapy. A review of qualitative research on a successfully embedded speech and language therapy intervention was undertaken to test the applicability of NPT. The review focused on two of the collective action elements of NPT (relational integration and interaction workability) using all previously published qualitative data from both parents and practitioners' perspectives on the intervention. The synthesis of the data based on the Normalisation Process Model (NPM) uncovered strengths in the interpersonal processes between the practitioners and parents, and weaknesses in how the accountability of the intervention is distributed in the health system. The analysis based on the NPM uncovered interpersonal processes between the practitioners and parents that were likely to have given rise to successful implementation of the intervention. In previous qualitative research on this intervention where the Medical Research Council's guidance on developing a design for a complex intervention had been used as a framework, the interpersonal work within the intervention had emerged as a barrier to implementation of the intervention. It is suggested that the design of services for children and families needs to extend beyond the consideration of benefits and barriers to embrace the social processes that appear to afford success in embedding

  19. Speech Research.

    Science.gov (United States)

    1979-12-31

    629. Mattis, S., French, J. H., & Rapin, I. Dyslexia in children and young adults: Three independent neuropsychological syndromes. Developmental... Knights & D. K. Bakker (Eds.), Neuropsychology of learning disorders: Theoretical approaches. Baltimore: University Park Press, 1976. Shankweiler... sounds connected with comfort, discomfort, and hunger. When babbling appears, it is mixed in with cooing but distinguished by its syllable-like

  20. Digital speech processing using Matlab

    CERN Document Server

    Gopi, E S

    2014-01-01

    Digital Speech Processing Using Matlab deals with digital speech pattern recognition, speech production model, speech feature extraction, and speech compression. The book is written in a manner that is suitable for beginners pursuing basic research in digital speech processing. Matlab illustrations are provided for most topics to enable better understanding of concepts. This book also deals with the basic pattern recognition techniques (illustrated with speech signals using Matlab) such as PCA, LDA, ICA, SVM, HMM, GMM, BPN, and KSOM.

  1. Status Report on Speech Research. A Report on the Status and Progress of Studies on the Nature of Speech, Instrumentation for Its Investigation, and Practical Applications.

    Science.gov (United States)

    1983-01-01

    Unpublished data. 2. W _r, S. A. Fingerspelling by computer (Technical Report 212). Institute for Mathematical Studies in the Social Sciences, Stanford... Report on Speech Research, 1980, SR-61, 135-150. Meadow, K. P. Early manual communication in relation to the deaf child’s intellectual, social, and... aggregates cannot be understood in terms of extrapolations from so-called simple circuits. As we remarked earlier in this paper, constructionism breaks

  2. Unique Contributors to the Curriculum: From Research to Practice for Speech-Language Pathologists in Schools.

    Science.gov (United States)

    Powell, Rachel K

    2018-04-05

    This lead article of the Clinical Forum focuses on the research that supports why speech-language pathologists (SLPs) are an integral part of the overarching curriculum for all students in schools. Focus on education has shifted to student performance in our global world, specifically in college and career readiness standards. This article reviews recommendations on best practice from the American Speech-Language-Hearing Association on SLPs' roles in schools, as well as data on school-based services. Implementation of these practices as it is applicable to school initiatives will be explored. Methods of interventions available in schools, from general education to special education, will be discussed based on national guidelines for a Response to Intervention and Multi-Tiered System of Support. Research regarding teacher knowledge of the linguistic principles of reading instruction will be explored, as well as correlation between teacher knowledge and student performance. The implications for how SLPs as the linguistic experts offer unique roles in curriculum and the evidence available to support this role will be explored. Implications for future research needs will be discussed. The demands of a highly rigorous curriculum allow SLPs a unique opportunity to apply their knowledge in linguistic principles to increase student performance and achievement. With the increased focus on student achievement, growth outcome measures, and value-added incentives, it is critical that SLPs become contributors to the curriculum for all students and that data to support this role are gathered through focused research initiatives.

  3. A Research of Speech Emotion Recognition Based on Deep Belief Network and SVM

    Directory of Open Access Journals (Sweden)

    Chenchen Huang

    2014-01-01

    Feature extraction is a very important part of speech emotion recognition. Addressing this problem, this paper proposed a new feature extraction method that uses deep belief networks (DBNs), a type of deep neural network, to extract emotional features from the speech signal automatically. A 5-layer DBN was trained to extract speech emotion features, and multiple consecutive frames were combined to form a high-dimensional feature. The features learned by the DBN were the input of a nonlinear SVM classifier, and finally a speech emotion recognition multi-classifier system was achieved. The speech emotion recognition rate of the system reached 86.5%, which was 7% higher than that of the original method.
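
    The abstract above describes a pipeline in which a deep belief network learns emotion-related features that are then classified by a nonlinear SVM. The Python sketch below is not the authors' implementation; it only illustrates the general shape of such a pipeline in scikit-learn, with two stacked Bernoulli RBMs standing in for the 5-layer DBN and random numbers standing in for real stacked-frame speech features.

      import numpy as np
      from sklearn.neural_network import BernoulliRBM
      from sklearn.pipeline import Pipeline
      from sklearn.preprocessing import MinMaxScaler
      from sklearn.svm import SVC

      rng = np.random.RandomState(0)
      X = rng.rand(200, 39)                 # placeholder: 200 frames x 39 spectral features
      y = rng.randint(0, 4, size=200)       # placeholder: 4 emotion classes

      pipeline = Pipeline([
          ("scale", MinMaxScaler()),        # RBMs expect inputs in [0, 1]
          ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
          ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
          ("svm", SVC(kernel="rbf", C=1.0)),  # nonlinear classifier on the learned features
      ])
      pipeline.fit(X, y)
      print("Training accuracy on toy data:", pipeline.score(X, y))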

  4. New CMS spokesperson: “An honour to be chosen to lead a spectacular collection of people”

    CERN Multimedia

    Achintya Rao

    2016-01-01

    Fermilab’s Joel Butler will take the reins of the CMS collaboration in September, after having been elected as its new spokesperson during the last CMS Week.   Joel Butler, new CMS spokesperson. (Image: Reidar Hahn/Fermilab) On 10 February, members of the CMS Collaboration Board, the “parliament” of the collaboration, held a ballot to appoint their next leader. The Board chose Joel Butler, who brings a wealth of experience – more than thirty years at Fermilab and more than ten of those with CMS – to this important management role, leading a collaboration of 3000 people from across the globe. High on Joel’s priority list is making sure that all collaborators are able to participate in the collaboration’s research easily and to the best of their abilities: “We need everybody to be involved in CMS, whether they’re big or small institutions,” he says in his office in CERN’s Building ...

  5. Speech-language pathology research in the Philippines in retrospect: Perspectives from a developing country.

    Science.gov (United States)

    Bondoc, Ivan Paul; Mabag, Viannery; Dacanay, Clarisse Anne; Macapagal, Natasha Daryle

    2017-12-01

    There is a need for speech-language pathology (SLP) research in the Philippines in order to fill knowledge gaps relevant to the local context, yet information about the local SLP research status remains inadequate. This study describes local SLP research conducted over almost four decades. Using a descriptive retrospective design, a search was made for all empirical research articles completed by Filipino SLPs from 1978 to 2015. A total of 250 research articles were identified and described along several parameters. A predominant number were authored by SLPs in academia (97.20%). There was a focus on language (27.60%) and the nature of communication/swallowing disorders (20.80%). More than half used quantitative exploratory research designs (69.20%), and several used survey forms to generate data (38.41%). Nearly all were unpublished (93.60%) and unfunded (94.80%). The study revealed a dearth of research studies, limited diversity of research articles, limited research dissemination, and funding concerns. The results can serve as a reference point for restructuring research systems in the Philippines and in other developing countries, and offer data that can be used to develop a research agenda for the profession.

  6. Research Paper: Investigation of Acoustic Characteristics of Speech Motor Control in Children Who Stutter and Children Who Do Not Stutter

    Directory of Open Access Journals (Sweden)

    Fatemeh Fakar Gharamaleki

    2016-11-01

    Full Text Available Objective: Stuttering is a developmental disorder of speech fluency with unknown causes. One of the proposed theories in this field is a deficit in speech motor control, associated with impaired control, timing, and coordination of the speech muscles. Fundamental frequency, fundamental frequency range, intensity, intensity range, and voice onset time are the most important acoustic components often used for indirect evaluation of the physiological functions underlying speech motor control. The purpose of this investigation was to compare some of the acoustic characteristics of speech motor control in children who stutter and children who do not stutter. Materials & Methods: This is a descriptive-analytic, cross-sectional comparative study. A total of 25 Azari-Persian bilingual boys who stutter (stutterers group) and 23 Azari-Persian bilingual and 21 Persian monolingual boys who do not stutter (non-stutterers group), aged 6 to 10 years, participated in this study. Children performed /a/ and /i/ vowel prolongation and carrier phrase repetition tasks for the analysis of acoustic characteristics including fundamental frequency, fundamental frequency range, intensity, intensity range, and voice onset time. The PRAAT software was used for acoustic analysis, and SPSS (version 17), one-way ANOVA, and the Kruskal-Wallis test were used for analyzing the data. Results: There were no significant differences between the stutterers and non-stutterers groups (P>0.05) with respect to the acoustic features of speech motor control. Conclusion: No significant group differences were observed in any of the dependent variables reported in this study. Thus, the results of this research do not support the notion of aberrant speech motor control in children who stutter.
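
    A hedged sketch of the kind of measures the study extracts with PRAAT (fundamental frequency, F0 range, intensity, intensity range), here approximated with librosa on a synthetic sustained-vowel signal; voice onset time, which needs burst and voicing landmarks, is not computed. This is not the authors' analysis script.

        import numpy as np
        import librosa

        sr = 16000
        t = np.linspace(0, 1.0, sr, endpoint=False)
        y = np.sin(2 * np.pi * 150 * t)      # stand-in for a sustained /a/ near 150 Hz

        # F0 track via probabilistic YIN; unvoiced frames come back as NaN.
        f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
        f0 = f0[~np.isnan(f0)]

        # Frame-level intensity approximated by RMS energy, converted to dB.
        rms = librosa.feature.rms(y=y)[0]
        intensity_db = 20 * np.log10(rms + 1e-10)

        print("mean F0 (Hz):", f0.mean())
        print("F0 range (Hz):", f0.max() - f0.min())
        print("mean intensity (dB):", intensity_db.mean())
        print("intensity range (dB):", intensity_db.max() - intensity_db.min())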

  7. Celebrity endorsements versus created spokespersons in advertising: a survey among students

    Directory of Open Access Journals (Sweden)

    Delarey Van der Waldt

    2011-08-01

    Full Text Available This study investigated the use of endorsements in advertising. Endorsements can take the form of a celebrity acting as a spokesperson for an organisation, or the organisation can create a spokesperson to act as an endorser. The problem facing marketers is that little scientific evidence exists on whether students perceive celebrity endorsers and created spokespersons differently with regard to their expertise and trustworthiness. The aim of this study was to determine the attitudes of respondents with regard to the expertise, trustworthiness and attractiveness of created spokespersons and celebrity endorsers in advertisements. This knowledge provides marketing professionals with a strategic advantage in deciding how and when to make use of an endorser. Ohanian's (1990) measurement scale of perceived expertise, trustworthiness and attractiveness was adopted in a self-administered questionnaire. Respondents (n=185) were exposed to six visual images of endorsers: three celebrities and three created spokespersons. It was found that attractiveness should not be used as a factor when comparing created endorsers with celebrity endorsers. The respondents perceived both endorsement applications as highly credible, and professionals need to consider each application's advantages and disadvantages when deciding which will be more effective for their advertising strategy. In the long term the organisation might find it more cost effective to create its own spokesperson, given the risk of celebrity endorsers changing character or attracting negative associations. Revoking advertisements after celebrity endorsers have received negative publicity or changed character can lead to great financial losses. Created endorsers, on the other hand, provide the organisation with greater control and the ability to adapt to the organisation's market and advertising needs.

  8. Research of Features of the Phonetic System of Speech and Identification of Announcers on the Voice

    Directory of Open Access Journals (Sweden)

    Roman Aleksandrovich Vasilyev

    2013-02-01

    Full Text Available This work proposes a method of phonetic analysis of speech: extracting a list of elementary speech units, such as individual phonemes, from a continuous stream of a specific announcer's informal conversation. A practical algorithm for identifying the announcer — the process of determining which of a set of announcers is speaking — is described.

  9. 8 February 2010: University College London President & Provost M. Grant signing the guest book with CERN Director-General R.Heuer and Coordinator for External Relations F. Pauss; visiting the ATLAS control room with Collaboration Spokesperson F. Gianotti and Adviser for Non-Member States J. Ellis.

    CERN Multimedia

    Maximilien Brice

    2010-01-01

    Caption for photograph 1002015 01: 8 February 2010: University College London President & Provost M. Grant (6th from left) visiting ATLAS control room with, from left to right, ATLAS Deputy Spokesperson and University of Birmingham D. Charlton; UCL Head of the HEP group M. Lancaster; UCL Vice Provost for research D. Price; ATLAS Collaboration Spokesperson F. Gianotti; UCL Department of Physics and Astronomy N. Konstantinidis; UCL Head of Physics Department J. Tennyson; Head of the UCL-ATLAS group and Vice-Dean for Research in the faculty of Mathematical and Physical Sciences J. Butterworth, visiting the ATLAS control room.

  10. Speech Intelligibility

    Science.gov (United States)

    Brand, Thomas

    Speech intelligibility (SI) is important for different fields of research, engineering and diagnostics in order to quantify very different phenomena such as the quality of recordings, communication and playback devices, the reverberation of auditoria, characteristics of hearing impairment, the benefit of using hearing aids, or combinations of these.

  11. Youth Gambling Prevention: Can Public Service Announcements Featuring Celebrity Spokespersons Be Effective?

    Science.gov (United States)

    Shead, N. Will; Walsh, Kelly; Taylor, Amy; Derevensky, Jeffrey L.; Gupta, Rina

    2011-01-01

    Children and adolescents are at increased risk of developing gambling problems compared to adults. A review of successful prevention campaigns targeting drinking and driving, smoking, unprotected sex, and drug use suggests that public service announcements (PSAs) featuring celebrity spokespersons have strong potential for raising awareness of the…

  12. Use of Automated Scoring in Spoken Language Assessments for Test Takers with Speech Impairments. Research Report. ETS RR-17-42

    Science.gov (United States)

    Loukina, Anastassia; Buzick, Heather

    2017-01-01

    This study is an evaluation of the performance of automated speech scoring for speakers with documented or suspected speech impairments. Given that the use of automated scoring of open-ended spoken responses is relatively nascent and there is little research to date that includes test takers with disabilities, this small exploratory study focuses…

  13. Research Into the Use of Speech Recognition Enhanced Microworlds in an Authorable Language Tutor

    National Research Council Canada - National Science Library

    Plott, Beth

    1999-01-01

    .... Once the first microworld exercise was completed and integrated into MILT, ARI funded the investigation of the use of discrete speech recognition technology in language learning, using the microworld exercise as a basis...

  14. Acquisition of Speech Acts by Second Language Learners : Suggestion for future research on Japanese language education

    OpenAIRE

    畑佐, 由紀子

    2014-01-01

    This paper examines previous studies on the use and acquisition of speech acts by second language learners in order to identify issues that are yet to be investigated. The paper begins with a brief overview of the theoretical background for L2 speech act theory. Then, factors that affect native speakers’ choice of expressions are explained and the extent to which they are investigated in L2 pragmatic studies is considered. Thirdly, the strengths and weaknesses of methodology employed are disc...

  15. Research on the optoacoustic communication system for speech transmission by variable laser-pulse repetition rates

    Science.gov (United States)

    Jiang, Hongyan; Qiu, Hongbing; He, Ning; Liao, Xin

    2018-06-01

    For optoacoustic communication from in-air platforms to submerged apparatus, a method based on speech recognition and variable laser-pulse repetition rates is proposed, which realizes character encoding and transmission of speech. First, the theory and spectral characteristics of laser-generated underwater sound are analyzed; then character conversion and encoding for speech, as well as the pattern of codes for laser modulation, are studied; finally, experiments to verify the system design are carried out. Results show that the optoacoustic system, in which laser modulation is controlled by speech-to-character baseband codes, improves flexibility in the receiving location of underwater targets as well as the real-time performance of information transmission. In the overwater transmitter, a pulsed laser is driven by speech signals at several repetition rates randomly selected in the range of one to fifty Hz; in the underwater receiver, the laser pulse repetition rate and data are recovered from the preamble and information codes of the corresponding laser-generated sound. When the energy of the laser pulse is appropriate, real-time transmission of speaker-independent speech can be realized, which eases the problem of underwater bandwidth resources and provides a technical approach for air-sea communication.
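
    The abstract does not give the exact coding scheme, so the sketch below is only one plausible illustration of the idea: each character is announced by a preamble burst at a fixed repetition rate and then carried by an information burst whose repetition rate, within the 1-50 Hz band mentioned, identifies the symbol. All rates, durations, and the symbol set are assumptions.

        import string

        ALPHABET = string.ascii_lowercase + " "        # 27 symbols to encode (assumed set)
        RATES_HZ = list(range(2, 2 + len(ALPHABET)))   # assumed: one repetition rate per symbol
        PREAMBLE_HZ = 1                                # assumed rate announcing a new character

        def encode(text):
            # Turn text into a sequence of (repetition_rate_hz, duration_s) laser bursts.
            bursts = []
            for ch in text.lower():
                if ch not in ALPHABET:
                    continue
                bursts.append((PREAMBLE_HZ, 1.0))                    # preamble code
                bursts.append((RATES_HZ[ALPHABET.index(ch)], 1.0))   # information code
            return bursts

        def decode(bursts):
            # Recover text by reading the information burst that follows each preamble.
            chars, expect_symbol = [], False
            for rate, _duration in bursts:
                if rate == PREAMBLE_HZ:
                    expect_symbol = True
                elif expect_symbol:
                    chars.append(ALPHABET[RATES_HZ.index(rate)])
                    expect_symbol = False
            return "".join(chars)

        print(decode(encode("air sea link")))   # -> "air sea link"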

  16. Speech Alarms Pilot Study

    Science.gov (United States)

    Sandor, Aniko; Moses, Haifa

    2016-01-01

    Speech alarms have been used extensively in aviation and included in International Building Codes (IBC) and National Fire Protection Association's (NFPA) Life Safety Code. However, they have not been implemented on space vehicles. Previous studies conducted at NASA JSC showed that speech alarms lead to faster identification and higher accuracy. This research evaluated updated speech and tone alerts in a laboratory environment and in the Human Exploration Research Analog (HERA) in a realistic setup.

  17. The Sources of Authority for Shamanic Speech: Examples from the Kham-Magar of Nepal

    Directory of Open Access Journals (Sweden)

    Anne de Sales

    2016-10-01

    Full Text Available This article seeks to identify the sources of authority that allow the ritual specialists of the Kham-Magar community to act as its spokespersons with invisible partners and to speak the truth. The author partly challenges Bourdieu's view that ritual techniques such as ritual language are mainly techniques of domination. She explores, rather, the truth conditions of shamanic speech and the pragmatic effects of the ritual use of language, including a complex definition of the performer.

  18. Multilevel Analysis in Analyzing Speech Data

    Science.gov (United States)

    Guddattu, Vasudeva; Krishna, Y.

    2011-01-01

    The speech produced by human vocal tract is a complex acoustic signal, with diverse applications in phonetics, speech synthesis, automatic speech recognition, speaker identification, communication aids, speech pathology, speech perception, machine translation, hearing research, rehabilitation and assessment of communication disorders and many…

  19. Speech Problems

    Science.gov (United States)

    KidsHealth / For Teens / Speech Problems: ... a person's ability to speak clearly. Some Common Speech and Language Disorders: Stuttering is a problem that ...

  20. 27 February 2012 - US DoE Associate Director of Science for High Energy Physics J. Siegrist visiting the LHC superconducting magnet test hall with adviser J.-P. Koutchouk and engineer M. Bajko; in CMS experimental cavern with Spokesperson J. Incandela; in ATLAS experimental cavern with Deputy Spokesperson A. Lankford; in ALICE experimental cavern with Spokesperson P. Giubellino; signing the guest book with Director for Accelerators and Technology S. Myers.

    CERN Multimedia

    Laurent Egli

    2012-01-01

    27 February 2012 - US DoE Associate Director of Science for High Energy Physics J. Siegrist visiting the LHC superconducting magnet test hall with adviser J.-P. Koutchouk and engineer M. Bajko; in CMS experimental cavern with Spokesperson J. Incandela; in ATLAS experimental cavern with Deputy Spokesperson A. Lankford; in ALICE experimental cavern with Spokesperson P. Giubellino; signing the guest book with Director for Accelerators and Technology S. Myers.

  1. Current trends in multilingual speech processing

    Indian Academy of Sciences (India)

    2016-08-26

    ; speech-to-speech translation; language identification. ... interest owing to two strong driving forces. Firstly, technical advances in speech recognition and synthesis are posing new challenges and opportunities to researchers.

  2. Leaders as Corporate Responsibility Spokesperson: How Leaders Explain Liabilities Via Corporate Web Sites?

    Directory of Open Access Journals (Sweden)

    Burcu Öksüz

    2014-12-01

    Full Text Available The aim of this paper is to reveal corporations' corporate social responsibility (CSR) understandings from the leaders' perspective and to discuss how leaders, acting as spokespersons, define and explain the CSR practices their organizations carry out via their organizations' social media channels. In this context, a content analysis was conducted of messages from the leaders (CEO, chairman of the board, general manager) of Turkey's top 250 corporations, as designated by the Istanbul Chamber of Industry in 2013. Leader messages about different dimensions of CSR and about CSR practices appearing on corporate web sites were examined. According to the results of the analysis, the leaders act both as responsible leaders and as spokespersons of their corporations. In addition, responsible leaders included information on different dimensions and various practices of CSR in their social media messages.

  3. Comparing Feedback Types in Multimedia Learning of Speech by Young Children With Common Speech Sound Disorders: Research Protocol for a Pretest Posttest Independent Measures Control Trial.

    Science.gov (United States)

    Doubé, Wendy; Carding, Paul; Flanagan, Kieran; Kaufman, Jordy; Armitage, Hannah

    2018-01-01

    Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate "Correct"/"Incorrect" feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a "Wizard of Oz" experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human "Wizard" will provide feedback on the accuracy of the sound but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general, and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children.

  4. Comparing Feedback Types in Multimedia Learning of Speech by Young Children With Common Speech Sound Disorders: Research Protocol for a Pretest Posttest Independent Measures Control Trial

    Science.gov (United States)

    Doubé, Wendy; Carding, Paul; Flanagan, Kieran; Kaufman, Jordy; Armitage, Hannah

    2018-01-01

    Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate “Correct”/”Incorrect” feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a “Wizard of Oz” experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human “Wizard” will provide feedback on the accuracy of the sound but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general, and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children. PMID:29674986

  5. Comparing Feedback Types in Multimedia Learning of Speech by Young Children With Common Speech Sound Disorders: Research Protocol for a Pretest Posttest Independent Measures Control Trial

    Directory of Open Access Journals (Sweden)

    Wendy Doubé

    2018-04-01

    Full Text Available Children with speech sound disorders benefit from feedback about the accuracy of sounds they make. Home practice can reinforce feedback received from speech pathologists. Games in mobile device applications could encourage home practice, but those currently available are of limited value because they are unlikely to elaborate “Correct”/”Incorrect” feedback with information that can assist in improving the accuracy of the sound. This protocol proposes a “Wizard of Oz” experiment that aims to provide evidence for the provision of effective multimedia feedback for speech sound development. Children with two common speech sound disorders will play a game on a mobile device and make speech sounds when prompted by the game. A human “Wizard” will provide feedback on the accuracy of the sound but the children will perceive the feedback as coming from the game. Groups of 30 young children will be randomly allocated to one of five conditions: four types of feedback and a control which does not play the game. The results of this experiment will inform not only speech sound therapy, but also other types of language learning, both in general, and in multimedia applications. This experiment is a cost-effective precursor to the development of a mobile application that employs pedagogically and clinically sound processes for speech development in young children.

  6. Inner Speech's Relationship With Overt Speech in Poststroke Aphasia.

    Science.gov (United States)

    Stark, Brielle C; Geva, Sharon; Warburton, Elizabeth A

    2017-09-18

    Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech in aphasia with selected measures of language and cognition. Thirty-eight persons with chronic aphasia (27 men, 11 women; average age 64.53 ± 13.29 years, time since stroke 8-111 months) were classified as having relatively preserved inner and overt speech (n = 21), relatively preserved inner speech with poor overt speech (n = 8), or not classified due to insufficient measurements of inner and/or overt speech (n = 9). Inner speech scores (by group) were correlated with selected measures of language and cognition from the Comprehensive Aphasia Test (Swinburn, Porter, & Howard, 2004). The group with poor overt speech showed a significant relationship between inner speech and overt naming (r = .95), whereas relationships between inner speech and language and cognition factors were not significant for the group with relatively good overt speech. As in previous research, we show that relatively preserved inner speech is found alongside otherwise severe production deficits in PWA. PWA with poor overt speech may rely more on preserved inner speech for overt picture naming (perhaps due to shared resources with verbal working memory) and for written picture description (perhaps due to reliance on inner speech given perceived task difficulty). Assessments of inner speech may be useful as a standard component of aphasia screening, and therapy focused on improving and using inner speech may prove clinically worthwhile. https://doi.org/10.23641/asha.5303542.

  7. Speech Perception as a Multimodal Phenomenon

    OpenAIRE

    Rosenblum, Lawrence D.

    2008-01-01

    Speech perception is inherently multimodal. Visual speech (lip-reading) information is used by all perceivers and readily integrates with auditory speech. Imaging research suggests that the brain treats auditory and visual speech similarly. These findings have led some researchers to consider that speech perception works by extracting amodal information that takes the same form across modalities. From this perspective, speech integration is a property of the input information itself. Amodal s...

  8. Researching the acceptability of using Skype to provide Speech and Language Therapy

    Science.gov (United States)

    Matthews, Rebecca Alison; Woll, Bencie; Clarke, Mike

    2012-01-01

    In the current economic climate, whilst the demand for health services, including Speech and Language Therapy (SLT), continues to rise, there is pressure to reduce health service budgets. Tele-technology—the use of tele-communication technology to link patient and clinician remotely—could potentially provide a solution to meeting the demand for SLT with reduced resources. However, only a few SLT services in the United Kingdom (UK) have reported on using tele-technology to provide their service (Howell, Tripoliti and Pring, 2009; Styles, 2008; McCullough, 2001; Katsavarus, 2001). In 2002 the American Speech and Hearing Association (ASHA) surveyed its members on their experience and views of using tele-technology, and specifically video-conferencing, to provide an SLT service. The analysis of the responses identified five areas of concern: lack of professional guidelines, limited evidence of clinical efficacy, disruption and problems managing the technology, change in the interaction and loss of rapport, and anticipated additional costs of providing the service. The study reported here set up an SLT service using the desktop video-conferencing system Skype in an independent SLT practice based in the UK. Data were collected to evaluate the acceptability of the clinical sessions, the technology, the quality of interaction, and the costs of an SLT service using Skype. Eleven participants aged between 7 and 14 years with varying therapy needs took part. Each received a mix of face-to-face (F2F) and Skype SLT over the ten-session trial period. Data were collected for every session using a report card; adults supporting the children were asked for their views using a questionnaire at the beginning and end of the trial; the child participants were interviewed after the trial period was over; one F2F and one Skype session was video recorded for each participant; and work activity was recorded along with identifiable costs of F2F and Skype SLT sessions. A total of 110 session

  9. Letter-speech sound learning in children with dyslexia : From behavioral research to clinical practice

    NARCIS (Netherlands)

    Aravena, S.

    2017-01-01

    In alphabetic languages, learning to associate speech-sounds with unfamiliar characters is a critical step in becoming a proficient reader. This dissertation aimed at expanding our knowledge of this learning process and its relation to dyslexia, with an emphasis on bridging the gap between

  10. Researching Learner Self-Efficacy and Online Participation through Speech Functions: An Exploratory Study

    Science.gov (United States)

    Sánchez-Castro, Olga; Strambi, Antonella

    2017-01-01

    This study explores the potential contribution of Eggins and Slade's (2004) Speech Functions as tools for describing learners' participation patterns in Synchronous Computer-Mediated Communication (SCMC). Our analysis focuses on the relationship between learners' self-efficacy (i.e. personal judgments of second language performance capabilities)…

  11. Basic to Applied Research: The Benefits of Audio-Visual Speech Perception Research in Teaching Foreign Languages

    Science.gov (United States)

    Erdener, Dogu

    2016-01-01

    Traditionally, second language (L2) instruction has emphasised auditory-based instruction methods. However, this approach is restrictive in the sense that speech perception by humans is not just an auditory phenomenon but a multimodal one, and specifically, a visual one as well. In the past decade, experimental studies have shown that the…

  12. Practical speech user interface design

    CERN Document Server

    Lewis, James R

    2010-01-01

    Although speech is the most natural form of communication between humans, most people find using speech to communicate with machines anything but natural. Drawing from psychology, human-computer interaction, linguistics, and communication theory, Practical Speech User Interface Design provides a comprehensive yet concise survey of practical speech user interface (SUI) design. It offers practice-based and research-based guidance on how to design effective, efficient, and pleasant speech applications that people can really use. Focusing on the design of speech user interfaces for IVR application

  13. Perception of spokespersons' performance and characteristics in crisis communication: experience of the 2003 severe acute respiratory syndrome outbreak in Taiwan.

    Science.gov (United States)

    Lyu, Shu-Yu; Chen, Ruey-Yu; Wang, Shih-fan Steve; Weng, Ya-Ling; Peng, Eugene Yu-Chang; Lee, Ming-Been

    2013-10-01

    To explore perceptions of spokespersons' performance and characteristics in response to the 2003 severe acute respiratory syndrome (SARS) outbreak. This study was conducted from March to July 2005, using semi-structured in-depth interviews to collect data. All interviews were audio-recorded and transcribed verbatim, and a qualitative content analysis was employed to analyze the transcribed data. Interviewees included media reporters, media supervisors, health and medical institution executives or spokespersons, and social observers. Altogether, 35 interviewees were recruited for in-depth interviews, each lasting from 1 to 2 hours. Results revealed that the most important characteristics of health/medical institution spokespersons are professional competence and good interaction with the media. In contrast, the most important behaviors they should avoid are concealing the truth and misreporting the truth. Three major flaws in spokespersons' performance were identified: poor understanding of media needs and the media landscape; blaming the media to cover up a mistake made in an announcement; and insufficient participation in decision-making or lack of authorization from the head of the organization. Spokespersons of health and medical institutions play an important role in media relations during the crisis of a newly emerging infectious disease. Copyright © 2013. Published by Elsevier B.V.

  14. Private Speech in Ballet

    Science.gov (United States)

    Johnston, Dale

    2006-01-01

    Authoritarian teaching practices in ballet inhibit the use of private speech. This paper highlights the critical importance of private speech in the cognitive development of young ballet students, within what is largely a non-verbal art form. It draws upon research by Russian psychologist Lev Vygotsky and contemporary socioculturalists, to…

  15. Intelligibility of speech of children with speech and sound disorders

    OpenAIRE

    Ivetac, Tina

    2014-01-01

    The purpose of this study is to examine speech intelligibility of children with primary speech and sound disorders aged 3 to 6 years in everyday life. The research problem is based on the degree to which parents or guardians, immediate family members (sister, brother, grandparents), extended family members (aunt, uncle, cousin), child's friends, other acquaintances, child's teachers and strangers understand the speech of children with speech sound disorders. We examined whether the level ...

  16. Tackling the complexity in speech

    DEFF Research Database (Denmark)

    section includes four carefully selected chapters. They deal with facets of speech production, speech acoustics, and/or speech perception or recognition, place them in an integrated phonetic-phonological perspective, and relate them in more or less explicit ways to aspects of speech technology. Therefore......, we hope that this volume can help speech scientists with traditional training in phonetics and phonology to keep up with the latest developments in speech technology. In the opposite direction, speech researchers starting from a technological perspective will hopefully get inspired by reading about...... the questions, phenomena, and communicative functions that are currently addressed in phonetics and phonology. Either way, the future of speech research lies in international, interdisciplinary collaborations, and our volume is meant to reflect and facilitate such collaborations...

  17. Preregistration research training of speech and language therapists in the United Kingdom: a nationwide audit of quantity, content and delivery.

    Science.gov (United States)

    Pagnamenta, Emma; Joffe, Victoria L

    2018-04-24

    To carry out an audit of the quantity and content of research teaching on UK preregistration speech and language therapy (SLT) degree programmes. Lecturers delivering research teaching from each higher education institution providing preregistration training were invited to complete an online survey. Amount of research teaching, content of research teaching (including final-year projects), perceived confidence by staff of graduates in research awareness, research activity and leading research. Responses were received for 14 programmes (10 undergraduate and four postgraduate), representing 73% of all undergraduate courses and 44% of all postgraduate courses in the United Kingdom. Fifty percent of courses included over 30 h of research teaching, with wide variability across both undergraduate and postgraduate courses in number of hours, modules and credits devoted to research. There was no association between quantity of research teaching and perception of adequacy of quantity of teaching. Critical appraisal, statistical software and finding literature were the most common topics taught. Conversely, service evaluation and audit was the least common topic covered. All institutions provided a final-year project, with 11/14 requiring empirical research. Perceived confidence of graduates was higher for research awareness than active research and leading research, but this varied across institutions. There was a strong correlation between lecturers' perceived confidence of graduates in research awareness and number of hours of research teaching. Despite the requirements for healthcare professionals to engage in evidence-based practice, the amount and nature of research training in preregistration courses for SLTs in the United Kingdom is highly variable. Levels of perceived confidence of graduates were also variable, not only for active participation in research, and for leading research, but also for research awareness. This has implications for the ability of SLTs to

  18. Speech Processing.

    Science.gov (United States)

    1983-05-01

    The VDE system developed had the capability of recognizing up to 248 separate words in syntactic structures. The two systems described are isolated... From the report's contents: ...and Speaker Recognition, by M. J. Hunt; Assessment of Speech Systems, by R. K. Moore; A Survey of Current Equipment and Research, by J. S. Bridle; ...Technology in Navy Training Systems, by R. Breaux, M. Blind and R. Lynchard; General Review of Military Applications of Voice Processing, by Dr. Bruno...

  19. Status Report on Speech Research. A Report on the Status and Progress of Studies on the Nature of Speech, Instrumentation for Its Investigation, and Practical Applications.

    Science.gov (United States)

    1983-09-30

    Iberall, A. S. (1978). Cybernetics offers a (hydrodynamic) thermodynamic view of brain activities: An alternative to reflexology. In F. Brambilla, P. K... ...spontaneous speech that have the effect of equalizing the number of syllables per foot, and thus making the speaker's output more isochronous

  20. Speech Matters

    DEFF Research Database (Denmark)

    Hasse Jørgensen, Stina

    2011-01-01

    About Speech Matters - Katarina Gregos, the Greek curator's exhibition at the Danish Pavilion, the Venice Biennale 2011.

  1. Speech-to-Speech Relay Service

    Science.gov (United States)

    Consumer Guide: Speech-to-Speech Relay Service. Speech-to-Speech (STS) is one form of Telecommunications Relay Service (TRS). TRS is a service that allows persons with hearing and speech disabilities ...

  2. Related Services Research for Students With Low-Incidence Disabilities: Implications for Speech-Language Pathologists in Inclusive Classrooms.

    Science.gov (United States)

    Giangreco, Michael F

    2000-07-01

    When speech-language pathologists provide educationally related services for students with low-incidence disabilities who are placed in inclusive classrooms, they are asked to work with a variety of other adults. The ways in which these adults make decisions about individualizing a student's educational program, determine related services, and coordinate their activities have an impact on educational outcomes for students as well as on interprofessional interactions. This article summarizes a team process for making related services decisions called VISTA (Vermont Interdependent Services Team Approach) and a series of nine research studies pertaining to the use and impact of VISTA. It also addresses related topics, such as team size, consumer perspectives, and paraprofessional supports. Five major implications from these studies are offered concerning (a) developing a disposition of being an ongoing learner, (b) developing a shared framework among team members, (c) having a research-based process to build consensus, (d) clarifying roles, and (e) increasing involvement of families and general education teachers.

  3. Apraxia of Speech

    Science.gov (United States)

    Health Info » Voice, Speech, and Language » Apraxia of Speech. What is apraxia of speech? Apraxia of speech (AOS)—also known as acquired ...

  4. Status Report on Speech Research. A Report on the Status and Progress of Studies on the Nature of Speech, Instrumentation for Its Investigation, and Practical Applications.

    Science.gov (United States)

    1985-10-01

    Kay Arlyne Russo, Sara Basson, Noriko Kobayashi, Richard C. Schmidt, Eric Bateson, Rena A. Krakow, John Scholz, Suzanne Boyce, Deborah Kuglitsch... greater accuracy than isolated vowels (Gottfried & Strange, 1980; Rakerd et al., 1984; Strange, Edman, & Jenkins, 1976; Strange, Verbrugge, Shankweiler...). Perceptual structure of monophthongs and diphthongs in English. Language and Speech, 26, 21-59. Gottfried, T. L., & Strange, W. (1980

  5. Research Staff | Bioenergy | NREL

    Science.gov (United States)

    Research Staff: Adam Bratis, Ph.D., Associate Lab Director - Bio... research to accomplish the objectives of the Department of Energy's Bioenergy Technologies Office, and to serve as a spokesperson for the bioenergy research effort at NREL, both internally and externally. This

  6. Metaheuristic applications to speech enhancement

    CERN Document Server

    Kunche, Prajna

    2016-01-01

    This book serves as a basic reference for those interested in the application of metaheuristics to speech enhancement. Its major goal is to explain the basic concepts of optimization methods and their use in heuristic optimization for speech enhancement to scientists, practicing engineers, and academic researchers in speech processing. The authors discuss why developing new enhancement algorithms that improve the quality and intelligibility of degraded speech has been a challenging problem for researchers, and they present powerful optimization methods for speech enhancement that can help solve noise reduction problems. Readers will be able to understand the fundamentals of speech processing as well as optimization techniques and how speech enhancement algorithms are implemented using optimization methods, and will be given the tools to develop new algorithms. The authors also provide a comprehensive literature survey on the topic.
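
    As a toy illustration of the book's theme, the sketch below tunes the over-subtraction factor of a plain spectral-subtraction noise reducer with random search, a deliberately simple stand-in for the metaheuristics the book covers. The signals, objective, and search range are synthetic assumptions, not taken from the book.

        import numpy as np

        rng = np.random.default_rng(1)
        sr = 8000
        t = np.linspace(0, 1.0, sr, endpoint=False)
        clean = np.sin(2 * np.pi * 200 * t)
        clean[: sr // 4] = 0.0                            # leading silence so early frames are noise-only
        noisy = clean + 0.3 * rng.standard_normal(sr)     # additive white noise

        def spectral_subtract(x, alpha, frame=256):
            # Plain magnitude spectral subtraction with over-subtraction factor alpha.
            out = np.zeros_like(x)
            noise_mag = None
            for start in range(0, len(x) - frame, frame):
                spec = np.fft.rfft(x[start:start + frame])
                mag, phase = np.abs(spec), np.angle(spec)
                if noise_mag is None:                     # crude noise estimate: first (silent) frame
                    noise_mag = mag.copy()
                mag = np.maximum(mag - alpha * noise_mag, 0.01 * mag)
                out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * phase), frame)
            return out

        def snr_db(reference, estimate):
            noise = reference - estimate
            return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

        # Metaheuristic stand-in: random search over the over-subtraction factor.
        best_alpha, best_snr = None, -np.inf
        for alpha in rng.uniform(0.0, 3.0, size=50):
            score = snr_db(clean, spectral_subtract(noisy, alpha))
            if score > best_snr:
                best_alpha, best_snr = alpha, score

        print(f"best alpha ~ {best_alpha:.2f}, output SNR ~ {best_snr:.1f} dB")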

  7. APPRECIATING SPEECH THROUGH GAMING

    Directory of Open Access Journals (Sweden)

    Mario T Carreon

    2014-06-01

    Full Text Available This paper discusses the Speech and Phoneme Recognition as an Educational Aid for the Deaf and Hearing Impaired (SPREAD) application and the ongoing research on its deployment as a tool for motivating deaf and hearing-impaired students to learn and appreciate speech. The application uses the Sphinx-4 voice recognition system to analyze the student's vocalization and provide prompt feedback on their pronunciation. Packaging the application as an interactive game aims to provide additional, visual motivation for deaf and hearing-impaired students to learn and appreciate speech.

  8. Exploring Australian speech-language pathologists' use and perceptions of non-speech oral motor exercises.

    Science.gov (United States)

    Rumbach, Anna F; Rose, Tanya A; Cheah, Mynn

    2018-01-29

    To explore Australian speech-language pathologists' use of non-speech oral motor exercises, and rationales for using/not using non-speech oral motor exercises in clinical practice. A total of 124 speech-language pathologists practising in Australia, working with paediatric and/or adult clients with speech sound difficulties, completed an online survey. The majority of speech-language pathologists reported that they did not use non-speech oral motor exercises when working with paediatric or adult clients with speech sound difficulties. However, more than half of the speech-language pathologists working with adult clients who have dysarthria reported using non-speech oral motor exercises with this population. The most frequently reported rationale for using non-speech oral motor exercises in speech sound difficulty management was to improve awareness/placement of articulators. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound difficulties. This study provides an overview of Australian speech-language pathologists' reported use and perceptions of non-speech oral motor exercises' applicability and efficacy in treating paediatric and adult clients who have speech sound difficulties. The research findings provide speech-language pathologists with insight into how and why non-speech oral motor exercises are currently used, and adds to the knowledge base regarding Australian speech-language pathology practice of non-speech oral motor exercises in the treatment of speech sound difficulties. Implications for Rehabilitation Non-speech oral motor exercises refer to oral motor activities which do not involve speech, but involve the manipulation or stimulation of oral structures including the lips, tongue, jaw, and soft palate. Non-speech oral motor exercises are intended to improve the function (e.g., movement, strength) of oral structures. The

  9. Senior Senator from Florida and Chairman, Senate Committee on Space, Aeronautics and Related Sciences W. Nelson, visiting the ATLAS cavern and LHC tunnel with ATLAS Collaboration Spokesperson P. Jenni and AMS Collaboration Spokesperson S.C.C.Ting, 16 March 2008.

    CERN Multimedia

    Maximilien Brice

    2008-01-01

    Senior Senator from Florida and Chairman, Senate Committee on Space, Aeronautics and Related Sciences W. Nelson, visiting the ATLAS cavern and LHC tunnel with ATLAS Collaboration Spokesperson P. Jenni and AMS Collaboration Spokesperson S.C.C.Ting, 16 March 2008.

  10. 21 June 2010 - TUBITAK Vice President A. Adli signing the guest book with CERN Director-General R. Heuer, visiting the ATLAS control room at Point 1 with Former Collaboration Spokesperson P. Jenni and CMS Control Centre, building 354, with Collaboration Spokesperson G. Tonelli. Throughout accompanied by Adviser J. Ellis.

    CERN Multimedia

    Maximilien Brice

    2010-01-01

    21 June 2010 - TUBITAK Vice President A. Adli signing the guest book with CERN Director-General R. Heuer, visiting the ATLAS control room at Point 1 with Former Collaboration Spokesperson P. Jenni and CMS Control Centre, building 354, with Collaboration Spokesperson G. Tonelli. Throughout accompanied by Adviser J. Ellis.

  11. 12 December 2012 - US NSF Physics Division Acting Director D. Caldwell signing the guest book with Adviser for the US R. Voss and Head of International Relations F. Pauss; CMS Collaboration Spokesperson J. Incandela and ATLAS Deputy Spokesperson A. Lankford present.

    CERN Multimedia

    Samuel Morier-Genoud

    2012-01-01

    12 December 2012 - US NSF Physics Division Acting Director D. Caldwell signing the guest book with Adviser for the US R. Voss and Head of International Relations F. Pauss; CMS Collaboration Spokesperson J. Incandela and ATLAS Deputy Spokesperson A. Lankford present.

  12. 23rd May 2011 - University of Liverpool Pro-Vice-Chancellor and Public Orator K. Everest (UK) Mrs Everest in the ATLAS visitor centre with Collaboration Deputy Spokesperson D. Charlton, in LHCb surface building with Collaboration Spokesperson A. Golutvin, accompanied throughout by P. Wells and Liverpool University T. Bowcock and M. Klein.

    CERN Multimedia

    Maximilen Brice

    2011-01-01

    23rd May 2011 - University of Liverpool Pro-Vice-Chancellor and Public Orator K. Everest (UK) Mrs Everest in the ATLAS visitor centre with Collaboration Deputy Spokesperson D. Charlton, in LHCb surface building with Collaboration Spokesperson A. Golutvin, accompanied throughout by P. Wells and Liverpool University T. Bowcock and M. Klein.

  13. Speech spectrum envelope modeling

    Czech Academy of Sciences Publication Activity Database

    Vích, Robert; Vondra, Martin

    Vol. 4775, - (2007), s. 129-137 ISSN 0302-9743. [COST Action 2102 International Workshop. Vietri sul Mare, 29.03.2007-31.03.2007] R&D Projects: GA AV ČR(CZ) 1ET301710509 Institutional research plan: CEZ:AV0Z20670512 Keywords : speech * speech processing * cepstral analysis Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 0.302, year: 2005

  14. Performance Assessment of Dynaspeak Speech Recognition System on Inflight Databases

    National Research Council Canada - National Science Library

    Barry, Timothy

    2004-01-01

    .... To aid in the assessment of various commercially available speech recognition systems, several aircraft speech databases have been developed at the Air Force Research Laboratory's Human Effectiveness Directorate...

  15. Commercial speech and off-label drug uses: what role for wide acceptance, general recognition and research incentives?

    Science.gov (United States)

    Gilhooley, Margaret

    2011-01-01

    approval. Distributions of information about unapproved uses should not be acceptable unless experts consider the expanded use to be generally recognized as safe and effective based on adequate studies. The last part of this paper considers the need to develop better research incentives to encourage more testing and post-market risk surveillance by drug makers on off-label uses of their drugs. Violations of the Federal Food Drug and Cosmetic Act (FFDCA) can be considered violations of the False Claims Act, which opens the way to fraud and abuse suits. The scale of penalties involved in these suits may lead to more examination of the scope of FDA regulation and commercial speech protections. Thus this symposium's consideration of these issues is timely and important.

  16. Authenticity in Obesity Public Service Announcements: Influence of Spokesperson Type, Viewer Weight, and Source Credibility on Diet, Exercise, Information Seeking, and Electronic Word-of-Mouth Intentions.

    Science.gov (United States)

    Phua, Joe; Tinkham, Spencer

    2016-01-01

    This study examined the joint influence of spokesperson type in obesity public service announcements (PSAs) and viewer weight on diet intention, exercise intention, information seeking, and electronic word-of-mouth (eWoM) intention. Results of a 2 (spokesperson type: real person vs. actor) × 2 (viewer weight: overweight vs. non-overweight) between-subjects experiment indicated that overweight viewers who saw the PSA featuring the real person had the highest diet intention, exercise intention, information seeking, and eWoM intention. Parasocial interaction was also found to mediate the relationships between spokesperson type/viewer weight and two of the dependent variables: diet intention and exercise intention. In addition, viewers who saw the PSA featuring the real person rated the spokesperson as significantly higher on source credibility (trustworthiness, competence, and goodwill) than those who saw the PSA featuring the actor.

  17. 8 July 2011 - Kingdom of Lesotho Minister of Education and Training M. Khaketla in the ATLAS visitor centre with Collaboration Former Spokesperson P. Jenni.

    CERN Multimedia

    Jean-Claude Gadmer

    2011-01-01

    The delegation, which included Motsoakapa Makara, principal secretary for the ministry of education and training, Mefane Lintle, Lesotho delegate, and Moshe Anthony Maruping, Lesotho ambassador, visited the ATLAS visitor centre with Peter Jenni, former ATLAS spokesperson.

  18. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage that the speech signal was corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. Digital transmission, on the other hand, is relatively immune to noise, cross-talk and distortion, primarily because the digital signal can be faithfully regenerated at each repeater purely on the basis of a binary decision. Hence the end-to-end performance of a digital link becomes essentially independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech also became extremely important from a service provision point of view. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
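
    A minimal concrete example of the waveform-coding side of the definition above: 8-bit mu-law companding, the scheme long used in digital telephony, applied to a synthetic snippet. Parametric coders such as CELP are far more involved and are not shown here; the test signal is a placeholder.

        import numpy as np

        MU = 255.0

        def mulaw_encode(x):
            # Compress samples in [-1, 1] to 8-bit codes using the mu-law characteristic.
            compressed = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)
            return np.round((compressed + 1) / 2 * 255).astype(np.uint8)

        def mulaw_decode(codes):
            # Expand 8-bit codes back to floating-point samples.
            compressed = codes.astype(np.float64) / 255 * 2 - 1
            return np.sign(compressed) * ((1 + MU) ** np.abs(compressed) - 1) / MU

        sr = 8000
        t = np.linspace(0, 0.02, int(sr * 0.02), endpoint=False)
        speech_like = 0.5 * np.sin(2 * np.pi * 300 * t)   # stand-in for a speech snippet

        codes = mulaw_encode(speech_like)                 # the 8-bit codes that would be transmitted
        reconstructed = mulaw_decode(codes)
        print("max reconstruction error:", np.max(np.abs(speech_like - reconstructed)))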

  19. 23 May 2012 - Egypt Minister of Scientific Research N. Eskandar Zakhary signing the guest book with CERN Director-General R. Heuer; visiting the CMS control centre with Collaboration Deputy Spokesperson T. Camporesi and International Relations Office Adviser P. Fassnacht. Ambassador to the UN H. Badr present with young scientists M. Attia, S. Seif El Nasr and R. Wasef.

    CERN Multimedia

    Maximilien Brice

    2012-01-01

    Photo CERN-HI-1205103 15: from left to right: Ambassador to the UN H. Badr; M. Attia; R. Wasef; Minister of Scientific Research N. Eskandar Zakhary; S. Seif El Nasr and President of the Scientific Research Academy M. El Sherbiny.

  20. Application of wavelets in speech processing

    CERN Document Server

    Farouk, Mohamed Hesham

    2014-01-01

    This book provides a survey of the widespread use of wavelet analysis in different applications of speech processing. The author examines developments and research in different applications of speech processing, and the book also summarizes state-of-the-art research on wavelets in speech processing.
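
    A hedged sketch of one typical use of wavelets in speech processing that such a survey covers: soft-threshold wavelet denoising of a noisy speech-like signal using PyWavelets. The wavelet family, decomposition level, and threshold rule are illustrative choices, not taken from the book.

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        sr = 8000
        t = np.linspace(0, 1.0, sr, endpoint=False)
        clean = np.sin(2 * np.pi * 200 * t) * np.hanning(sr)   # stand-in for a voiced segment
        noisy = clean + 0.2 * rng.standard_normal(sr)

        # Multi-level discrete wavelet decomposition.
        coeffs = pywt.wavedec(noisy, "db4", level=5)

        # Universal threshold estimated from the finest detail coefficients.
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        threshold = sigma * np.sqrt(2 * np.log(len(noisy)))

        # Soft-threshold the detail coefficients; keep the approximation untouched.
        denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
        denoised = pywt.waverec(denoised_coeffs, "db4")[: len(noisy)]

        def snr_db(reference, estimate):
            return 10 * np.log10(np.sum(reference ** 2) / np.sum((reference - estimate) ** 2))

        print(f"SNR before: {snr_db(clean, noisy):.1f} dB, after: {snr_db(clean, denoised):.1f} dB")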

  1. Speech endpoint detection with non-language speech sounds for generic speech processing applications

    Science.gov (United States)

    McClain, Matthew; Romanowski, Brian

    2009-05-01

    Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS, such as filled pauses, will require future research.
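
    For contrast with the language-agnostic NLSS detector described above, the sketch below shows a much simpler energy-based endpoint detector of the kind the abstract says traditionally ignores NLSS; it only marks where frame energy rises above an estimated noise floor and is not the authors' feature-plus-HMM system. Frame sizes and the threshold are assumed values.

        import numpy as np

        def detect_endpoints(x, sr, frame_ms=25, hop_ms=10, threshold_db=15.0):
            # Return (start_s, end_s) spans whose frame energy exceeds the noise floor.
            frame, hop = int(sr * frame_ms / 1000), int(sr * hop_ms / 1000)
            energies = np.array([
                10 * np.log10(np.mean(x[i:i + frame] ** 2) + 1e-12)
                for i in range(0, len(x) - frame, hop)
            ])
            noise_floor = np.percentile(energies, 10)     # assume the quietest frames are noise
            active = energies > noise_floor + threshold_db
            spans, start = [], None
            for idx, is_active in enumerate(active):
                if is_active and start is None:
                    start = idx
                elif not is_active and start is not None:
                    spans.append((start * hop / sr, idx * hop / sr))
                    start = None
            if start is not None:
                spans.append((start * hop / sr, len(active) * hop / sr))
            return spans

        # Toy signal: silence, a burst of "speech", silence.
        sr = 16000
        signal = np.concatenate([np.zeros(sr), 0.5 * np.random.randn(sr), np.zeros(sr)])
        print(detect_endpoints(signal, sr))   # roughly [(1.0, 2.0)]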

  2. Video Release: 47th Vice President of the United States Joseph R. Biden Jr. Speech at HUPO2017 Global Leadership Gala | Office of Cancer Clinical Proteomics Research

    Science.gov (United States)

    The Human Proteome Organization (HUPO) has released a video of the keynote speech given by the 47th Vice President of the United States of America Joseph R. Biden Jr. at the HUPO2017 Global Leadership Gala. Under the gala theme “International Cooperation in the Fight Against Cancer,” Biden recognized cancer as a collection of related diseases, the importance of data sharing and harmonization, and the need for collaboration across scientific disciplines as inflection points in cancer research.

  3. Speech Alarms Pilot Study

    Science.gov (United States)

    Sandor, A.; Moses, H. R.

    2016-01-01

    Currently on the International Space Station (ISS) and other space vehicles, Caution & Warning (C&W) alerts are represented with various auditory tones that correspond to the type of event. This system relies on the crew's ability to remember what each tone represents in a high-stress, high-workload environment when responding to the alert. Furthermore, crew receive training a year or more in advance of the mission, which makes remembering the semantic meaning of the alerts more difficult. The current system works for missions conducted close to Earth, where ground operators can assist as needed. On long-duration missions, however, crews will need to work off-nominal events autonomously. There is evidence that speech alarms may be easier and faster to recognize, especially during an off-nominal event. The Information Presentation Directed Research Project (FY07-FY09), funded by the Human Research Program, included several studies investigating C&W alerts. The studies evaluated tone alerts currently in use with NASA flight deck displays along with candidate speech alerts. A follow-on study used four types of speech alerts to investigate how quickly various types of auditory alerts, with and without a speech component - either at the beginning or at the end of the tone - can be identified. Even though crew were familiar with the tone alerts from training or direct mission experience, alerts starting with a speech component were identified faster than alerts starting with a tone. The current study replicated the results from the previous study in a more rigorous experimental design to determine if the candidate speech alarms are ready for transition to operations or if more research is needed. Four types of alarms (caution, warning, fire, and depressurization) were presented to participants in both tone and speech formats in laboratory settings and later in the Human Exploration Research Analog (HERA). In the laboratory study, the alerts were presented by software and participants were

  4. Speech Inconsistency in Children with Childhood Apraxia of Speech, Language Impairment, and Speech Delay: Depends on the Stimuli

    Science.gov (United States)

    Iuzzini-Seigel, Jenya; Hogan, Tiffany P.; Green, Jordan R.

    2017-01-01

    Purpose: The current research sought to determine (a) if speech inconsistency is a core feature of childhood apraxia of speech (CAS) or if it is driven by comorbid language impairment that affects a large subset of children with CAS and (b) if speech inconsistency is a sensitive and specific diagnostic marker that can differentiate between CAS and…

  5. Visual and Auditory Input in Second-Language Speech Processing

    Science.gov (United States)

    Hardison, Debra M.

    2010-01-01

    The majority of studies in second-language (L2) speech processing have involved unimodal (i.e., auditory) input; however, in many instances, speech communication involves both visual and auditory sources of information. Some researchers have argued that multimodal speech is the primary mode of speech perception (e.g., Rosenblum 2005). Research on…

  6. Neural Entrainment to Speech Modulates Speech Intelligibility

    NARCIS (Netherlands)

    Riecke, Lars; Formisano, Elia; Sorger, Bettina; Baskent, Deniz; Gaudrain, Etienne

    2018-01-01

    Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and

  7. Contributions of speech science to the technology of man-machine voice interactions

    Science.gov (United States)

    Lea, Wayne A.

    1977-01-01

    Research in speech understanding was reviewed. Plans which include prosodics research, phonological rules for speech understanding systems, and continued interdisciplinary phonetics research are discussed. Improved acoustic phonetic analysis capabilities in speech recognizers are suggested.

  8. Speech to be delivered by Mr. François de Rose, President of the Council of the European Organization for Nuclear Research, on the occasion of the inauguration of the CERN Proton Synchrotron on 5 February 1960

    CERN Multimedia

    CERN Press Office. Geneva

    1960-01-01

    Speech to be delivered by Mr. François de Rose, President of the Council of the European Organization for Nuclear Research, on the occasion of the inauguration of the CERN Proton Synchrotron on 5 February 1960

  9. The chairman's speech

    International Nuclear Information System (INIS)

    Allen, A.M.

    1986-01-01

    The paper contains a transcript of a speech by the chairman of the UKAEA, to mark the publication of the 1985/6 annual report. The topics discussed in the speech include: the Chernobyl accident and its effect on public attitudes to nuclear power, management and disposal of radioactive waste, the operation of UKAEA as a trading fund, and the UKAEA development programmes. The development programmes include work on the following: fast reactor technology, thermal reactors, reactor safety, health and safety aspects of water cooled reactors, the Joint European Torus, and underlying research. (U.K.)

  10. Hate speech

    Directory of Open Access Journals (Sweden)

    Anne Birgitta Nilsen

    2014-12-01

    Full Text Available The manifesto of the Norwegian terrorist Anders Behring Breivik is based on the “Eurabia” conspiracy theory. This theory is a key starting point for hate speech amongst many right-wing extremists in Europe, but also has ramifications beyond these environments. In brief, proponents of the Eurabia theory claim that Muslims are occupying Europe and destroying Western culture, with the assistance of the EU and European governments. By contrast, members of Al-Qaeda and other extreme Islamists promote the conspiracy theory “the Crusade” in their hate speech directed against the West. Proponents of the latter theory argue that the West is leading a crusade to eradicate Islam and Muslims, a crusade that is similarly facilitated by their governments. This article presents analyses of texts written by right-wing extremists and Muslim extremists in an effort to shed light on how hate speech promulgates conspiracy theories in order to spread hatred and intolerance. The aim of the article is to contribute to a more thorough understanding of hate speech’s nature by applying rhetorical analysis. Rhetorical analysis is chosen because it offers a means of understanding the persuasive power of speech. It is thus a suitable tool to describe how hate speech works to convince and persuade. The concepts from rhetorical theory used in this article are ethos, logos and pathos. The concept of ethos is used to pinpoint factors that contributed to Osama bin Laden's impact, namely factors that lent credibility to his promotion of the conspiracy theory of the Crusade. In particular, Bin Laden projected common sense, good morals and good will towards his audience. He seemed to have coherent and relevant arguments; he appeared to possess moral credibility; and his use of language demonstrated that he wanted the best for his audience. The concept of pathos is used to define hate speech, since hate speech targets its audience's emotions. In hate speech it is the

  11. Speech enhancement

    CERN Document Server

    Benesty, Jacob; Chen, Jingdong

    2006-01-01

    We live in a noisy world! In all applications (telecommunications, hands-free communications, recording, human-machine interfaces, etc.) that require at least one microphone, the signal of interest is usually contaminated by noise and reverberation. As a result, the microphone signal has to be "cleaned" with digital signal processing tools before it is played out, transmitted, or stored. This book is about speech enhancement. Different well-known and state-of-the-art methods for noise reduction, with one or multiple microphones, are discussed. By speech enhancement, we mean not only noise red
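
    To make the kind of method such a book surveys concrete, here is a minimal sketch of one classical single-microphone technique, magnitude spectral subtraction. It is an illustrative example, not an algorithm taken from the book; the frame sizes, the assumption that the first few frames are noise-only, and the spectral floor value are arbitrary choices.

```python
# Sketch: magnitude spectral subtraction for single-microphone noise reduction.
import numpy as np

def spectral_subtraction(noisy, frame_len=512, hop=256, noise_frames=10, floor=0.02):
    """Suppress stationary noise by subtracting an average noise magnitude spectrum."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(noisy) - frame_len) // hop
    frames = np.stack([noisy[i*hop:i*hop+frame_len] * window for i in range(n_frames)])
    spectra = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spectra), np.angle(spectra)
    noise_mag = mag[:noise_frames].mean(axis=0)            # assume leading frames are noise-only
    clean_mag = np.maximum(mag - noise_mag, floor * mag)   # subtract, with a spectral floor
    clean = np.fft.irfft(clean_mag * np.exp(1j * phase), n=frame_len, axis=1)
    out = np.zeros(len(noisy))
    for i in range(n_frames):                              # overlap-add resynthesis (no window normalization)
        out[i*hop:i*hop+frame_len] += clean[i]
    return out
```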

  12. Perception of synthetic speech produced automatically by rule: Intelligibility of eight text-to-speech systems.

    Science.gov (United States)

    Greene, Beth G; Logan, John S; Pisoni, David B

    1986-03-01

    We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered.

  13. Perception of synthetic speech produced automatically by rule: Intelligibility of eight text-to-speech systems

    Science.gov (United States)

    GREENE, BETH G.; LOGAN, JOHN S.; PISONI, DAVID B.

    2012-01-01

    We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered. PMID:23225916

  14. Speech Research: A Report on the Status and Progress of Studies on the Nature of Speech, Instrumentation for Its Investigation, and Practical Applications.

    Science.gov (United States)

    1981-03-01

  15. 78 FR 49693 - Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services...

    Science.gov (United States)

    2013-08-15

    Report and Order for Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities.

  16. Commencement Speech as a Hybrid Polydiscursive Practice

    Directory of Open Access Journals (Sweden)

    Светлана Викторовна Иванова

    2017-12-01

    Full Text Available Discourse and media communication researchers have noted that popular discursive and communicative practices tend toward hybridization and convergence. Discourse, understood as language in use, is flexible; consequently, one and the same text can represent several types of discourse. A vivid example of this tendency is the American commencement speech (also called a commencement address or graduation speech), a speech addressed to university graduates and, in keeping with the modern trend, delivered by outstanding media personalities (politicians, athletes, actors, etc.). The objective of this study is to define how polydiscursive practices are realized within commencement speech. The research involves discursive, contextual, stylistic and definitive analyses. Methodologically, the study is based on discourse analysis theory; in particular, the notion of a discursive practice as a verbalized social practice forms the conceptual basis of the research. The study draws upon a hundred commencement speeches delivered by prominent representatives of American society from the 1980s to the present. In brief, commencement speech belongs to the institutional discourse that public speech embodies. Its institutional parameters are well represented in speeches delivered by people in power, such as American and university presidents. Nevertheless, as the results of the research indicate, the institutional character of commencement speech is not its only feature. Conceptual information analysis makes it possible to relate commencement speech to didactic discourse, since it is aimed at teaching university graduates how to deal with the challenges life is rich in. Discursive practices of personal discourse are also actively integrated into commencement speech discourse, and existential discursive practices likewise find their way into the discourse under study. Commencement

  17. Stuttering Frequency, Speech Rate, Speech Naturalness, and Speech Effort During the Production of Voluntary Stuttering.

    Science.gov (United States)

    Davidow, Jason H; Grossman, Heather L; Edge, Robin L

    2018-05-01

    Voluntary stuttering techniques involve persons who stutter purposefully interjecting disfluencies into their speech. Little research has been conducted on the impact of these techniques on the speech pattern of persons who stutter. The present study examined whether changes in the frequency of voluntary stuttering accompanied changes in stuttering frequency, articulation rate, speech naturalness, and speech effort. In total, 12 persons who stutter aged 16-34 years participated. Participants read four 300-syllable passages during a control condition, and three voluntary stuttering conditions that involved attempting to produce purposeful, tension-free repetitions of initial sounds or syllables of a word for two or more repetitions (i.e., bouncing). The three voluntary stuttering conditions included bouncing on 5%, 10%, and 15% of syllables read. Friedman tests and follow-up Wilcoxon signed ranks tests were conducted for the statistical analyses. Stuttering frequency, articulation rate, and speech naturalness were significantly different between the voluntary stuttering conditions. Speech effort did not differ between the voluntary stuttering conditions. Stuttering frequency was significantly lower during the three voluntary stuttering conditions compared to the control condition, and speech effort was significantly lower during two of the three voluntary stuttering conditions compared to the control condition. Due to changes in articulation rate across the voluntary stuttering conditions, it is difficult to conclude, as has been suggested previously, that voluntary stuttering is the reason for stuttering reductions found when using voluntary stuttering techniques. Additionally, future investigations should examine different types of voluntary stuttering over an extended period of time to determine their impact on stuttering frequency, speech rate, speech naturalness, and speech effort.
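
    For readers unfamiliar with the statistics named above, the following sketch shows how a Friedman test and a follow-up Wilcoxon signed-rank test can be run with SciPy on repeated-measures data shaped like this study (12 participants, four reading conditions). The numbers are random placeholders, not the study's data.

```python
# Sketch: nonparametric repeated-measures tests of the kind named in the abstract.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
# Hypothetical stuttering frequencies (% syllables stuttered) per condition, 12 participants.
control   = rng.uniform(5, 15, 12)
bounce_5  = rng.uniform(2, 10, 12)
bounce_10 = rng.uniform(2, 9, 12)
bounce_15 = rng.uniform(1, 8, 12)

stat, p = friedmanchisquare(control, bounce_5, bounce_10, bounce_15)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")

# Follow-up pairwise comparison between the control and one voluntary-stuttering condition.
w, p_pair = wilcoxon(control, bounce_5)
print(f"Wilcoxon W = {w:.1f}, p = {p_pair:.3f}")
```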

  18. Effectiveness of radio spokesperson's gender, vocal pitch and accent and the use of music in radio advertising

    Directory of Open Access Journals (Sweden)

    Josefa D. Martín-Santana

    2015-07-01

    Full Text Available The aim of this study is to analyze how certain voice features of radio spokespersons and background music influence the advertising effectiveness of a radio spot from the cognitive, affective and conative perspectives. We used a 2 × 2 × 2 × 2 experimental design in 16 different radio programs in which an ad hoc radio spot was inserted during the advertising block. This ad changed according to combinations of spokesperson's gender (male–female), vocal pitch (low–high) and accent (local–standard). In addition to these independent factors, the effect of background music in advertisements was also tested and compared with those that only had words. 987 regular radio listeners comprised the sample that was exposed to the radio program we created. Based on the differences in the levels of effectiveness in the tested voice features, our results suggest that the choice of the voice in radio advertising is one of the most important decisions an advertiser faces. Furthermore, the findings show that the inclusion of music does not always imply greater effectiveness.
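
    As a rough illustration of how a 2 × 2 × 2 × 2 between-subjects design like this can be analyzed, the sketch below fits a factorial ANOVA with statsmodels. The column names, cell sizes, and the "recall" effectiveness measure are hypothetical placeholders, not variables taken from the study.

```python
# Sketch: factorial ANOVA for a 2x2x2x2 design with placeholder data.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 16 * 60  # placeholder: ~60 listeners per cell
df = pd.DataFrame({
    "gender": rng.choice(["male", "female"], n),
    "pitch":  rng.choice(["low", "high"], n),
    "accent": rng.choice(["local", "standard"], n),
    "music":  rng.choice(["music", "no_music"], n),
    "recall": rng.normal(5.0, 1.5, n),  # hypothetical cognitive-effectiveness score
})
model = ols("recall ~ C(gender) * C(pitch) * C(accent) * C(music)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects and interactions
```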

  19. Speech disorders - children

    Science.gov (United States)

    ... disorder; Voice disorders; Vocal disorders; Disfluency; Communication disorder - speech disorder; Speech disorder - stuttering ... evaluation tools that can help identify and diagnose speech disorders: Denver Articulation Screening Examination Goldman-Fristoe Test of ...

  20. 31st January 2011 - OECD Secretary-General A. Gurría visiting the ATLAS underground experimental area with Former Collaboration Spokesperson P. Jenni.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    CERN-HI-1101036 21. Former ATLAS Collaboration Spokesperson P. Jenni, Counsellor for Scientific Affairs S. Michalowski, Secretary General Chief of Staff G. Ramos, OECD Secretary-General A. Gurría, Relations with International Organisations M. Bona, Head of International Relations F. Pauss and Director M. Oborne, in the ATLAS cavern.

  1. 22 March 2012 - Canada Foundation for Innovation Senior Programs Officer H.-C. Bandulet with spouse in the ATLAS visitor centre guided by Former Spokesperson P. Jenni.

    CERN Multimedia

    Maximilien Brice

    2012-01-01

    CERN-HI-1203073 16: Senior Canadian Scientist, ATLAS Collaboration, University of Toronto/IPP, R. Teuscher; L. Andrzejewski (spouse); H.-C. Bandulet; R. Voss (behind); ATLAS Collaboration, University of Toronto, N. Ilic; ATLAS Collaboration, University of Toronto, R. Rezvani; ATLAS Collaboration Former Spokesperson P. Jenni.

  2. Speech Recognition

    Directory of Open Access Journals (Sweden)

    Adrian Morariu

    2009-01-01

    Full Text Available This paper presents a method of speech recognition based on pattern recognition techniques. Learning consists of determining the unique characteristics of a word (cepstral coefficients) by eliminating those characteristics that differ from one word to another. For learning and recognition, the system builds a dictionary of words by determining the characteristics of each word to be used in the recognition. Determining the characteristics of an audio signal consists of the following steps: noise removal, sampling, applying a Hamming window, switching to the frequency domain through the Fourier transform, calculating the magnitude spectrum, filtering the data, and determining the cepstral coefficients.
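
    The steps listed in the abstract correspond closely to a standard MFCC-style front end. The sketch below is a generic illustration of that pipeline rather than the author's implementation: it walks through framing, Hamming windowing, FFT, magnitude spectrum, mel filtering and the final cepstral (DCT) step; noise removal and sampling are assumed to happen upstream, and all parameter values are typical defaults, not the paper's settings.

```python
# Sketch: cepstral (MFCC-style) feature extraction following the steps in the abstract.
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced evenly on the mel scale."""
    mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fb

def cepstral_features(signal, sr, frame_len=400, hop=160, n_filters=26, n_ceps=13):
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hamming(frame_len)
    fb = mel_filterbank(n_filters, frame_len, sr)
    k = np.arange(n_filters)
    # DCT-II basis: applying it to the log filterbank energies yields the cepstral coefficients.
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * k + 1)) / (2 * n_filters))
    feats = []
    for i in range(n_frames):
        frame = signal[i * hop:i * hop + frame_len] * window   # Hamming window
        mag = np.abs(np.fft.rfft(frame))                       # magnitude spectrum
        energies = np.log(fb @ (mag ** 2) + 1e-10)             # mel-filtered log energies
        feats.append(dct @ energies)
    return np.array(feats)

if __name__ == "__main__":
    sr = 16000
    test_signal = np.random.default_rng(0).normal(size=sr)     # 1 s of placeholder audio
    print(cepstral_features(test_signal, sr).shape)            # (frames, 13)
```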

  3. Development of a System for Automatic Recognition of Speech

    Directory of Open Access Journals (Sweden)

    Roman Jarina

    2003-01-01

    Full Text Available The article gives a review of research on the processing and automatic recognition of speech signals (ARR) at the Department of Telecommunications of the Faculty of Electrical Engineering, University of Žilina. On-going research is oriented to speech parametrization using 2-dimensional cepstral analysis, and to the application of HMMs and neural networks for speech recognition in the Slovak language. The article summarizes the achieved results and outlines the future orientation of our research in automatic speech recognition.

  4. Mobile speech and advanced natural language solutions

    CERN Document Server

    Markowitz, Judith

    2013-01-01

    Mobile Speech and Advanced Natural Language Solutions provides a comprehensive and forward-looking treatment of natural speech in the mobile environment. This fourteen-chapter anthology brings together lead scientists from Apple, Google, IBM, AT&T, Yahoo! Research and other companies, along with academicians, technology developers and market analysts.  They analyze the growing markets for mobile speech, new methodological approaches to the study of natural language, empirical research findings on natural language and mobility, and future trends in mobile speech.  Mobile Speech opens with a challenge to the industry to broaden the discussion about speech in mobile environments beyond the smartphone, to consider natural language applications across different domains.   Among the new natural language methods introduced in this book are Sequence Package Analysis, which locates and extracts valuable opinion-related data buried in online postings; microintonation as a way to make TTS truly human-like; and se...

  5. Automatic speech recognition (ASR) based approach for speech therapy of aphasic patients: A review

    Science.gov (United States)

    Jamal, Norezmi; Shanta, Shahnoor; Mahmud, Farhanahani; Sha'abani, MNAH

    2017-09-01

    This paper reviews state-of-the-art automatic speech recognition (ASR) based approaches for the speech therapy of aphasic patients. Aphasia is a condition in which the affected person suffers from a speech and language disorder resulting from a stroke or brain injury. Since there is a growing body of evidence indicating the possibility of improving the symptoms at an early stage, ASR-based solutions are increasingly being researched for speech and language therapy. ASR is a technology that transfers human speech into transcript text by matching it against the system's library. This is particularly useful in speech rehabilitation therapies as it provides accurate, real-time evaluation of speech input from an individual with a speech disorder. ASR-based approaches for speech therapy recognize the speech input from the aphasic patient and provide real-time feedback on their mistakes. However, the accuracy of ASR is dependent on many factors such as phoneme recognition, speech continuity, speaker and environmental differences, as well as our depth of knowledge of human language understanding. Hence, the review examines recent developments in ASR technologies and their performance for individuals with speech and language disorders.

  6. Speech is Golden

    DEFF Research Database (Denmark)

    Juel Henrichsen, Peter

    2014-01-01

    Most of the Danish municipalities are ready to begin to adopt automatic speech recognition, but at the same time remain nervous following a long series of bad business cases in the recent past. Complaints are voiced over costly licences and low service levels, typical effects of a de facto monopoly on the supply side. The present article reports on a new public action strategy which has taken shape in the course of 2013-14. While Denmark is a small language area, our public sector is well organised and has considerable purchasing power. Across this past year, Danish local authorities have organised around the speech technology challenge, they have formulated a number of joint questions and new requirements to be met by suppliers and have deliberately worked towards formulating tendering material which will allow fair competition. Public researchers have contributed to this work, including the author...

  7. Your Starting Guide To Childhood Apraxia of Speech

    Science.gov (United States)

    ... including evaluation, speech therapy, research and other childhood communication topics. Invaluable for parents, speech language pathologists, teachers ...

  8. An analysis of machine translation and speech synthesis in speech-to-speech translation system

    OpenAIRE

    Hashimoto, K.; Yamagishi, J.; Byrne, W.; King, S.; Tokuda, K.

    2011-01-01

    This paper provides an analysis of the impacts of machine translation and speech synthesis on speech-to-speech translation systems. The speech-to-speech translation system consists of three components: speech recognition, machine translation and speech synthesis. Many techniques for integration of speech recognition and machine translation have been proposed. However, speech synthesis has not yet been considered. Therefore, in this paper, we focus on machine translation and speech synthesis, ...
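
    The three-stage architecture described here can be summarised as a simple composition of components. The sketch below only fixes the order of the stages; the recognize/translate/synthesize functions are hypothetical placeholders standing in for real ASR, MT and TTS engines, not APIs from the paper.

```python
# Sketch: composing the three stages of a speech-to-speech translation pipeline.
from dataclasses import dataclass

@dataclass
class Utterance:
    audio: bytes
    language: str

def recognize(utt: Utterance) -> str:
    """ASR placeholder: audio in the source language -> source-language text."""
    return "hello world"  # stand-in transcript

def translate(text: str, src: str, tgt: str) -> str:
    """MT placeholder: source-language text -> target-language text."""
    return f"[{src}->{tgt}] {text}"  # stand-in translation

def synthesize(text: str, language: str) -> bytes:
    """TTS placeholder: target-language text -> audio bytes."""
    return text.encode("utf-8")  # stand-in waveform

def speech_to_speech(utt: Utterance, target_language: str) -> bytes:
    source_text = recognize(utt)                                          # 1. speech recognition
    target_text = translate(source_text, utt.language, target_language)   # 2. machine translation
    return synthesize(target_text, target_language)                       # 3. speech synthesis

print(speech_to_speech(Utterance(audio=b"", language="en"), "ja"))
```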

  9. Speech and Language Delay

    Science.gov (United States)

    What is a speech and language delay? A speech and language delay ...

  10. LinguaTag: an Emotional Speech Analysis Application

    OpenAIRE

    Cullen, Charlie; Vaughan, Brian; Kousidis, Spyros

    2008-01-01

    The analysis of speech, particularly for emotional content, is an open area of current research. Ongoing work has developed an emotional speech corpus for analysis, and defined a vowel stress method by which this analysis may be performed. This paper documents the development of LinguaTag, an open source speech analysis software application which implements this vowel stress emotional speech analysis method developed as part of research into the acoustic and linguistic correlates of emotional...

  11. Ultrasound applicability in Speech Language Pathology and Audiology

    OpenAIRE

    Barberena,Luciana da Silva; Brasil,Brunah de Castro; Melo,Roberta Michelon; Mezzomo,Carolina Lisbôa; Mota,Helena Bolli; Keske-Soares,Márcia

    2014-01-01

    PURPOSE: To present recent studies that used ultrasound in the fields of Speech Language Pathology and Audiology, which show possibilities for the applicability of this technique in different subareas. RESEARCH STRATEGY: A bibliographic research was carried out in the PubMed database, using the keywords "ultrasonic," "speech," "phonetics," "Speech, Language and Hearing Sciences," "voice," "deglutition," and "myofunctional therapy," comprising some areas of Speech Language Pathology and...

  12. Ultrasound applicability in Speech Language Pathology and Audiology

    OpenAIRE

    Barberena, Luciana da Silva; Brasil, Brunah de Castro; Melo, Roberta Michelon; Mezzomo, Carolina Lisbôa; Mota, Helena Bolli; Keske-Soares, Márcia

    2014-01-01

    PURPOSE: To present recent studies that used ultrasound in the fields of Speech Language Pathology and Audiology, which show possibilities for the applicability of this technique in different subareas. RESEARCH STRATEGY: A bibliographic research was carried out in the PubMed database, using the keywords "ultrasonic," "speech," "phonetics," "Speech, Language and Hearing Sciences," "voice," "deglutition," and "myofunctional therapy," comprising some areas of Speech Language Patholog...

  13. Recent advances in nonlinear speech processing

    CERN Document Server

    Faundez-Zanuy, Marcos; Esposito, Antonietta; Cordasco, Gennaro; Drugman, Thomas; Solé-Casals, Jordi; Morabito, Francesco

    2016-01-01

    This book presents recent advances in nonlinear speech processing that go beyond nonlinear techniques alone, exploiting heuristic and psychological models of human interaction in order to succeed in the implementation of socially believable VUIs and applications for human health and psychological support. The book takes into account the multifunctional role of speech and what is “outside of the box” (see Björn Schuller’s foreword). To this aim, the book is organized in 6 sections, each collecting a small number of short chapters reporting advances “inside” and “outside” themes related to nonlinear speech research. The themes emphasize theoretical and practical issues for modelling socially believable speech interfaces, ranging from efforts to capture the nature of sound changes in linguistic contexts and the timing nature of speech; labors to identify and detect speech features that help in the diagnosis of psychological and neuronal disease, attempts to improve the effectiveness and performa...

  14. 4 July 2013- European Commission DG CONNECT Director-General R. Madelin, signing the guest book with CERN Director-General R. Heuer and visiting CMS experimental area with Collaboration Deputy Spokesperson J. Varela.

    CERN Multimedia

    Maximilien Brice

    2013-01-01

    4 July 2013- European Commission DG CONNECT Director-General R. Madelin, signing the guest book with CERN Director-General R. Heuer and visiting CMS experimental area with Collaboration Deputy Spokesperson J. Varela.

  15. 12th September 2011 - Undersecretary for Foreign Affairs F. Schmidt Ariztía in the ATLAS visitor centre with ATLAS Collaboration Former Spokesperson P. Jenni, Adviser for Chile J. Salicio Diez and Senior Physicist J. Mikenberg.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    12th September 2011 - Undersecretary for Foreign Affairs F. Schmidt Ariztía in the ATLAS visitor centre with ATLAS Collaboration Former Spokesperson P. Jenni, Adviser for Chile J. Salicio Diez and Senior Physicist J. Mikenberg.

  16. 7 February 2012 - Signature of the Memorandum of Understanding between Suranaree University of Technology represented by Rector P. Suebka and the ALICE Collaboration represented by Collaboration Spokesperson P. Giubellino; Adviser E. Tsesmelis is present.

    CERN Multimedia

    Maximilien Brice

    2012-01-01

    7 February 2012 - Signature of the Memorandum of Understanding between Suranaree University of Technology represented by Rector P. Suebka and the ALICE Collaboration represented by Collaboration Spokesperson P. Giubellino; Adviser E. Tsesmelis is present.

  17. 19 September 2012 - Indonesian Members of Parliament visiting the CMS control room and experimental cavern at Point 5 with Former Deputy Spokesperson A. De Roeck and International Relations Adviser E. Tsesmelis.

    CERN Multimedia

    Jean-Claude Gadmer

    2012-01-01

    19 September 2012 - Indonesian Members of Parliament visiting the CMS control room and experimental cavern at Point 5 with Former Deputy Spokesperson A. De Roeck and International Relations Adviser E. Tsesmelis.

  18. 27 September 2013 -Lithuanian Minister of Culture Š. Birutis in the LHC tunnel with International Relations Adviser T. Kurtyka and visiting CMS experimental area with Deputy Spokesperson T. Camporesi. Also present: V. Rapsevicius, CMS Collaboration.

    CERN Multimedia

    Laurent Egli

    2013-01-01

    27 September 2013 -Lithuanian Minister of Culture Š. Birutis in the LHC tunnel with International Relations Adviser T. Kurtyka and visiting CMS experimental area with Deputy Spokesperson T. Camporesi. Also present: V. Rapsevicius, CMS Collaboration.

  19. 4th February 2011- Polish Ambassador to the United Nations Office R. A. Henczel visiting CMS control room and underground experimental area with his daughter, guided by Collaboration Spokesperson G. Tonelli.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    4th February 2011- Polish Ambassador to the United Nations Office R. A. Henczel visiting CMS control room and underground experimental area with his daughter, guided by Collaboration Spokesperson G. Tonelli.

  20. 9 August 2011 - United Nations High Commissioner for Human Rights N. Pillay signing the guest book with Head of International Relations F. Pauss; in the ATLAS visitor centre with Collaboration Former Spokesperson P. Jenni.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    9 August 2011 - United Nations High Commissioner for Human Rights N. Pillay signing the guest book with Head of International Relations F. Pauss; in the ATLAS visitor centre with Collaboration Former Spokesperson P. Jenni.

  1. National Science Foundation Assistant Director for Mathematics and Physical Sciences Tony Chan (USA) visiting LHCb experiment on 23rd May 2007 with Spokesperson T. Nakada, Advisor to CERN Director-General J. Ellis and I. Belyaev of Syracuse

    CERN Multimedia

    Maximilien Brice

    2007-01-01

    National Science Foundation Assistant Director for Mathematics and Physical Sciences Tony Chan (USA) visiting LHCb experiment on 23rd May 2007 with Spokesperson T. Nakada, Advisor to CERN Director-General J. Ellis and I. Belyaev of Syracuse

  2. 09 September 2013 - Japanese Members of Internal Affairs and Communications Committee House of Representatives visiting the ATLAS experimental cavern with ATLAS Spokesperson D. Charlton. T. Kondo and K. Yoshida present.

    CERN Multimedia

    Jean-Claude Gadmer

    2013-01-01

    09 September 2013 - Japanese Members of Internal Affairs and Communications Committee House of Representatives visiting the ATLAS experimental cavern with ATLAS Spokesperson D. Charlton. T. Kondo and K. Yoshida present.

  3. Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) representative H. Ikukawa visiting ATLAS experiment with Collaboration Spokesperson P. Jenni, KEK representative T. Kondo and Advisor to CERN DG J. Ellis on 15 May 2007.

    CERN Multimedia

    Maximilien Brice

    2007-01-01

    Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) representative H. Ikukawa visiting ATLAS experiment with Collaboration Spokesperson P. Jenni, KEK representative T. Kondo and Advisor to CERN DG J. Ellis on 15 May 2007.

  4. 18th May 2011 - Chinese State Administration of Foreign Experts Affairs (SAFEA) Deputy Director-General M. LU (State Council of China) in the ATLAS visitors centre with Collaboration Deputy Spokesperson A. Lankford and Collaboration member Z. Ren.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    18th May 2011 - Chinese State Administration of Foreign Experts Affairs (SAFEA) Deputy Director-General M. LU (State Council of China) in the ATLAS visitors centre with Collaboration Deputy Spokesperson A. Lankford and Collaboration member Z. Ren.

  5. 11th October 2011 - Chinese University of Science and Technology President J. Hou signing the guest book with Adviser R. Voss; in the ATLAS visitor centre with Former Collaboration Spokesperson P. Jenni and Members of the ATLAS Chinese Collaboration.

    CERN Multimedia

    2011-01-01

    11th October 2011 - Chinese University of Science and Technology President J. Hou signing the guest book with Adviser R. Voss; in the ATLAS visitor centre with Former Collaboration Spokesperson P. Jenni and Members of the ATLAS Chinese Collaboration.

  6. 5th August 2008 - British Secretary of State for Innovation, Universities and Skills J. Denham MP visiting LHCb experimental area with Collaboration Spokesperson A. Golutvin and users T. Bowcock and U. Egede.

    CERN Multimedia

    Claudia Marcelloni

    2008-01-01

    5th August 2008 - British Secretary of State for Innovation, Universities and Skills J. Denham MP visiting LHCb experimental area with Collaboration Spokesperson A. Golutvin and users T. Bowcock and U. Egede.

  7. 15 April 2008 - British Minister for Science and Innovation I. Pearson MP visiting the ATLAS cavern with Adviser to CERN Director-General J. Ellis, Ambassador to Switzerland S. Featherstone and Collaboration Spokesperson P. Jenni

    CERN Multimedia

    Claudia Marcelloni

    2008-01-01

    15 April 2008 - British Minister for Science and Innovation I. Pearson MP visiting the ATLAS cavern with Adviser to CERN Director-General J. Ellis, Ambassador to Switzerland S. Featherstone and Collaboration Spokesperson P. Jenni

  8. 11 March 2010 - Ambassador of Canada to Switzerland and to Liechtenstein R. Santi in the ATLAS visitor centre with Collaboration Deputy Spokesperson A. Lankford and signing the guest book with CERN Director-General R. Heuer.

    CERN Multimedia

    Maximilien Brice

    2010-01-01

    11 March 2010 - Ambassador of Canada to Switzerland and to Liechtenstein R. Santi in the ATLAS visitor centre with Collaboration Deputy Spokesperson A. Lankford and signing the guest book with CERN Director-General R. Heuer.

  9. 14 December 2011 - Czech Republic Delegation to CERN Council and Finance Committees visiting ATLAS experimental area, LHC tunnel and ATLAS visitor centre with Former Collaboration Spokesperson P. Jenni, accompanied by Physicist R. Leitner and Swiss student A. Lister.

    CERN Multimedia

    Estelle Spirig

    2011-01-01

    14 December 2011 - Czech Republic Delegation to CERN Council and Finance Committees visiting ATLAS experimental area, LHC tunnel and ATLAS visitor centre with Former Collaboration Spokesperson P. Jenni, accompanied by Physicist R. Leitner and Swiss student A. Lister.

  10. 12 April 2013 - The British Royal Academy of Engineering visiting the LHC superconducting magnet test hall with R. Veness and the ATLAS experimental cavern with Collaboration Spokesperson D. Charlton.

    CERN Multimedia

    Jean-Claude Gadmer

    2013-01-01

    12 April 2013 - The British Royal Academy of Engineering visiting the LHC superconducting magnet test hall with R. Veness and the ATLAS experimental cavern with Collaboration Spokesperson D. Charlton.

  11. 29 January 2013 - Japanese Toshiba Corporation Executive Officer and Corporate Senior Vice President O. Maekawa in the ATLAS visitor centre with representatives of the CERN-Japanese community led by Former Collaboration Spokesperson P. Jenni.

    CERN Multimedia

    Maximilien Brice

    2013-01-01

    29 January 2013 - Japanese Toshiba Corporation Executive Officer and Corporate Senior Vice President O. Maekawa in the ATLAS visitor centre with representatives of the CERN-Japanese community led by Former Collaboration Spokesperson P. Jenni.

  12. 24 January 2011 - President of the Deutsche Forschungsgemeinschaft M. Kleiner in the ATLAS visitor centre and underground experimental area with Former Spokesperson P. Jenni, accompanied by P. Mättig and Adviser R. Voss.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    24 January 2011 - President of the Deutsche Forschungsgemeinschaft M. Kleiner in the ATLAS visitor centre and underground experimental area with Former Spokesperson P. Jenni, accompanied by P. Mättig and Adviser R. Voss.

  13. Observations on the Tendencies of Speech Errors (IV)

    OpenAIRE

    伊藤, 克敏; Ito, Katsutoshi

    2007-01-01

    This is the fourth in a series (1988, 1992, 1999) of my research on the tendencies of speech errors committed by adults. Collected speech errors were analyzed on phonological, morphological, syntactic and semantic levels. Similarities and differences between adult and child speech errors were discussed. It was pointed out that the typology of speech errors can be established by comparative study of adult speech errors, developing child language, aphasic speech and speech of senile dementia.

  14. Speech intelligibility of laryngectomized patients who use different types of vocal communication

    OpenAIRE

    Šehović Ivana; Petrović-Lazić Mirjana

    2016-01-01

    Modern methods of speech rehabilitation after a total laryngectomy have achieved great success by giving patients the possibility of establishing intelligible and functional speech after an adequate rehabilitation treatment. The aim of this paper was to examine the speech intelligibility of laryngectomized patients who use different types of vocal communication: esophageal speech, speech with a tracheoesophageal prosthesis, and speech with an electronic laryngeal prosthesis. The research was conduct...

  15. Utility of TMS to understand the neurobiology of speech

    Directory of Open Access Journals (Sweden)

    Takenobu eMurakami

    2013-07-01

    Full Text Available According to a traditional view, speech perception and production are processed largely separately in sensory and motor brain areas. Recent psycholinguistic and neuroimaging studies provide novel evidence that the sensory and motor systems dynamically interact in speech processing, by demonstrating that speech perception and imitation share regional brain activations. However, the exact nature and mechanisms of these sensorimotor interactions are not completely understood yet. Transcranial magnetic stimulation (TMS) has often been used in the cognitive neurosciences, including speech research, as a complementary technique to behavioral and neuroimaging studies. Here we provide an up-to-date review focusing on TMS studies that explored speech perception and imitation. Single-pulse TMS of the primary motor cortex (M1) demonstrated a speech-specific and somatotopically specific increase of excitability of the M1 lip area during speech perception (listening to speech or lip reading). A paired-coil TMS approach showed increases in effective connectivity from brain regions that are involved in speech processing to the M1 lip area when listening to speech. TMS in virtual lesion mode applied to speech processing areas modulated performance of phonological recognition and imitation of perceived speech. In summary, TMS is an innovative tool to investigate processing of speech perception and imitation. TMS studies have provided strong evidence that the sensory system is critically involved in mapping sensory input onto motor output and that the motor system plays an important role in speech perception.

  16. Hypnosis and the Reduction of Speech Anxiety.

    Science.gov (United States)

    Barker, Larry L.; And Others

    The purposes of this paper are (1) to review the background and nature of hypnosis, (2) to synthesize research on hypnosis related to speech communication, and (3) to delineate and compare two potential techniques for reducing speech anxiety--hypnosis and systematic desensitization. Hypnosis has been defined as a mental state characterised by…

  17. Pulmonic Ingressive Speech in Shetland English

    Science.gov (United States)

    Sundkvist, Peter

    2012-01-01

    This paper presents a study of pulmonic ingressive speech, a severely understudied phenomenon within varieties of English. While ingressive speech has been reported for several parts of the British Isles, New England, and eastern Canada, thus far Newfoundland appears to be the only locality where researchers have managed to provide substantial…

  18. Preschool speech intelligibility and vocabulary skills predict long-term speech and language outcomes following cochlear implantation in early childhood.

    Science.gov (United States)

    Castellanos, Irina; Kronenberger, William G; Beer, Jessica; Henning, Shirley C; Colson, Bethany G; Pisoni, David B

    2014-07-01

    Speech and language measures during grade school predict adolescent speech-language outcomes in children who receive cochlear implants (CIs), but no research has examined whether speech and language functioning at even younger ages is predictive of long-term outcomes in this population. The purpose of this study was to examine whether early preschool measures of speech and language performance predict speech-language functioning in long-term users of CIs. Early measures of speech intelligibility and receptive vocabulary (obtained during preschool ages of 3-6 years) in a sample of 35 prelingually deaf, early-implanted children predicted speech perception, language, and verbal working memory skills up to 18 years later. Age of onset of deafness and age at implantation added additional variance to preschool speech intelligibility in predicting some long-term outcome scores, but the relationship between preschool speech-language skills and later speech-language outcomes was not significantly attenuated by the addition of these hearing history variables. These findings suggest that speech and language development during the preschool years is predictive of long-term speech and language functioning in early-implanted, prelingually deaf children. As a result, measures of speech-language functioning at preschool ages can be used to identify and adjust interventions for very young CI users who may be at long-term risk for suboptimal speech and language outcomes.

  19. 18 December 2012 -Portuguese President of FCT M. Seabra visiting the Computing Centre with IT Department Head F. Hemmer, ATLAS experimental area with Collaboration Spokesperson F. Gianotti and A. Henriques Correia, in the LHC tunnel at Point 2 and CMS experimental area with Deputy Spokesperson J. Varela, signing an administrative agreement with Director-General R. Heuer; LIP President J. M. Gago and Delegate to CERN Council G. Barreia present.

    CERN Multimedia

    Samuel Morier-Genoud

    2012-01-01

    18 December 2012 -Portuguese President of FCT M. Seabra visiting the Computing Centre with IT Department Head F. Hemmer, ATLAS experimental area with Collaboration Spokesperson F. Gianotti and A. Henriques Correia, in the LHC tunnel at Point 2 and CMS experimental area with Deputy Spokesperson J. Varela, signing an administrative agreement with Director-General R. Heuer; LIP President J. M. Gago and Delegate to CERN Council G. Barreia present.

  20. Research Paper: Production of a Protocol on Early Intervention for Speech and Language Delays in Early Childhood: A Novice Experience in Iran

    Directory of Open Access Journals (Sweden)

    Roshanak Vameghi

    2016-01-01

    Results The results of this study are presented as 7 intervention packages, covering the following domains of disorders: prelingual and lingual speech and language, hearing impairment, speech sound, dysphagia, stuttering, and dysarthria.  Conclusion Most studies have confirmed the effectiveness of and need for early interventions for children with speech and language impairment. However, most do not explain the details of these interventions. Before the present study, no systematic and evidence-based protocol existed for early intervention in childhood speech and language impairments in Iran; and due to language differences, as well as possible differences in the speech and language developmental process of children of different communities, making direct use of non-Persian references was not possible or effective. Thus, there was a clear demand for the production of such a protocol.

  1. Oral Articulatory Control in Childhood Apraxia of Speech

    Science.gov (United States)

    Grigos, Maria I.; Moss, Aviva; Lu, Ying

    2015-01-01

    Purpose: The purpose of this research was to examine spatial and temporal aspects of articulatory control in children with childhood apraxia of speech (CAS), children with speech delay characterized by an articulation/phonological impairment (SD), and controls with typical development (TD) during speech tasks that increased in word length. Method:…

  2. Teaching Speech Acts

    Directory of Open Access Journals (Sweden)

    Teaching Speech Acts

    2007-01-01

    Full Text Available In this paper I argue that pragmatic ability must become part of what we teach in the classroom if we are to realize the goals of communicative competence for our students. I review the research on pragmatics, especially those articles that point to the effectiveness of teaching pragmatics in an explicit manner, and those that posit methods for teaching. I also note two areas of scholarship that address classroom needs—the use of authentic data and appropriate assessment tools. The essay concludes with a summary of my own experience teaching speech acts in an advanced-level Portuguese class.

  3. Particularities of Speech Readiness for Schooling in Pre-School Children Having General Speech Underdevelopment: A Social and Pedagogical Aspect

    Science.gov (United States)

    Emelyanova, Irina A.; Borisova, Elena A.; Shapovalova, Olga E.; Karynbaeva, Olga V.; Vorotilkina, Irina M.

    2018-01-01

    The relevance of the research stems from the necessity of creating pedagogical conditions for the correction and development of speech in children with general speech underdevelopment. Such children characteristically have difficulty generating a coherent utterance, which prevents them from forming sufficient speech readiness for schooling, as well…

  4. The Influence of Direct and Indirect Speech on Source Memory

    Directory of Open Access Journals (Sweden)

    Anita Eerland

    2018-02-01

    Full Text Available People perceive the same situation described in direct speech (e.g., John said, “I like the food at this restaurant”) as more vivid and perceptually engaging than described in indirect speech (e.g., John said that he likes the food at the restaurant). So, if direct speech enhances the perception of vividness relative to indirect speech, what are the effects of using indirect speech? In four experiments, we examined whether the use of direct and indirect speech influences the comprehender’s memory for the identity of the speaker. Participants read a direct or an indirect speech version of a story and then addressed statements to one of the four protagonists of the story in a memory task. We found better source memory at the level of protagonist gender after indirect than direct speech (Exp. 1–3). When the story was rewritten to make the protagonists more distinctive, we also found an effect of speech type on source memory at the level of the individual, with better memory after indirect than direct speech (Exp. 3–4). Memory for the content of the story, however, was not influenced by speech type (Exp. 4). While previous research showed that direct speech may enhance memory for how something was said, we conclude that indirect speech enhances memory for who said what.

  5. Speech neglect: A strange educational blind spot

    Science.gov (United States)

    Harris, Katherine Safford

    2005-09-01

    Speaking is universally acknowledged as an important human talent, yet as a topic of educated common knowledge, it is peculiarly neglected. Partly, this is a consequence of the relatively recent growth of research on speech perception, production, and development, but also a function of the way that information is sliced up by undergraduate colleges. Although the basic acoustic mechanism of vowel production was known to Helmholtz, the ability to view speech production as a physiological event is evolving even now with such techniques as fMRI. Intensive research on speech perception emerged only in the early 1930s as Fletcher and the engineers at Bell Telephone Laboratories developed the transmission of speech over telephone lines. The study of speech development was revolutionized by the papers of Eimas and his colleagues on speech perception in infants in the 1970s. Dissemination of knowledge in these fields is the responsibility of no single academic discipline. It forms a center for two departments, Linguistics, and Speech and Hearing, but in the former, there is a heavy emphasis on other aspects of language than speech and, in the latter, a focus on clinical practice. For psychologists, it is a rather minor component of a very diverse assembly of topics. I will focus on these three fields in proposing possible remedies.

  6. Speech and Communication Disorders

    Science.gov (United States)

    ... to being completely unable to speak or understand speech. Causes include Hearing disorders and deafness Voice problems, ... or those caused by cleft lip or palate Speech problems like stuttering Developmental disabilities Learning disorders Autism ...

  7. Recognizing intentions in infant-directed speech: evidence for universals.

    Science.gov (United States)

    Bryant, Gregory A; Barrett, H Clark

    2007-08-01

    In all languages studied to date, distinct prosodic contours characterize different intention categories of infant-directed (ID) speech. This vocal behavior likely exists universally as a species-typical trait, but little research has examined whether listeners can accurately recognize intentions in ID speech using only vocal cues, without access to semantic information. We recorded native-English-speaking mothers producing four intention categories of utterances (prohibition, approval, comfort, and attention) as both ID and adult-directed (AD) speech, and we then presented the utterances to Shuar adults (South American hunter-horticulturalists). Shuar subjects were able to reliably distinguish ID from AD speech and were able to reliably recognize the intention categories in both types of speech, although performance was significantly better with ID speech. This is the first demonstration that adult listeners in an indigenous, nonindustrialized, and nonliterate culture can accurately infer intentions from both ID speech and AD speech in a language they do not speak.

  8. Prosodic Contrasts in Ironic Speech

    Science.gov (United States)

    Bryant, Gregory A.

    2010-01-01

    Prosodic features in spontaneous speech help disambiguate implied meaning not explicit in linguistic surface structure, but little research has examined how these signals manifest themselves in real conversations. Spontaneously produced verbal irony utterances generated between familiar speakers in conversational dyads were acoustically analyzed…

  9. Free Speech Yearbook 1978.

    Science.gov (United States)

    Phifer, Gregg, Ed.

    The 17 articles in this collection deal with theoretical and practical freedom of speech issues. The topics include: freedom of speech in Marquette Park, Illinois; Nazis in Skokie, Illinois; freedom of expression in the Confederate States of America; Robert M. LaFollette's arguments for free speech and the rights of Congress; the United States…

  10. Speech perception as an active cognitive process

    Directory of Open Access Journals (Sweden)

    Shannon eHeald

    2014-03-01

    Full Text Available One view of speech perception is that acoustic signals are transformed into representations for pattern matching to determine linguistic structure. This process can be taken as a statistical pattern-matching problem, assuming relatively stable linguistic categories are characterized by neural representations related to auditory properties of speech that can be compared to speech input. This kind of pattern matching can be termed a passive process which implies rigidity of processing with few demands on cognitive processing. An alternative view is that speech recognition, even in early stages, is an active process in which speech analysis is attentionally guided. Note that this does not mean consciously guided but that information-contingent changes in early auditory encoding can occur as a function of context and experience. Active processing assumes that attention, plasticity, and listening goals are important in considering how listeners cope with adverse circumstances that impair hearing through masking noise in the environment or hearing loss. Although theories of speech perception have begun to incorporate some active processing, they seldom treat early speech encoding as plastic and attentionally guided. Recent research has suggested that speech perception is the product of both feedforward and feedback interactions between a number of brain regions that include descending projections perhaps as far downstream as the cochlea. It is important to understand how the ambiguity of the speech signal and constraints of context dynamically determine cognitive resources recruited during perception including focused attention, learning, and working memory. Theories of speech perception need to go beyond the current corticocentric approach in order to account for the intrinsic dynamics of the auditory encoding of speech. In doing so, this may provide new insights into ways in which hearing disorders and loss may be treated either through augmentation or

  11. Speech and audio processing for coding, enhancement and recognition

    CERN Document Server

    Togneri, Roberto; Narasimha, Madihally

    2015-01-01

    This book describes the basic principles underlying the generation, coding, transmission and enhancement of speech and audio signals, including advanced statistical and machine learning techniques for speech and speaker recognition with an overview of the key innovations in these areas. Key research undertaken in speech coding, speech enhancement, speech recognition, emotion recognition and speaker diarization are also presented, along with recent advances and new paradigms in these areas. · Offers readers a single-source reference on the significant applications of speech and audio processing to speech coding, speech enhancement and speech/speaker recognition. Enables readers involved in algorithm development and implementation issues for speech coding to understand the historical development and future challenges in speech coding research; · Discusses speech coding methods yielding bit-streams that are multi-rate and scalable for Voice-over-IP (VoIP) Networks; …

  12. Using Others' Words: Conversational Use of Reported Speech by Individuals with Aphasia and Their Communication Partners.

    Science.gov (United States)

    Hengst, Julie A.; Frame, Simone R.; Neuman-Stritzel, Tiffany; Gannaway, Rachel

    2005-01-01

    Reported speech, wherein one quotes or paraphrases the speech of another, has been studied extensively as a set of linguistic and discourse practices. Researchers agree that reported speech is pervasive, found across languages, and used in diverse contexts. However, to date, there have been no studies of the use of reported speech among…

  13. Freedom of Speech: A Clear and Present Need to Teach. ERIC Report.

    Science.gov (United States)

    Boileau, Don M.

    1983-01-01

    Presents annotations of 21 documents in the ERIC system on the following subjects: (1) theory of freedom of speech; (2) theorists; (3) research on freedom of speech; (4) broadcasting and freedom of speech; and (5) international questions of freedom of speech. (PD)

  14. Speech emotion recognition methods: A literature review

    Science.gov (United States)

    Basharirad, Babak; Moradhaseli, Mohammadreza

    2017-10-01

    Recently, attention to research on emotional speech signals in human-machine interfaces has grown due to the availability of high computation capability. Many systems have been proposed in the literature to identify the emotional state through speech. Selection of suitable feature sets, design of proper classification methods, and preparation of an appropriate dataset are the key issues in speech emotion recognition systems. This paper critically analyzes the currently available approaches to speech emotion recognition based on three evaluation parameters (feature set, classification of features, and accuracy). In addition, this paper also evaluates the performance and limitations of available methods. Furthermore, it highlights promising directions for the improvement of speech emotion recognition systems.
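
    The typical system structure the review describes (a fixed-length acoustic feature vector per utterance followed by a classifier) can be sketched as follows with scikit-learn. The features and emotion labels are random placeholders rather than a real corpus, and the SVM is just one of the many classifier choices such reviews survey.

```python
# Sketch: utterance-level feature vectors + a classifier for speech emotion recognition.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 40))                                  # e.g., MFCC/pitch/energy statistics per utterance
y = rng.choice(["neutral", "happy", "sad", "angry"], size=400)  # placeholder emotion labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```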

  15. Personality in speech assessment and automatic classification

    CERN Document Server

    Polzehl, Tim

    2015-01-01

    This work combines interdisciplinary knowledge and experience from research fields of psychology, linguistics, audio-processing, machine learning, and computer science. The work systematically explores a novel research topic devoted to automated modeling of personality expression from speech. For this aim, it introduces a novel personality assessment questionnaire and presents the results of extensive labeling sessions to annotate the speech data with personality assessments. It provides estimates of the Big 5 personality traits, i.e. openness, conscientiousness, extroversion, agreeableness, and neuroticism. Based on a database built on the questionnaire, the book presents models to tell apart different personality types or classes from speech automatically.

  16. Psychophysics of Complex Auditory and Speech Stimuli

    National Research Council Canada - National Science Library

    Pastore, Richard

    1996-01-01

    The supported research provides a careful examination of the many different interrelated factors, processes, and constructs important to the perception by humans of complex acoustic signals, including speech and music...

  17. Spectral integration in speech and non-speech sounds

    Science.gov (United States)

    Jacewicz, Ewa

    2005-04-01

    Spectral integration (or formant averaging) was proposed in vowel perception research to account for the observation that a reduction of the intensity of one of two closely spaced formants (as in /u/) produced a predictable shift in vowel quality [Delattre et al., Word 8, 195-210 (1952)]. A related observation was reported in psychoacoustics, indicating that when the components of a two-tone periodic complex differ in amplitude and frequency, its perceived pitch is shifted toward that of the more intense tone [Helmholtz, App. XIV (1875/1948)]. Subsequent research in both fields focused on the frequency interval that separates these two spectral components, in an attempt to determine the size of the bandwidth for spectral integration to occur. This talk will review the accumulated evidence for and against spectral integration within the hypothesized limit of 3.5 Bark for static and dynamic signals in speech perception and psychoacoustics. Based on similarities in the processing of speech and non-speech sounds, it is suggested that spectral integration may reflect a general property of the auditory system. A larger frequency bandwidth, possibly close to 3.5 Bark, may be utilized in integrating acoustic information, including speech, complex signals, or sound quality of a violin.
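
    As a purely illustrative companion to the center-of-gravity account sketched above, the snippet below computes an amplitude-weighted spectral centroid for a two-component complex and the components' separation on the Bark scale, using Traunmüller's 1990 approximation. The frequencies and amplitudes are invented for illustration and are not taken from the cited studies.

        import numpy as np

        def hz_to_bark(f_hz):
            """Traunmueller (1990) approximation of the Bark scale."""
            return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

        # Two spectral components (e.g., closely spaced formants); illustrative values only.
        freqs = np.array([300.0, 700.0])   # Hz
        amps = np.array([1.0, 0.5])        # linear amplitudes

        # Amplitude-weighted centroid: the integrated percept shifts toward
        # the more intense component, as in the formant-averaging account.
        centroid = np.sum(freqs * amps) / np.sum(amps)
        separation = abs(hz_to_bark(freqs[1]) - hz_to_bark(freqs[0]))

        print(f"weighted centroid: {centroid:.1f} Hz")
        print(f"component separation: {separation:.2f} Bark "
              f"({'within' if separation <= 3.5 else 'beyond'} the 3.5-Bark limit)")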

  18. Automatic speech recognition used for evaluation of text-to-speech systems

    Czech Academy of Sciences Publication Activity Database

    Vích, Robert; Nouza, J.; Vondra, Martin

    -, č. 5042 (2008), s. 136-148 ISSN 0302-9743 R&D Projects: GA AV ČR 1ET301710509; GA AV ČR 1QS108040569 Institutional research plan: CEZ:AV0Z20670512 Keywords : speech recognition * speech processing Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering

  19. Speech in spinocerebellar ataxia.

    Science.gov (United States)

    Schalling, Ellika; Hartelius, Lena

    2013-12-01

    Spinocerebellar ataxias (SCAs) are a heterogeneous group of autosomal dominant cerebellar ataxias clinically characterized by progressive ataxia, dysarthria and a range of other concomitant neurological symptoms. Only a few studies include detailed characterization of speech symptoms in SCA. Speech symptoms in SCA resemble ataxic dysarthria but symptoms related to phonation may be more prominent. One study to date has shown an association between differences in speech and voice symptoms related to genotype. More studies of speech and voice phenotypes are motivated, to possibly aid in clinical diagnosis. In addition, instrumental speech analysis has been demonstrated to be a reliable measure that may be used to monitor disease progression or therapy outcomes in possible future pharmacological treatments. Intervention by speech and language pathologists should go beyond assessment. Clinical guidelines for management of speech, communication and swallowing need to be developed for individuals with progressive cerebellar ataxia. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. On-device mobile speech recognition

    OpenAIRE

    Mustafa, MK

    2016-01-01

    Despite many years of research, Speech Recognition remains an active area of research in Artificial Intelligence. Currently, the most common commercial application of this technology on mobile devices uses a wireless client – server approach to meet the computational and memory demands of the speech recognition process. Unfortunately, such an approach is unlikely to remain viable when fully applied over the approximately 7.22 Billion mobile phones currently in circulation. In this thesis we p...

  1. The development of speech production in children with cleft palate

    DEFF Research Database (Denmark)

    Willadsen, Elisabeth; Chapman, Kathy

    2012-01-01

    The purpose of this chapter is to provide an overview of speech development of children with cleft palate +/- cleft lip. The chapter will begin with a discussion of the impact of clefting on speech. Next, we will provide a brief description of those factors impacting speech development...... for this population of children. Finally, research examining various aspects of speech development of infants and young children with cleft palate (birth to age five) will be reviewed. This final section will be organized by typical stages of speech sound development (e.g., prespeech, the early word stage...

  2. Linguistic contributions to speech-on-speech masking for native and non-native listeners: Language familiarity and semantic content

    Science.gov (United States)

    Brouwer, Susanne; Van Engen, Kristin J.; Calandruccio, Lauren; Bradlow, Ann R.

    2012-01-01

    This study examined whether speech-on-speech masking is sensitive to variation in the degree of similarity between the target and the masker speech. Three experiments investigated whether speech-in-speech recognition varies across different background speech languages (English vs Dutch) for both English and Dutch targets, as well as across variation in the semantic content of the background speech (meaningful vs semantically anomalous sentences), and across variation in listener status vis-à-vis the target and masker languages (native, non-native, or unfamiliar). The results showed that the more similar the target speech is to the masker speech (e.g., same vs different language, same vs different levels of semantic content), the greater the interference on speech recognition accuracy. Moreover, the listener’s knowledge of the target and the background language modulates the size of the release from masking. These factors had an especially strong effect on masking effectiveness in highly unfavorable listening conditions. Overall this research provided evidence that the degree of target-masker similarity plays a significant role in speech-in-speech recognition. The results also give insight into how listeners assign their resources differently depending on whether they are listening to their first or second language. PMID:22352516

  3. The Functional Connectome of Speech Control.

    Directory of Open Access Journals (Sweden)

    Stefan Fuertinger

    2015-07-01

    Full Text Available In the past few years, several studies have been directed to understanding the complexity of functional interactions between different brain regions during various human behaviors. Among these, neuroimaging research installed the notion that speech and language require an orchestration of brain regions for comprehension, planning, and integration of a heard sound with a spoken word. However, these studies have been largely limited to mapping the neural correlates of separate speech elements and examining distinct cortical or subcortical circuits involved in different aspects of speech control. As a result, the complexity of the brain network machinery controlling speech and language remained largely unknown. Using graph theoretical analysis of functional MRI (fMRI) data in healthy subjects, we quantified the large-scale speech network topology by constructing functional brain networks of increasing hierarchy from the resting state to motor output of meaningless syllables to complex production of real-life speech as well as compared to non-speech-related sequential finger tapping and pure tone discrimination networks. We identified a segregated network of highly connected local neural communities (hubs) in the primary sensorimotor and parietal regions, which formed a commonly shared core hub network across the examined conditions, with the left area 4p playing an important role in speech network organization. These sensorimotor core hubs exhibited features of flexible hubs based on their participation in several functional domains across different networks and ability to adaptively switch long-range functional connectivity depending on task content, resulting in a distinct community structure of each examined network. Specifically, compared to other tasks, speech production was characterized by the formation of six distinct neural communities with specialized recruitment of the prefrontal cortex, insula, putamen, and thalamus, which collectively
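
    The graph-theoretical pipeline described above (build a functional network from regional time courses, then look for highly connected hubs and community structure) can be sketched in a few lines. The example below assumes a precomputed region-by-region functional connectivity matrix; the random matrix, threshold, and region count are placeholders, and networkx's greedy modularity routine stands in for whatever community-detection method the study actually used.

        import numpy as np
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        # Placeholder functional connectivity matrix (n_regions x n_regions),
        # e.g., pairwise correlations of regional fMRI time courses.
        rng = np.random.default_rng(1)
        n_regions = 90
        fc = np.abs(rng.normal(size=(n_regions, n_regions)))
        fc = (fc + fc.T) / 2.0
        np.fill_diagonal(fc, 0.0)

        # Keep only the strongest connections (arbitrary 10% density threshold).
        thresh = np.percentile(fc, 90)
        adj = np.where(fc >= thresh, fc, 0.0)
        G = nx.from_numpy_array(adj)

        # "Hubs": regions with the highest degree centrality.
        centrality = nx.degree_centrality(G)
        hubs = sorted(centrality, key=centrality.get, reverse=True)[:10]
        print("candidate hub regions:", hubs)

        # Community structure of the network.
        communities = greedy_modularity_communities(G)
        print("number of communities:", len(communities))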

  4. Closing speech

    NARCIS (Netherlands)

    Mengelers, J.H.J.

    2015-01-01

    HTSC Kick-Off Symposium. In order to meet today’s societal challenges, there is a strong demand for a breakthrough in technological and social innovation. HTSC is the university's response to the increasing demand from industry for fundamental research and development in the area of high-tech

  5. Pesquisa com os gêneros do discurso na sala de aula: resultados iniciais = Research on speech genres in the classroom: initial results

    Directory of Open Access Journals (Sweden)

    Rosângela Hammes Rodrigues

    2008-07-01

    Full Text Available This article presents the work developed by the research group Speech Genres: Pedagogical Practices and Genre Analysis in the area of didactic elaboration (DE) of genres, together with the general results of the research. Some of the studies involved the genres chronicle, letter to the editor and signed article, as well as genres of Brazilian popular music; another correlated genres with text production, correction and evaluation. The main research results are: the construction of procedural knowledge through the study and production of texts of a genre; the articulation of listening, reading, textual production and linguistic analysis practices; the need for reference knowledge about genres; the unfeasibility of learning all the characteristics of a given genre in a single DE, since each DE allows only some of them to be explored in depth; the DE of textual production guides the correction and evaluation of texts; genre integrates practices of reading, listening, textual production and linguistic analysis; and the difficulty of working with multimodal genres.

  6. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study.

    Directory of Open Access Journals (Sweden)

    Alena Galilee

    Full Text Available Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age.

  7. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study.

    Science.gov (United States)

    Galilee, Alena; Stefanidou, Chrysi; McCleery, Joseph P

    2017-01-01

    Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age.

  8. Research

    African Journals Online (AJOL)

    abp

    2012-05-30

    May 30, 2012 ... cleft care supporting teams, especially speech language pathologists, orthodontists, and audiologists. ... social workers, speech therapists, dental technologists, and medical students. .... autonomy of local professionals.

  9. Ultra low bit-rate speech coding

    CERN Document Server

    Ramasubramanian, V

    2015-01-01

    "Ultra Low Bit-Rate Speech Coding" focuses on the specialized topic of speech coding at very low bit-rates of 1 Kbits/sec and less, particularly at the lower ends of this range, down to 100 bps. The authors set forth the fundamental results and trends that form the basis for such ultra low bit-rates to be viable and provide a comprehensive overview of various techniques and systems in literature to date, with particular attention to their work in the paradigm of unit-selection based segment quantization. The book is for research students, academic faculty and researchers, and industry practitioners in the areas of speech processing and speech coding.

  10. THE MEANING OF THE PREVENTION WITH SPEECH THERAPY AS AN IMPORTANT FACTOR FOR THE PROPER DEVELOPMENT OF THE CHILDREN SPEECH

    Directory of Open Access Journals (Sweden)

    S. FILIPOVA

    1999-11-01

    Full Text Available The paper presents findings and results from completed research showing the importance of speech-therapy prevention for the proper development of children's speech. The research was carried out in Negotino and documents the most frequent speech deficiencies among children of preschool age.

  11. Ear, Hearing and Speech

    DEFF Research Database (Denmark)

    Poulsen, Torben

    2000-01-01

    An introduction is given to the anatomy and the function of the ear, basic psychoacoustic matters (hearing threshold, loudness, masking), the speech signal and speech intelligibility. The lecture note is written for the course: Fundamentals of Acoustics and Noise Control (51001).

  12. Principles of speech coding

    CERN Document Server

    Ogunfunmi, Tokunbo

    2010-01-01

    It is becoming increasingly apparent that all forms of communication, including voice, will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks. Offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the

  13. Speech disorder prevention

    Directory of Open Access Journals (Sweden)

    Miladis Fornaris-Méndez

    2017-04-01

    Full Text Available Language therapy has moved from a medical focus toward a preventive focus. However, difficulties are evident in developing this latter task, because more space is devoted to the correction of language disorders. Because speech disorders are the dysfunctions that appear most frequently, the preventive work developed to avoid their appearance acquires special importance. Speech education from an early age in childhood makes it easier to prevent the appearance of speech disorders in children. The present work aims to offer different activities for the prevention of speech disorders.

  14. Generating Expressive Speech for Storytelling Applications

    OpenAIRE

    Bailly, G.; Theune, Mariet; Meijs, Koen; Campbell, N.; Hamza, W.; Heylen, Dirk K.J.; Ordelman, Roeland J.F.; Hoge, H.; Jianhua, T.

    2006-01-01

    Work on expressive speech synthesis has long focused on the expression of basic emotions. In recent years, however, interest in other expressive styles has been increasing. The research presented in this paper aims at the generation of a storytelling speaking style, which is suitable for storytelling applications and more in general, for applications aimed at children. Based on an analysis of human storytellers' speech, we designed and implemented a set of prosodic rules for converting "neutr...

  15. CASRA+: A Colloquial Arabic Speech Recognition Application

    OpenAIRE

    Ramzi A. Haraty; Omar El Ariss

    2007-01-01

    The research proposed here is an Arabic speech recognition application, concentrating on the Lebanese dialect. The system starts by sampling the speech, i.e., transforming the sound from analog to digital, and then extracts the features by using Mel-Frequency Cepstral Coefficients (MFCC). The extracted features are then compared with the system's stored model; in this case the stored model chosen was a phoneme-based model. This reference model differs from the direc...
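
    The front end described here (digitize the speech, then extract MFCCs for comparison against a stored model) is a standard pipeline; the sketch below shows one conventional way to compute per-frame MFCC vectors with librosa. The file name and parameter values are placeholders, and the matching step against a phoneme-based model is only indicated, not implemented.

        import librosa
        import numpy as np

        # Load an utterance (placeholder path) and resample to 16 kHz.
        y, sr = librosa.load("utterance.wav", sr=16000)

        # Frame-wise MFCCs: 13 coefficients per ~25 ms frame with a 10 ms hop.
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                    n_fft=400, hop_length=160)
        print("MFCC matrix shape (coeffs x frames):", mfcc.shape)

        # A stored reference model would be compared against these frames here,
        # e.g., per-phoneme templates or statistical models (not implemented).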

  16. THE USE OF EXPRESSIVE SPEECH ACTS IN HANNAH MONTANA SESSION 1

    Directory of Open Access Journals (Sweden)

    Nur Vita Handayani

    2015-07-01

    Full Text Available This study aims to describe the kinds and forms of expressive speech acts in Hannah Montana Session 1. It uses a descriptive qualitative method. The research object was the expressive speech act. The data source was utterances which contain expressive speech acts in the film Hannah Montana Session 1. The researcher used the observation method and noting technique in collecting the data. In analyzing the data, a descriptive qualitative method was used. The research findings show that there are ten kinds of expressive speech acts found in Hannah Montana Session 1, namely expressing apology, expressing thanks, expressing sympathy, expressing attitudes, expressing greeting, expressing wishes, expressing joy, expressing pain, expressing likes, and expressing dislikes. The forms of expressive speech act are direct literal expressive speech act, direct non-literal expressive speech act, indirect literal expressive speech act, and indirect non-literal expressive speech act.

  17. 6 June 2008 - Chancellor F. Tomàs Vert, University of Valencia, visiting ATLAS control room and experimental area with Collaboration Spokesperson P. Jenni.

    CERN Multimedia

    Mona Schweizer

    2008-01-01

    6 June 2008 - Chancellor F. Tomàs Vert, University of Valencia, visiting ATLAS control room and experimental area with Collaboration Spokesperson P. Jenni. Other participants: Prof. Francisco José Botella, Director, Instituto de Fisica Corpuscular, University of València and CSIC Prof. José Peñarrocha, Dean, Faculty of Physics Prof. Antonio Ferrer, Instituto de Fisica Corpuscular, University of València and CSIC Prof. Antonio Pich, University of València, Member of IFIC (CSIC - Univ. València), Coordinator of CPAN, Spanish National Centre for Particle, Astroparticle and Nuclear Physics.

  18. Status Report on Speech Research

    Science.gov (United States)

    1992-06-01

  19. Status Report on Speech Research.

    Science.gov (United States)

    1987-09-01

  20. Collective speech acts

    NARCIS (Netherlands)

    Meijers, A.W.M.; Tsohatzidis, S.L.

    2007-01-01

    From its early development in the 1960s, speech act theory always had an individualistic orientation. It focused exclusively on speech acts performed by individual agents. Paradigmatic examples are ‘I promise that p’, ‘I order that p’, and ‘I declare that p’. There is a single speaker and a single

  1. Free Speech Yearbook 1980.

    Science.gov (United States)

    Kane, Peter E., Ed.

    The 11 articles in this collection deal with theoretical and practical freedom of speech issues. The topics covered are (1) the United States Supreme Court and communication theory; (2) truth, knowledge, and a democratic respect for diversity; (3) denial of freedom of speech in Jock Yablonski's campaign for the presidency of the United Mine…

  2. Illustrated Speech Anatomy.

    Science.gov (United States)

    Shearer, William M.

    Written for students in the fields of speech correction and audiology, the text deals with the following: structures involved in respiration; the skeleton and the processes of inhalation and exhalation; phonation and pitch, the larynx, and esophageal speech; muscles involved in articulation; muscles involved in resonance; and the anatomy of the…

  3. Free Speech. No. 38.

    Science.gov (United States)

    Kane, Peter E., Ed.

    This issue of "Free Speech" contains the following articles: "Daniel Schorr Relieved of Reporting Duties" by Laurence Stern, "The Sellout at CBS" by Michael Harrington, "Defending Dan Schorr" by Tom Wicker, "Speech to the Washington Press Club, February 25, 1976" by Daniel Schorr, "Funds…

  4. Robust digital processing of speech signals

    CERN Document Server

    Kovacevic, Branko; Veinović, Mladen; Marković, Milan

    2017-01-01

    This book focuses on speech signal phenomena, presenting a robustification of the usual speech generation models with regard to the presumed types of excitation signals, which is equivalent to the introduction of a class of nonlinear models and the corresponding criterion functions for parameter estimation. Compared to the general class of nonlinear models, such as various neural networks, these models possess good properties of controlled complexity, the option of working in “online” mode, as well as a low information volume for efficient speech encoding and transmission. Providing comprehensive insights, the book is based on the authors’ research, which has already been published, supplemented by additional texts discussing general considerations of speech modeling, linear predictive analysis and robust parameter estimation.
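
    Linear predictive analysis, mentioned above as the starting point for the robust parameter-estimation methods, models each speech sample as a weighted sum of past samples. The sketch below shows a standard (non-robust) LPC fit with librosa purely as a baseline illustration; the file name and model order are placeholders, and the robust criterion functions discussed in the book are not implemented here.

        import numpy as np
        import librosa

        # Load a short voiced segment (placeholder path).
        y, sr = librosa.load("voiced_segment.wav", sr=8000)

        # Fit a 10th-order linear predictor: s[n] is approximated from the
        # previous 10 samples; a[0] == 1.0 by convention.
        order = 10
        a = librosa.lpc(y, order=order)

        # Residual (prediction error): in LPC speech models this approximates the
        # excitation signal that robust methods treat with heavier-tailed criteria.
        residual = np.convolve(y, a, mode="same")
        print("LPC coefficients:", np.round(a, 3))
        print("residual energy / signal energy:",
              float(np.sum(residual**2) / np.sum(y**2)))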

  5. Automated speech quality monitoring tool based on perceptual evaluation

    OpenAIRE

    Vozňák, Miroslav; Rozhon, Jan

    2010-01-01

    The paper deals with a speech quality monitoring tool which we have developed in accordance with PESQ (Perceptual Evaluation of Speech Quality) and which runs automatically, calculating the MOS (Mean Opinion Score). Results are stored in a database and used in a research project investigating how meteorological conditions influence speech quality in a GSM network. The meteorological station, which is located on our university campus, provides information about temperature,...
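
    As an illustration of the PESQ-based measurement at the core of such a tool, the snippet below scores a degraded recording against its clean reference using the open-source pesq Python package (an implementation of ITU-T P.862, used here as a stand-in for whatever PESQ implementation the authors employed). The file names are placeholders, and the database step is only indicated in a comment.

        import soundfile as sf
        from pesq import pesq

        # Reference (clean) and degraded recordings, both at 16 kHz (placeholder paths).
        ref, fs = sf.read("reference.wav")
        deg, _ = sf.read("degraded_gsm_call.wav")

        # Wideband PESQ; the result is a MOS-like score roughly between 1 and 4.5.
        score = pesq(fs, ref, deg, "wb")
        print("PESQ MOS-LQO:", score)

        # A monitoring tool would typically insert (timestamp, call id, score)
        # into a database here for later correlation with other measurements.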

  6. Individually-Personal Peculiarities of Younger Preschoolers’ Speech

    Directory of Open Access Journals (Sweden)

    M E Novikova

    2013-12-01

    Full Text Available Studying the speech of younger preschoolers is a major factor in designing educational methods and preparing children for school. There exist individual and gender differences in the way children acquire speech skills. Word comprehension and idea interpretation depend on the child’s upbringing, his or her environment, and the interaction within the family. This article presents the research data obtained from the study of the individual peculiarities of younger preschool children’s speech.

  7. Musician advantage for speech-on-speech perception

    NARCIS (Netherlands)

    Başkent, Deniz; Gaudrain, Etienne

    Evidence for transfer of musical training to better perception of speech in noise has been mixed. Unlike speech-in-noise, speech-on-speech perception utilizes many of the skills that musical training improves, such as better pitch perception and stream segregation, as well as use of higher-level

  8. Speech Production and Speech Discrimination by Hearing-Impaired Children.

    Science.gov (United States)

    Novelli-Olmstead, Tina; Ling, Daniel

    1984-01-01

    Seven hearing impaired children (five to seven years old) assigned to the Speakers group made highly significant gains in speech production and auditory discrimination of speech, while Listeners made only slight speech production gains and no gains in auditory discrimination. Combined speech and auditory training was more effective than auditory…

  9. Eigennoise Speech Recovery in Adverse Environments with Joint Compensation of Additive and Convolutive Noise

    Directory of Open Access Journals (Sweden)

    Trung-Nghia Phung

    2015-01-01

    Full Text Available The learning-based speech recovery approach using statistical spectral conversion has been used for some kinds of distorted speech, such as alaryngeal speech and body-conducted (or bone-conducted) speech. This approach attempts to recover clean speech (undistorted speech) from noisy speech (distorted speech) by converting the statistical models of noisy speech into those of clean speech, without prior knowledge of the characteristics and distributions of the noise source. Presently, this approach has not attracted many researchers to apply it to general noisy speech enhancement because of some major problems: the difficulties of noise adaptation and the lack of noise-robust synthesizable features in different noisy environments. In this paper, we adopted the methods of state-of-the-art voice conversion and speaker adaptation in speech recognition to the proposed speech recovery approach applied in different kinds of noisy environments, especially in adverse environments with joint compensation of additive and convolutive noises. We proposed to use the decorrelated wavelet packet coefficients as a low-dimensional robust synthesizable feature under noisy environments. We also proposed a noise adaptation for speech recovery with the eigennoise, similar to the eigenvoice in voice conversion. The experimental results showed that the proposed approach highly outperformed traditional nonlearning-based approaches.
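
    To illustrate the kind of feature the paper builds on, the sketch below computes wavelet packet coefficients for a short speech frame with PyWavelets. It shows only the decomposition step; the synthetic frame, wavelet family, and decomposition depth are assumptions for illustration, and the decorrelation and dimensionality choices of the actual method are not reproduced.

        import numpy as np
        import pywt

        # A short speech frame (placeholder: 32 ms at 16 kHz of synthetic signal).
        sr = 16000
        t = np.arange(int(0.032 * sr)) / sr
        frame = (np.sin(2 * np.pi * 220 * t)
                 + 0.1 * np.random.default_rng(0).normal(size=t.size))

        # Depth-4 wavelet packet decomposition with a Daubechies wavelet (assumed settings).
        wp = pywt.WaveletPacket(data=frame, wavelet="db4", mode="symmetric", maxlevel=4)
        leaves = wp.get_level(4, order="freq")    # frequency-ordered leaf nodes

        # Stack the leaf coefficients into one feature vector for this frame.
        features = np.concatenate([node.data for node in leaves])
        print("number of subbands:", len(leaves))
        print("feature vector length:", features.size)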

  10. Overview of the 2015 Workshop on Speech, Language and Audio in Multimedia

    NARCIS (Netherlands)

    Gravier, Guillaume; Jones, Gareth J.F.; Larson, Martha; Ordelman, Roeland J.F.

    2015-01-01

    The Workshop on Speech, Language and Audio in Multimedia (SLAM) positions itself at at the crossroad of multiple scientific fields - music and audio processing, speech processing, natural language processing and multimedia - to discuss and stimulate research results, projects, datasets and

  11. Engaged listeners: shared neural processing of powerful political speeches.

    Science.gov (United States)

    Schmälzle, Ralf; Häcker, Frank E K; Honey, Christopher J; Hasson, Uri

    2015-08-01

    Powerful speeches can captivate audiences, whereas weaker speeches fail to engage their listeners. What is happening in the brains of a captivated audience? Here, we assess audience-wide functional brain dynamics during listening to speeches of varying rhetorical quality. The speeches were given by German politicians and evaluated as rhetorically powerful or weak. Listening to each of the speeches induced similar neural response time courses, as measured by inter-subject correlation analysis, in widespread brain regions involved in spoken language processing. Crucially, alignment of the time course across listeners was stronger for rhetorically powerful speeches, especially for bilateral regions of the superior temporal gyri and medial prefrontal cortex. Thus, during powerful speeches, listeners as a group are more coupled to each other, suggesting that powerful speeches are more potent in taking control of the listeners' brain responses. Weaker speeches were processed more heterogeneously, although they still prompted substantially correlated responses. These patterns of coupled neural responses bear resemblance to metaphors of resonance, which are often invoked in discussions of speech impact, and contribute to the literature on auditory attention under natural circumstances. Overall, this approach opens up possibilities for research on the neural mechanisms mediating the reception of entertaining or persuasive messages. © The Author (2015). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
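
    Inter-subject correlation (ISC), the measure used above to quantify how similarly listeners' brains respond, is essentially the correlation of each subject's regional time course with the average of everyone else's. Below is a minimal sketch of that computation on a synthetic array; the data dimensions are invented and none of the preprocessing that real fMRI analyses require is included.

        import numpy as np

        def inter_subject_correlation(data):
            """data: array of shape (n_subjects, n_timepoints) for one brain region.
            Returns each subject's correlation with the mean of the remaining subjects."""
            n_subjects = data.shape[0]
            isc = np.empty(n_subjects)
            for s in range(n_subjects):
                others = np.delete(data, s, axis=0).mean(axis=0)  # leave-one-out average
                isc[s] = np.corrcoef(data[s], others)[0, 1]
            return isc

        # Synthetic example: 20 listeners, 300 time points, with a shared component
        # standing in for the stimulus-driven response to a powerful speech.
        rng = np.random.default_rng(0)
        shared = rng.normal(size=300)
        data = 0.6 * shared + rng.normal(size=(20, 300))
        print("mean ISC:", inter_subject_correlation(data).mean())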

  12. Audiovisual speech perception development at varying levels of perceptual processing.

    Science.gov (United States)

    Lalonde, Kaylah; Holt, Rachael Frush

    2016-04-01

    This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children.

  13. Prevalence of Speech Disorders in Arak Primary School Students, 2014-2015

    Directory of Open Access Journals (Sweden)

    Abdoreza Yavari

    2016-09-01

    Full Text Available Abstract Background: Speech disorders may produce irreparable damage to a child's speech and language development from a psychosocial point of view. Voice, speech sound production and fluency disorders are speech disorders that may result from delay or impairment of the speech motor control mechanism, central nervous system disorders, improper language stimulation or voice abuse. Materials and Methods: This study examined the prevalence of speech disorders in 1393 students of Arak at the 1st to 6th grades of primary school. After collecting continuous speech samples, picture description, passage reading and a phonetic test, we recorded the pathological signs of stuttering, articulation disorder and voice disorders on a special sheet. Results: The prevalence of articulation, voice and stuttering disorders was 8%, 3.5% and 1%, and the prevalence of speech disorders overall was 11.9%. The prevalence of speech disorders decreased with increasing student grade. 12.2% of boy students and 11.7% of girl students of primary school in Arak had speech disorders. Conclusion: The prevalence of speech disorders among primary school students in Arak is similar to the prevalence of speech disorders in Kermanshah, but the prevalence of speech disorders in this research is smaller than in many similar studies in Iran. It seems that racial and cultural diversity has some effect on increasing the prevalence of speech disorders in Arak city.

  14. Environmental Contamination of Normal Speech.

    Science.gov (United States)

    Harley, Trevor A.

    1990-01-01

    Environmentally contaminated speech errors (irrelevant words or phrases derived from the speaker's environment and erroneously incorporated into speech) are hypothesized to occur at a high level of speech processing, but with a relatively late insertion point. The data indicate that speech production processes are not independent of other…

  15. Markers of Deception in Italian Speech

    Directory of Open Access Journals (Sweden)

    Katelyn eSpence

    2012-10-01

    Full Text Available Lying is a universal activity and the detection of lying a universal concern. Presently, there is great interest in determining objective measures of deception. The examination of speech, in particular, holds promise in this regard; yet, most of what we know about the relationship between speech and lying is based on the assessment of English-speaking participants. Few studies have examined indicators of deception in languages other than English. The world’s languages differ in significant ways, and cross-linguistic studies of deceptive communications are a research imperative. Here we review some of these differences amongst the world’s languages, and provide an overview of a number of recent studies demonstrating that cross-linguistic research is a worthwhile endeavour. In addition, we report the results of an empirical investigation of pitch, response latency, and speech rate as cues to deception in Italian speech. True and false opinions were elicited in an audio-taped interview. A within subjects analysis revealed no significant difference between the average pitch of the two conditions; however, speech rate was significantly slower, while response latency was longer, during deception compared with truth-telling. We explore the implications of these findings and propose directions for future research, with the aim of expanding the cross-linguistic branch of research on markers of deception.

  16. Global Freedom of Speech

    DEFF Research Database (Denmark)

    Binderup, Lars Grassme

    2007-01-01

    , as opposed to a legal norm, that curbs exercises of the right to free speech that offend the feelings or beliefs of members from other cultural groups. The paper rejects the suggestion that acceptance of such a norm is in line with liberal egalitarian thinking. Following a review of the classical liberal...... egalitarian reasons for free speech - reasons from overall welfare, from autonomy and from respect for the equality of citizens - it is argued that these reasons outweigh the proposed reasons for curbing culturally offensive speech. Currently controversial cases such as that of the Danish Cartoon Controversy...

  17. Voice and Speech Quality Perception Assessment and Evaluation

    CERN Document Server

    Jekosch, Ute

    2005-01-01

    Foundations of Voice and Speech Quality Perception starts out with the fundamental question of: "How do listeners perceive voice and speech quality and how can these processes be modeled?" Any quantitative answers require measurements. This is natural for physical quantities but harder to imagine for perceptual measurands. This book approaches the problem by actually identifying major perceptual dimensions of voice and speech quality perception, defining units wherever possible and offering paradigms to position these dimensions into a structural skeleton of perceptual speech and voice quality. The emphasis is placed on voice and speech quality assessment of systems in artificial scenarios. Many scientific fields are involved. This book bridges the gap between two quite diverse fields, engineering and humanities, and establishes the new research area of Voice and Speech Quality Perception.

  18. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  19. Predicting Prosody from Text for Text-to-Speech Synthesis

    CERN Document Server

    Rao, K Sreenivasa

    2012-01-01

    Predicting Prosody from Text for Text-to-Speech Synthesis covers the specific aspects of prosody, mainly focusing on how to predict the prosodic information from linguistic text, and then how to exploit the predicted prosodic knowledge for various speech applications. Author K. Sreenivasa Rao discusses proposed methods along with state-of-the-art techniques for the acquisition and incorporation of prosodic knowledge for developing speech systems. Positional, contextual and phonological features are proposed for representing the linguistic and production constraints of the sound units present in the text. This book is intended for graduate students and researchers working in the area of speech processing.

  20. Use of Deixis in Donald Trump's Campaign Speech

    OpenAIRE

    Hanim, Saidatul

    2017-01-01

    The aims of this study are (1) to find out the types of deixis in Donald Trump's campaign speech, (2) to find out the reasons for the use of the dominant type of deixis in Donald Trump's campaign speech and (3) to find out whether or not the deixis is used appropriately in Donald Trump's campaign speech. This research is conducted by using qualitative content analysis. The data of the study are the utterances from the script of Donald Trump's campaign speech. The data are analyzed by using Levinson ...

  1. 11 March 2009 - Italian Minister of Education, University and Research M. Gelmini, visiting ATLAS and CMS underground experimental areas and LHC tunnel with Director for Research and Scientific Computing S. Bertolucci. Signature of the guest book with CERN Director-General R. Heuer and S. Bertolucci at CMS Point 5.

    CERN Multimedia

    Maximilien Brice

    2009-01-01

    Members of the Ministerial delegation: Cons. Amb. Sebastiano FULCI, Consigliere Diplomatico Dott.ssa Elisa GREGORINI, Segretario Particolare del Ministro Dott. Massimo ZENNARO, Responsabile rapporti con la stampa Prof. Roberto PETRONZIO, Presidente dell’INFN (Istituto Nazionale di Fisica Nucleare) Dott. Luciano CRISCUOLI, Direttore Generale della Ricerca, MIUR Dott. Andrea MARINONI, Consulente scientifico del Ministro CERN delegation present throughout the programme: Prof. Sergio Bertolucci, Director for Research and Scientific Computing Prof. Fabiola Gianotti, ATLAS Collaboration Spokesperson Prof. Paolo Giubellino, ALICE Deputy Spokesperson, Universita & INFN, Torino Prof. Guido Tonelli, CMS Collaboration Deputy Spokesperson, INFN Pisa Dr Monica Pepe-Altarelli, LHCb Collaboration CERN Team Leader Guests in the ATLAS exhibition area: Dr Marcello Givoletti\tPresident of CAEN Dr Davide Malacalza\tPresident of ASG Ansaldo Superconductors and users: Prof. Clara Matteuzzi, LHCb Collaboration, Universita' d...

  2. Quadcopter Control Using Speech Recognition

    Science.gov (United States)

    Malik, H.; Darma, S.; Soekirno, S.

    2018-04-01

    This research reports a comparison of the success rates of speech recognition systems that used two types of databases, an existing database and a new database, implemented on a quadcopter for motion control. The speech recognition system used the Mel-frequency cepstral coefficient (MFCC) method for feature extraction and was trained using the recursive neural network (RNN) method. MFCC is one of the feature extraction methods most used for speech recognition and has a reported success rate of 80% - 95%. The existing database was used to measure the success rate of the RNN method. The new database was created using the Indonesian language, and its success rate was then compared with the results from the existing database. Sound input from the microphone was processed on a DSP module with the MFCC method to obtain the characteristic values. Then, the characteristic values were trained using the RNN, whose output was a command. The command became a control input to the single-board computer (SBC), whose output was the movement of the quadcopter. On the SBC, we used the robot operating system (ROS) as the kernel (operating system).
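
    The pipeline described above (MFCC features from the microphone signal, an RNN mapping them to a command, and the command forwarded to the flight controller) can be sketched as follows. This is a toy illustration only: the audio path, command vocabulary, and network weights are invented, and a simple recurrent (Elman-style) forward pass with random weights stands in for the trained RNN reported in the paper.

        import numpy as np
        import librosa

        COMMANDS = ["take_off", "land", "forward", "back"]   # hypothetical vocabulary

        # 1. Feature extraction: frame-wise MFCCs from the spoken command.
        y, sr = librosa.load("spoken_command.wav", sr=16000)  # placeholder path
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # (frames, 13)

        # 2. Toy recurrent-network forward pass (untrained, random weights).
        rng = np.random.default_rng(0)
        n_in, n_hidden, n_out = 13, 32, len(COMMANDS)
        W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))
        W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
        W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))

        h = np.zeros(n_hidden)
        for frame in mfcc:                    # run the sequence through the network
            h = np.tanh(W_in @ frame + W_rec @ h)

        logits = W_out @ h
        probs = np.exp(logits) / np.exp(logits).sum()
        command = COMMANDS[int(np.argmax(probs))]
        print("decoded command:", command)

        # 3. In the real system this command would be published (e.g., via ROS)
        #    to the single-board computer controlling the quadcopter.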

  3. SPEECH ACT OF ILTIFAT AND ITS INDONESIAN TRANSLATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    Zaka Al Farisi

    2015-01-01

    Full Text Available Abstract: Iltifat (shifting speech act) is distinctive and considered a unique style of Arabic. It has potential errors when it is translated into Indonesian. Therefore, translation of the iltifat speech act into another language can be an important issue. The objective of the study is to know the translation procedures/techniques and ideology required in dealing with the iltifat speech act. This research is directed at translation as a cognitive product of a translator. The data used in the present study were the corpus of Koranic verses that contain the iltifat speech act along with their translation. Data analysis used a descriptive-evaluative method with a content analysis model. The data source of this research consisted of the Koran and its translation. A purposive sampling technique was employed, with the sample being the iltifat speech acts contained in the Koran. The results showed that more than 60% of iltifat speech acts were translated by using the literal procedure. The significant number of literal translations of the verses asserts that the Ministry of Religious Affairs tended to use the literal method of translation. In other words, the Koran translation made by the Ministry of Religious Affairs tended to be oriented to the source language in dealing with the iltifat speech act. The frequent use of the literal procedure shows a tendency toward foreignization ideology. Transitional pronouns contained in the iltifat speech act can be clearly translated when thick translations are used in the form of descriptions in parentheses. In this case, explanation can be a choice in translating the iltifat speech act.

  4. Developmental profile of speech-language and communicative functions in an individual with the preserved speech variant of Rett syndrome.

    Science.gov (United States)

    Marschik, Peter B; Vollmann, Ralf; Bartl-Pokorny, Katrin D; Green, Vanessa A; van der Meer, Larah; Wolin, Thomas; Einspieler, Christa

    2014-08-01

    We assessed various aspects of speech-language and communicative functions of an individual with the preserved speech variant of Rett syndrome (RTT) to describe her developmental profile over a period of 11 years. For this study, we incorporated the following data resources and methods to assess speech-language and communicative functions during pre-, peri- and post-regressional development: retrospective video analyses, medical history data, parental checklists and diaries, standardized tests on vocabulary and grammar, spontaneous speech samples and picture stories to elicit narrative competences. Despite achieving speech-language milestones, atypical behaviours were present at all times. We observed a unique developmental speech-language trajectory (including the RTT typical regression) affecting all linguistic and socio-communicative sub-domains in the receptive as well as the expressive modality. Future research should take into consideration a potentially considerable discordance between formal and functional language use by interpreting communicative acts on a more cautionary note.

  5. Charisma in business speeches

    DEFF Research Database (Denmark)

    Niebuhr, Oliver; Brem, Alexander; Novák-Tót, Eszter

    2016-01-01

    to business speeches. Consistent with the public opinion, our findings are indicative of Steve Jobs being a more charismatic speaker than Mark Zuckerberg. Beyond previous studies, our data suggest that rhythm and emphatic accentuation are also involved in conveying charisma. Furthermore, the differences...... between Steve Jobs and Mark Zuckerberg and the investor- and customer-related sections of their speeches support the modern understanding of charisma as a gradual, multiparametric, and context-sensitive concept....

  6. LIBERDADE DE EXPRESSÃO E DISCURSO DO ÓDIO NO BRASIL / FREE SPEECH AND HATE SPEECH IN BRAZIL

    Directory of Open Access Journals (Sweden)

    Nevita Maria Pessoa de Aquino Franca Luna

    2014-12-01

    Full Text Available The purpose of this article is to analyze the restriction of free speech when it comes close to hate speech. In this perspective, the aim of this study is to answer the question: what is the understanding adopted by the Brazilian Supreme Court in cases involving the conflict between free speech and hate speech? The methodology combines a bibliographic review of the theoretical assumptions of the research (the concept of free speech and hate speech, and the understanding of the rights of defense of traditionally discriminated minorities) and empirical research (documental and jurisprudential analysis of cases judged by the American, German and Brazilian courts). Firstly, free speech is discussed, defining its meaning, content and purpose. Then, hate speech is identified as an inhibiting element of free speech for offending members of traditionally discriminated minorities, who are outnumbered or in a situation of cultural, socioeconomic or political subordination. Subsequently, some aspects of the American (negative freedom) and German (positive freedom) models are discussed, to demonstrate that different cultures adopt different legal solutions. At the end, it is concluded that there is an approximation of the Brazilian understanding with the German doctrine, based on the analysis of landmark cases such as that of the publisher Siegfried Ellwanger (2003) and the Samba School Unidos do Viradouro (2008). The Brazilian understanding, in a multicultural country made up of different ethnicities, leads to a new process of defending minorities which, despite involving the collision of fundamental rights (dignity, equality and freedom), is still restrained by barriers incompatible with a contemporary pluralistic democracy.

  7. Deep Complementary Bottleneck Features for Visual Speech Recognition

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Deep bottleneck features (DBNFs) have been used successfully in the past for acoustic speech recognition from audio. However, research on extracting DBNFs for visual speech recognition is very limited. In this work, we present an approach to extract deep bottleneck visual features based on deep

  8. Speech Pathology and Dialect Differences. Dialects and Educational Equity.

    Science.gov (United States)

    Wolfram, Walt

    Discussions in speech and language pathology often contain references to language differences and the ways these differences compare with speech and language disorders. There is ongoing research on the regional varieties of English, and within the past decade, information on social and ethnic variation in language has been accumulating. Based on…

  9. Speect: a multilingual text-to-speech system

    CSIR Research Space (South Africa)

    Louw, JA

    2008-11-01

    Full Text Available This paper introduces a new multilingual text-to-speech system, which we call Speect (Speech synthesis with extensible architecture), aiming to address the shortcomings of using Festival as a research system and Flite as a deployment system in a...

  10. The Role of the Right Hemisphere in Speech Act Comprehension

    Science.gov (United States)

    Holtgraves, Thomas

    2012-01-01

    In this research the role of the RH in the comprehension of speech acts (or illocutionary force) was examined. Two split-screen experiments were conducted in which participants made lexical decisions for lateralized targets after reading a brief conversation remark. On one-half of the trials the target word named the speech act performed with the…

  11. Segmentation, Diarization and Speech Transcription: Surprise Data Unraveled

    NARCIS (Netherlands)

    Huijbregts, M.A.H.

    2008-01-01

    In this thesis, research on large vocabulary continuous speech recognition for unknown audio conditions is presented. For automatic speech recognition systems based on statistical methods, it is important that the conditions of the audio used for training the statistical models match the conditions

  12. Private Speech Moderates the Effects of Effortful Control on Emotionality

    Science.gov (United States)

    Day, Kimberly L.; Smith, Cynthia L.; Neal, Amy; Dunsmore, Julie C.

    2018-01-01

    Research Findings: In addition to being a regulatory strategy, children's private speech may enhance or interfere with their effortful control used to regulate emotion. The goal of the current study was to investigate whether children's private speech during a selective attention task moderated the relations of their effortful control to their…

  13. Memory for speech and speech for memory.

    Science.gov (United States)

    Locke, J L; Kutz, K J

    1975-03-01

    Thirty kindergarteners, 15 who substituted /w/ for /r/ and 15 with correct articulation, received two perception tests and a memory test that included /w/ and /r/ in minimally contrastive syllables. Although both groups had nearly perfect perception of the experimenter's productions of /w/ and /r/, misarticulating subjects perceived their own tape-recorded w/r productions as /w/. In the memory task these same misarticulating subjects committed significantly more /w/-/r/ confusions in unspoken recall. The discussion considers why people subvocally rehearse; a developmental period in which children do not rehearse; ways subvocalization may aid recall, including motor and acoustic encoding; an echoic store that provides additional recall support if subjects rehearse vocally, and perception of self- and other- produced phonemes by misarticulating children-including its relevance to a motor theory of perception. Evidence is presented that speech for memory can be sufficiently impaired to cause memory disorder. Conceptions that restrict speech disorder to an impairment of communication are challenged.

  14. Predicting speech intelligibility in conditions with nonlinearly processed noisy speech

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2013-01-01

    The speech-based envelope power spectrum model (sEPSM; [1]) was proposed in order to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII). The sEPSM applies the signal-to-noise ratio in the envelope domain (SNRenv), which was demonstrated to successfully predict speech intelligibility in conditions with nonlinearly processed noisy speech, such as processing with spectral subtraction. Moreover, a multiresolution version (mr-sEPSM) was demonstrated to account for speech intelligibility in various conditions with stationary and fluctuating
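
    As a rough illustration of the SNRenv idea behind the sEPSM, the sketch below extracts temporal envelopes via the Hilbert transform and forms an envelope-domain power ratio between noisy speech and the noise alone. This is a heavily simplified stand-in (a single band, no modulation filterbank, no multi-resolution analysis) using synthetic signals, not the published model.

        import numpy as np
        from scipy.signal import hilbert

        def envelope_power(x):
            """DC-removed power of the Hilbert envelope of x."""
            env = np.abs(hilbert(x))
            env = env - env.mean()
            return np.mean(env**2)

        # Synthetic 1-s example at 16 kHz: a modulated carrier stands in for speech.
        sr = 16000
        t = np.arange(sr) / sr
        rng = np.random.default_rng(0)
        speech = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
        noise = 0.5 * rng.normal(size=sr)
        noisy = speech + noise

        # Envelope-domain SNR: excess envelope power of the mixture over that of
        # the noise alone, relative to the noise envelope power.
        p_mix = envelope_power(noisy)
        p_noise = envelope_power(noise)
        snr_env = max(p_mix - p_noise, 1e-6) / p_noise
        print(f"SNRenv ~ {10 * np.log10(snr_env):.1f} dB")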

  15. Internet images of the speech pathology profession.

    Science.gov (United States)

    Byrne, Nicole

    2017-06-05

    group of people into the profession in the future. What is known about the topic? To date, research has not considered the promotional profile of allied health professionals on the Internet. There has been a lack of consideration of whether the way in which the professions are promoted may affect clients accessing allied health services or people entering careers. What does this paper add? This paper raises awareness of the lack of promotion of a diverse workforce in speech pathology and considers how this may affect changing the professional demographics in the future. It also provides a starting point for documentation in the form of a baseline for tracking future changes. It allows consideration of the fact that when designing health promotional and educational materials, it is crucial that diversity is displayed in the professional role, the client role and the setting in order to provide information and education to the general public about the health services provided. What are the implications for practitioners? The presentation of narrow demographics of both the professional and client may potentially affect people considering speech pathology as a future career. The appearance of narrow client demographics and diagnosis groups may also deter people from accessing services. For example, if the demonstrated images do not show older people accessing speech pathology services, then this may suggest that services are only for children. The results from the present case example are transferrable to other health professions with similar professional demographic profiles (e.g. occupational therapy). Consideration of the need to display a diverse client profile is relevant to all health and medical services, and demonstrates steps towards inclusiveness and increasing engagement of clients who may be currently less likely to access health services (including people who are Aboriginal or from a culturally and linguistically diverse background).

  16. Effects of Feedback Frequency and Timing on Acquisition, Retention, and Transfer of Speech Skills in Acquired Apraxia of Speech

    Science.gov (United States)

    Hula, Shannon N. Austermann; Robin, Donald A.; Maas, Edwin; Ballard, Kirrie J.; Schmidt, Richard A.

    2008-01-01

    Purpose: Two studies examined speech skill learning in persons with apraxia of speech (AOS). Motor-learning research shows that delaying or reducing the frequency of feedback promotes retention and transfer of skills. By contrast, immediate or frequent feedback promotes temporary performance enhancement but interferes with retention and transfer.…

  17. Immediate attention for public speech: Differential effects of rhetorical schemes and valence framing in political radio speeches

    NARCIS (Netherlands)

    Lagerwerf, L.; Boeynaems, A.; van Egmond-Brussee, C.; Burgers, C.F.

    2015-01-01

    Political campaign speeches are deemed influential in winning people’s minds and votes. While the language used in such speeches has often been credited with their impact, empirical research in this area is scarce. We report on two experiments investigating how language variables such as rhetorical

  18. A Diagnostic Marker to Discriminate Childhood Apraxia of Speech from Speech Delay: I. Development and Description of the Pause Marker

    Science.gov (United States)

    Shriberg, Lawrence D.; Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.

    2017-01-01

    Purpose: The goal of this article (PM I) is to describe the rationale for and development of the Pause Marker (PM), a single-sign diagnostic marker proposed to discriminate early or persistent childhood apraxia of speech from speech delay. Method: The authors describe and prioritize 7 criteria with which to evaluate the research and clinical…

  19. Gesture facilitates the syntactic analysis of speech

    Directory of Open Access Journals (Sweden)

    Henning eHolle

    2012-03-01

    Full Text Available Recent research suggests that the brain routinely binds together information from gesture and speech. However, most of this research focused on the integration of representational gestures with the semantic content of speech. Much less is known about how other aspects of gesture, such as emphasis, influence the interpretation of the syntactic relations in a spoken message. Here, we investigated whether beat gestures alter which syntactic structure is assigned to ambiguous spoken German sentences. The P600 component of the Event Related Brain Potential indicated that the more complex syntactic structure is easier to process when the speaker emphasizes the subject of a sentence with a beat. Thus, a simple flick of the hand can change our interpretation of who has been doing what to whom in a spoken sentence. We conclude that gestures and speech are an integrated system. Unlike previous studies, which have shown that the brain effortlessly integrates semantic information from gesture and speech, our study is the first to demonstrate that this integration also occurs for syntactic information. Moreover, the effect appears to be gesture-specific and was not found for other stimuli that draw attention to certain parts of speech, including prosodic emphasis, or a moving visual stimulus with the same trajectory as the gesture. This suggests that only visual emphasis produced with a communicative intention in mind (that is, beat gestures) influences language comprehension, but not a simple visual movement lacking such an intention.

  20. Methods and models for quantitative assessment of speech intelligibility in cross-language communication

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Steeneken, H.J.M.; Houtgast, T.

    2001-01-01

    To deal with the effects of nonnative speech communication on speech intelligibility, one must know the magnitude of these effects. To measure this magnitude, suitable test methods must be available. Many of the methods used in cross-language speech communication research are not very suitable for

  1. Adapting to foreign-accented speech: The role of delay in testing

    NARCIS (Netherlands)

    Witteman, M.J.; Bardhan, N.P.; Weber, A.C.; McQueen, J.M.

    2011-01-01

    Understanding speech usually seems easy, but it can become noticeably harder when the speaker has a foreign accent. This is because foreign accents add considerable variation to speech. Research on foreign-accented speech shows that participants are able to adapt quickly to this type of variation.

  2. A Survey of Speech Education in United States Two-Year Colleges.

    Science.gov (United States)

    Planck, Carolyn Roberts

    The status of speech education in all United States two-year colleges is discussed. Both public and private schools are examined. Two separate studies were conducted, each utilizing the same procedure. The specific aspects with which the research was concerned were: (1) availability of speech courses, (2) departmentalization of speech courses, (3)…

  3. Writing and Speech Recognition : Observing Error Correction Strategies of Professional Writers

    NARCIS (Netherlands)

    Leijten, M.A.J.C.

    2007-01-01

    In this thesis we describe the organization of speech recognition based writing processes. Writing can be seen as a visual representation of spoken language: a combination that speech recognition takes full advantage of. In the field of writing research, speech recognition is a new writing

  4. Treatment of Children with Speech Oral Placement Disorders (OPDs): A Paradigm Emerges

    Science.gov (United States)

    Bahr, Diane; Rosenfeld-Johnson, Sara

    2010-01-01

    Epidemiological research was used to develop the Speech Disorders Classification System (SDCS). The SDCS is an important speech diagnostic paradigm in the field of speech-language pathology. This paradigm could be expanded and refined to also address treatment while meeting the standards of evidence-based practice. The article assists that process…

  5. ACOUSTIC SPEECH RECOGNITION FOR MARATHI LANGUAGE USING SPHINX

    Directory of Open Access Journals (Sweden)

    Aman Ankit

    2016-09-01

    Full Text Available Speech recognition, or speech-to-text processing, is the process of recognizing human speech by a computer and converting it into text. In speech recognition, transcripts are created by taking recordings of speech as audio and their text transcriptions. Speech-based applications that include Natural Language Processing (NLP) techniques are popular and an active area of research. Input to such applications is in natural language and output is obtained in natural language. Speech recognition mostly revolves around three approaches, namely the acoustic-phonetic approach, the pattern recognition approach and the artificial intelligence approach. Creation of an acoustic model requires a large database of speech and training algorithms. The output of an ASR system is the recognition and translation of spoken language into text by computers and computerized devices. ASR today finds enormous application in tasks that require human-machine interfaces, such as voice dialing. Our key contribution in this paper is to create corpora for the Marathi language and explore the use of the Sphinx engine for automatic speech recognition
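
    As a hedged illustration of how a Sphinx-based recognizer is typically wired up once such a corpus has been used to train models, the sketch below decodes a WAV file with the pocketsphinx Python bindings. The Marathi model, dictionary and language-model paths are hypothetical placeholders and do not reproduce the authors' actual setup.

        import wave
        from pocketsphinx import Decoder

        # Hypothetical paths to a trained Marathi acoustic model, pronunciation
        # dictionary and n-gram language model (not the authors' actual files)
        config = Decoder.default_config()
        config.set_string('-hmm', 'models/marathi_acoustic')
        config.set_string('-dict', 'models/marathi.dict')
        config.set_string('-lm', 'models/marathi.lm')
        decoder = Decoder(config)

        # Read a 16 kHz, 16-bit mono utterance and decode it in one pass
        with wave.open('utterance.wav', 'rb') as wav:
            audio = wav.readframes(wav.getnframes())

        decoder.start_utt()
        decoder.process_raw(audio, False, True)   # no_search=False, full_utt=True
        decoder.end_utt()

        hyp = decoder.hyp()
        print(hyp.hypstr if hyp else '<no hypothesis>')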

  6. Ultrasound applicability in Speech Language Pathology and Audiology.

    Science.gov (United States)

    Barberena, Luciana da Silva; Brasil, Brunah de Castro; Melo, Roberta Michelon; Mezzomo, Carolina Lisbôa; Mota, Helena Bolli; Keske-Soares, Márcia

    2014-01-01

    To present recent studies that used ultrasound in the fields of Speech Language Pathology and Audiology, which show possible applications of this technique in different subareas. A bibliographic search was carried out in the PubMed database, using the keywords "ultrasonic," "speech," "phonetics," "Speech, Language and Hearing Sciences," "voice," "deglutition," and "myofunctional therapy," comprising some areas of Speech Language Pathology and Audiology Sciences. The keywords "ultrasound," "ultrasonography," "swallow," "orofacial myofunctional therapy," and "orofacial myology" were also used in the search. Studies in humans from the past 5 years were selected. In the preselection, duplicated studies, articles not fully available, and those that did not present a direct relation between ultrasound and Speech Language Pathology and Audiology Sciences were discarded. The data were analyzed descriptively and classified into subareas of Speech Language Pathology and Audiology Sciences. The following items were considered: purposes, participants, procedures, and results. We selected 12 articles for the ultrasound versus speech/phonetics subarea, 5 for ultrasound versus voice, 1 for ultrasound versus muscles of mastication, and 10 for ultrasound versus swallow. Studies relating "ultrasound" and "Speech Language Pathology and Audiology Sciences" in the past 5 years were not found. Different studies on the use of ultrasound in Speech Language Pathology and Audiology Sciences were found. Each of them, according to its purpose, confirms new possibilities of the use of this instrument in the several subareas, aiming at a more accurate diagnosis and new evaluative and therapeutic possibilities.

  7. Music and Speech Perception in Children Using Sung Speech.

    Science.gov (United States)

    Nie, Yingjiu; Galvin, John J; Morikawa, Michael; André, Victoria; Wheeler, Harley; Fu, Qian-Jie

    2018-01-01

    This study examined music and speech perception in normal-hearing children with some or no musical training. Thirty children (mean age = 11.3 years), 15 with and 15 without formal music training, participated in the study. Music perception was measured using a melodic contour identification (MCI) task; stimuli were a piano sample or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note). Speech perception was measured in quiet and in steady noise using a matrix-styled sentence recognition task; stimuli were naturally intonated speech or sung speech with a fixed pitch (same note for each word) or a mixed pitch (different notes for each word). Significant musician advantages were observed for MCI and speech in noise but not for speech in quiet. MCI performance was significantly poorer with the mixed timbre stimuli. Speech performance in noise was significantly poorer with the fixed or mixed pitch stimuli than with spoken speech. Across all subjects, age at testing and MCI performance were significantly correlated with speech performance in noise. MCI and speech performance in quiet were significantly poorer for children than for adults from a related study using the same stimuli and tasks; speech performance in noise was significantly poorer for young than for older children. Long-term music training appeared to benefit melodic pitch perception and speech understanding in noise in these pediatric listeners.

  8. Speech and non-speech processing in children with phonological disorders: an electrophysiological study

    Directory of Open Access Journals (Sweden)

    Isabela Crivellaro Gonçalves

    2011-01-01

    Full Text Available OBJECTIVE: To determine whether neurophysiological auditory brainstem responses to clicks and repeated speech stimuli differ between typically developing children and children with phonological disorders. INTRODUCTION: Phonological disorders are language impairments resulting from inadequate use of adult phonological language rules and are among the most common speech and language disorders in children (prevalence: 8-9%). Our hypothesis is that children with phonological disorders have basic differences in the way that their brains encode acoustic signals at brainstem level when compared to normal counterparts. METHODS: We recorded click and speech evoked auditory brainstem responses in 18 typically developing children (control group) and in 18 children who were clinically diagnosed with phonological disorders (research group). The age range of the children was from 7-11 years. RESULTS: The research group exhibited significantly longer latency responses to click stimuli (waves I, III and V) and speech stimuli (waves V and A) when compared to the control group. DISCUSSION: These results suggest that the abnormal encoding of speech sounds may be a biological marker of phonological disorders. However, these results cannot define the biological origins of phonological problems. We also observed that speech-evoked auditory brainstem responses had a higher specificity/sensitivity for identifying phonological disorders than click-evoked auditory brainstem responses. CONCLUSIONS: Early stages of the auditory pathway processing of an acoustic stimulus are not similar in typically developing children and those with phonological disorders. These findings suggest that there are brainstem auditory pathway abnormalities in children with phonological disorders.

  9. The politeness prosody of the Javanese directive speech

    Directory of Open Access Journals (Sweden)

    F.X. Rahyono

    2009-10-01

    Full Text Available This experimental phonetic research deals with the prosodies of directive speech in Javanese. The research procedures were: (1) speech production, (2) acoustic analysis, and (3) perception test. The data investigated are three directive utterances, in the form of statements, commands, and questions. The data were obtained by recording dialogues that present polite as well as impolite speech. Three acoustic experiments were conducted for statements, commands, and questions in directive speech: (1) modifications of duration, (2) modifications of contour, and (3) modifications of fundamental frequency. The results of the subsequent perception tests on 90 stimuli with 24 subjects were analysed statistically with ANOVA (Analysis of Variance). Based on this statistical analysis, the prosodic characteristics of polite and impolite speech were identified.

  10. Under-resourced speech recognition based on the speech manifold

    CSIR Research Space (South Africa)

    Sahraeian, R

    2015-09-01

    Full Text Available Conventional acoustic modeling involves estimating many parameters to effectively model feature distributions. The sparseness of speech and text data, however, degrades the reliability of the estimation process and makes speech recognition a...

  11. Robust Speech/Non-Speech Classification in Heterogeneous Multimedia Content

    NARCIS (Netherlands)

    Huijbregts, M.A.H.; de Jong, Franciska M.G.

    In this paper we present a speech/non-speech classification method that allows high quality classification without the need to know in advance what kinds of audible non-speech events are present in an audio recording and that does not require a single parameter to be tuned on in-domain data. Because

  12. Emotion, affect and personality in speech the bias of language and paralanguage

    CERN Document Server

    Johar, Swati

    2016-01-01

    This book explores the various categories of speech variation and works to draw a line between linguistic and paralinguistic phenomenon of speech. Paralinguistic contrast is crucial to human speech but has proven to be one of the most difficult tasks in speech systems. In the quest for solutions to speech technology and sciences, this book narrows down the gap between speech technologists and phoneticians and emphasizes the imperative efforts required to accomplish the goal of paralinguistic control in speech technology applications and the acute need for a multidisciplinary categorization system. This interdisciplinary work on paralanguage will not only serve as a source of information but also a theoretical model for linguists, sociologists, psychologists, phoneticians and speech researchers.

  13. Speech Emotion Recognition Based on Power Normalized Cepstral Coefficients in Noisy Conditions

    Directory of Open Access Journals (Sweden)

    M. Bashirpour

    2016-09-01

    Full Text Available Automatic recognition of speech emotional states in noisy conditions has become an important research topic in the emotional speech recognition area in recent years. This paper considers the recognition of emotional states via speech in real environments. For this task, we employ the power normalized cepstral coefficients (PNCC) in a speech emotion recognition system. We investigate its performance in emotion recognition using clean and noisy speech materials and compare it with the performances of the well-known MFCC, LPCC, RASTA-PLP, and also TEMFCC features. Speech samples are extracted from the Berlin emotional speech database (Emo DB) and the Persian emotional speech database (Persian ESD), which are corrupted with 4 different noise types under various SNR levels. The experiments are conducted in clean train/noisy test scenarios to simulate practical conditions with noise sources. Simulation results show that higher recognition rates are achieved for PNCC as compared with the conventional features under noisy conditions.
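
    The pipeline described — frame-level cepstral features pooled per utterance and passed to a classifier trained on clean data — can be sketched as follows. Because PNCC extraction is not available in common Python libraries, the sketch substitutes MFCCs (one of the baseline features compared in the paper) via librosa; the file list and labels are hypothetical.

        import numpy as np
        import librosa
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        def utterance_features(path, n_mfcc=13):
            # Mean and standard deviation of frame-level MFCCs as one fixed-length vector
            y, sr = librosa.load(path, sr=16000)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
            return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

        # Hypothetical utterance list: (wav path, emotion label)
        corpus = [
            ("wav/anger_01.wav", "anger"),
            ("wav/neutral_01.wav", "neutral"),
            # ... more labelled utterances
        ]

        X = np.array([utterance_features(path) for path, _ in corpus])
        y = np.array([label for _, label in corpus])

        # The paper uses clean-train / noisy-test scenarios; a plain random split is
        # used here for brevity.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = SVC(kernel="rbf").fit(X_tr, y_tr)
        print("accuracy:", clf.score(X_te, y_te))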

  14. Generating Expressive Speech for Storytelling Applications

    NARCIS (Netherlands)

    Bailly, G.; Theune, Mariet; Meijs, Koen; Campbell, N.; Hamza, W.; Heylen, Dirk K.J.; Ordelman, Roeland J.F.; Hoge, H.; Jianhua, T.

    Work on expressive speech synthesis has long focused on the expression of basic emotions. In recent years, however, interest in other expressive styles has been increasing. The research presented in this paper aims at the generation of a storytelling speaking style, which is suitable for

  15. Current trends in multilingual speech processing

    Indian Academy of Sciences (India)

    In this paper, we describe recent work at Idiap Research Institute in the domain of multilingual speech processing and provide some insights into emerging ... and industry for technologies to help break down domestic and international language barriers, these also being barriers to the expansion of policy and commerce.

  16. Temperament, Speech and Language: An Overview

    Science.gov (United States)

    Conture, Edward G.; Kelly, Ellen M.; Walden, Tedra A.

    2013-01-01

    The purpose of this article is to discuss definitional and measurement issues as well as empirical evidence regarding temperament, especially with regard to children's (a)typical speech and language development. Although all ages are considered, there is a predominant focus on children. Evidence from considerable empirical research lends support…

  17. The comprehension of gesture and speech

    NARCIS (Netherlands)

    Willems, R.M.; Özyürek, A.; Hagoort, P.

    2005-01-01

    Although generally studied in isolation, action observation and speech comprehension go hand in hand during everyday human communication. That is, people gesture while they speak. From previous research it is known that a tight link exists between spoken language and such hand gestures. This study

  18. Audiovisual Discrimination between Laughter and Speech

    NARCIS (Netherlands)

    Petridis, Stavros; Pantic, Maja

    Past research on automatic laughter detection has focused mainly on audio-based detection. Here we present an audiovisual approach to distinguishing laughter from speech and we show that integrating the information from audio and video leads to an improved reliability of audiovisual approach in

  19. Song and speech: examining the link between singing talent and speech imitation ability.

    Science.gov (United States)

    Christiner, Markus; Reiterer, Susanne M

    2013-01-01

    In previous research on speech imitation, musicality, and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer's sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and auditory memory with singing fitting better into the category of "speech" on the productive level and "music" on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. (1) Motor flexibility and the ability to sing improve language and musical function. (2) Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. (3) The ability to sing improves the memory span of the auditory working memory.

  20. Song and speech: examining the link between singing talent and speech imitation ability

    Directory of Open Access Journals (Sweden)

    Markus eChristiner

    2013-11-01

    Full Text Available In previous research on speech imitation, musicality and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer’s sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and sound memory with singing fitting better into the category of "speech" on the productive level and "music" on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. 1. Motor flexibility and the ability to sing improve language and musical function. 2. Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. 3. The ability to sing improves the memory span of the auditory short term memory.

  1. Innovative Speech Reconstructive Surgery

    OpenAIRE

    Hashem Shemshadi

    2003-01-01

    Proper speech functioning in human beings depends on the precise coordination and timing balances in a series of complex neuromuscular movements and actions. Starting from the prime organ of energy source of expelled air from the respiratory system; delivery of such air to trigger the vocal cords; swift changes of this phonatory episode to a comprehensible sound in RESONANCE and final coordination of all head and neck structures to elicit final speech in ...

  2. Researcher Story: Stuttering

    Medline Plus

    Full Text Available ... clinical research trial. Participation involves providing a small blood sample and a recorded speech sample. For more information ... It's pretty simple, really. We need a small blood sample from each participant and a recorded speech sample. ...

  3. Speech, Language, and Reading in 10-Year-Olds With Cleft: Associations With Teasing, Satisfaction With Speech, and Psychological Adjustment.

    Science.gov (United States)

    Feragen, Kristin Billaud; Særvold, Tone Kristin; Aukner, Ragnhild; Stock, Nicola Marie

    2017-03-01

      Despite the use of multidisciplinary services, little research has addressed issues involved in the care of those with cleft lip and/or palate across disciplines. The aim was to investigate associations between speech, language, reading, and reports of teasing, subjective satisfaction with speech, and psychological adjustment.   Cross-sectional data collected during routine, multidisciplinary assessments in a centralized treatment setting, including speech and language therapists and clinical psychologists.   Children with cleft with palatal involvement aged 10 years from three birth cohorts (N = 170) and their parents.   Speech: SVANTE-N. Language: Language 6-16 (sentence recall, serial recall, vocabulary, and phonological awareness). Reading: Word Chain Test and Reading Comprehension Test. Psychological measures: Strengths and Difficulties Questionnaire and extracts from the Satisfaction With Appearance Scale and Child Experience Questionnaire.   Reading skills were associated with self- and parent-reported psychological adjustment in the child. Subjective satisfaction with speech was associated with psychological adjustment, while not being consistently associated with speech therapists' assessments. Parent-reported teasing was found to be associated with lower levels of reading skills. Having a medical and/or psychological condition in addition to the cleft was found to affect speech, language, and reading significantly.   Cleft teams need to be aware of speech, language, and/or reading problems as potential indicators of psychological risk in children with cleft. This study highlights the importance of multiple reports (self, parent, and specialist) and a multidisciplinary approach to cleft care and research.

  4. International aspirations for speech-language pathologists' practice with multilingual children with speech sound disorders: development of a position paper.

    Science.gov (United States)

    McLeod, Sharynne; Verdon, Sarah; Bowen, Caroline

    2013-01-01

    A major challenge for the speech-language pathology profession in many cultures is to address the mismatch between the "linguistic homogeneity of the speech-language pathology profession and the linguistic diversity of its clientele" (Caesar & Kohler, 2007, p. 198). This paper outlines the development of the Multilingual Children with Speech Sound Disorders: Position Paper created to guide speech-language pathologists' (SLPs') facilitation of multilingual children's speech. An international expert panel was assembled comprising 57 researchers (SLPs, linguists, phoneticians, and speech scientists) with knowledge about multilingual children's speech, or children with speech sound disorders. Combined, they had worked in 33 countries and used 26 languages in professional practice. Fourteen panel members met for a one-day workshop to identify key points for inclusion in the position paper. Subsequently, 42 additional panel members participated online to contribute to drafts of the position paper. A thematic analysis was undertaken of the major areas of discussion using two data sources: (a) face-to-face workshop transcript (133 pages) and (b) online discussion artifacts (104 pages). Finally, a moderator with international expertise in working with children with speech sound disorders facilitated the incorporation of the panel's recommendations. The following themes were identified: definitions, scope, framework, evidence, challenges, practices, and consideration of a multilingual audience. The resulting position paper contains guidelines for providing services to multilingual children with speech sound disorders (http://www.csu.edu.au/research/multilingual-speech/position-paper). The paper is structured using the International Classification of Functioning, Disability and Health: Children and Youth Version (World Health Organization, 2007) and incorporates recommendations for (a) children and families, (b) SLPs' assessment and intervention, (c) SLPs' professional

  5. Visualizing structures of speech expressiveness

    DEFF Research Database (Denmark)

    Herbelin, Bruno; Jensen, Karl Kristoffer; Graugaard, Lars

    2008-01-01

    Speech is both beautiful and informative. In this work, a conceptual study of speech, through investigation of the Tower of Babel, the archetypal phonemes, and a study of the reasons for the use of language, is undertaken in order to create an artistic work investigating the nature of speech. ... The artwork is presented at the Re:New festival in May 2008.

  6. Improving the speech intelligibility in classrooms

    Science.gov (United States)

    Lam, Choi Ling Coriolanus

    One of the major acoustical concerns in classrooms is the establishment of effective verbal communication between teachers and students. Non-optimal acoustical conditions, resulting in reduced verbal communication, can cause two main problems. First, they can lead to reduced learning efficiency. Second, they can cause fatigue, stress, vocal strain and health problems, such as headaches and sore throats, among teachers who are forced to compensate for poor acoustical conditions by raising their voices. In addition, inadequate acoustical conditions can encourage the use of public address systems, and improper use of such amplifiers or loudspeakers can lead to impairment of students' hearing. The social costs of poor classroom acoustics are large because they impair children's learning. This invisible problem has far-reaching implications for learning, but is easily solved. Many studies have been carried out that accurately and concisely summarize the research findings on classroom acoustics, yet a number of challenging questions remain unanswered. Most objective indices for speech intelligibility are essentially based on studies of Western languages. Although several studies of tonal languages such as Mandarin have been conducted, there is much less work on Cantonese. In this research, measurements were made in unoccupied rooms to investigate the acoustical parameters and characteristics of the classrooms. Speech intelligibility tests based on English, Mandarin and Cantonese, together with a survey, were carried out on students aged from 5 to 22 years. The aim is to investigate the differences in intelligibility between English, Mandarin and Cantonese in Hong Kong classrooms. The relationship between the speech transmission index (STI) and Phonetically Balanced (PB) word scores will be developed further, together with an empirical relationship between the speech intelligibility in classrooms and the variations

  7. How may the basal ganglia contribute to auditory categorization and speech perception?

    Directory of Open Access Journals (Sweden)

    Sung-Joo eLim

    2014-08-01

    Full Text Available Listeners must accomplish two complementary perceptual feats in extracting a message from speech. They must discriminate linguistically-relevant acoustic variability and generalize across irrelevant variability. Said another way, they must categorize speech. Since the mapping of acoustic variability is language-specific, these categories must be learned from experience. Thus, understanding how, in general, the auditory system acquires and represents categories can inform us about the toolbox of mechanisms available to speech perception. This perspective invites consideration of findings from cognitive neuroscience literatures outside of the speech domain as a means of constraining models of speech perception. Although neurobiological models of speech perception have mainly focused on cerebral cortex, research outside the speech domain is consistent with the possibility of significant subcortical contributions in category learning. Here, we review the functional role of one such structure, the basal ganglia. We examine research from animal electrophysiology, human neuroimaging, and behavior to consider characteristics of basal ganglia processing that may be advantageous for speech category learning. We also present emerging evidence for a direct role for basal ganglia in learning auditory categories in a complex, naturalistic task intended to model the incidental manner in which speech categories are acquired. To conclude, we highlight new research questions that arise in incorporating the broader neuroscience research literature in modeling speech perception, and suggest how understanding contributions of the basal ganglia can inform attempts to optimize training protocols for learning non-native speech categories in adulthood.

  8. Tools for the assessment of childhood apraxia of speech.

    Science.gov (United States)

    Gubiani, Marileda Barichello; Pagliarin, Karina Carlesso; Keske-Soares, Marcia

    2015-01-01

    This study systematically reviews the literature on the main tools used to evaluate childhood apraxia of speech (CAS). The search strategy includes the Scopus, PubMed, and Embase databases. Empirical studies that used tools for assessing CAS were selected. Articles were selected by two independent researchers. The search retrieved 695 articles, out of which 12 were included in the study. Five tools were identified: Verbal Motor Production Assessment for Children, Dynamic Evaluation of Motor Speech Skill, The Orofacial Praxis Test, Kaufman Speech Praxis Test for Children, and Madison Speech Assessment Protocol. There are few instruments available for CAS assessment and most of them are intended to assess praxis and/or orofacial movements, sequences of orofacial movements, articulation of syllables and phonemes, spontaneous speech, and prosody. There are some tests for assessment and diagnosis of CAS. However, few studies on this topic have been conducted at the national level, and few protocols exist to assess and assist in an accurate diagnosis.

  9. The Hierarchical Cortical Organization of Human Speech Processing.

    Science.gov (United States)

    de Heer, Wendy A; Huth, Alexander G; Griffiths, Thomas L; Gallant, Jack L; Theunissen, Frédéric E

    2017-07-05

    Speech comprehension requires that the brain extract semantic meaning from the spectral features represented at the cochlea. To investigate this process, we performed an fMRI experiment in which five men and two women passively listened to several hours of natural narrative speech. We then used voxelwise modeling to predict BOLD responses based on three different feature spaces that represent the spectral, articulatory, and semantic properties of speech. The amount of variance explained by each feature space was then assessed using a separate validation dataset. Because some responses might be explained equally well by more than one feature space, we used a variance partitioning analysis to determine the fraction of the variance that was uniquely explained by each feature space. Consistent with previous studies, we found that speech comprehension involves hierarchical representations starting in primary auditory areas and moving laterally on the temporal lobe: spectral features are found in the core of A1, mixtures of spectral and articulatory in STG, mixtures of articulatory and semantic in STS, and semantic in STS and beyond. Our data also show that both hemispheres are equally and actively involved in speech perception and interpretation. Further, responses as early in the auditory hierarchy as in STS are more correlated with semantic than spectral representations. These results illustrate the importance of using natural speech in neurolinguistic research. Our methodology also provides an efficient way to simultaneously test multiple specific hypotheses about the representations of speech without using block designs and segmented or synthetic speech. SIGNIFICANCE STATEMENT To investigate the processing steps performed by the human brain to transform natural speech sound into meaningful language, we used models based on a hierarchical set of speech features to predict BOLD responses of individual voxels recorded in an fMRI experiment while subjects listened to
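
    The variance-partitioning logic can be illustrated with ordinary ridge regression: fit each feature space alone and jointly, score on held-out data, and take differences of the explained-variance values. The sketch below is a simplified single-voxel stand-in for the voxelwise procedure, with synthetic arrays in place of real feature spaces and BOLD data; it omits the banded regularization and noise-ceiling corrections such analyses normally use.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.metrics import r2_score

        def explained_variance(X_train, y_train, X_val, y_val, alpha=10.0):
            # Held-out R^2 of a ridge model predicting one voxel's time course
            model = Ridge(alpha=alpha).fit(X_train, y_train)
            return r2_score(y_val, model.predict(X_val))

        # Hypothetical predictors (time points x features) for one voxel:
        # spectral, articulatory and semantic feature spaces, plus the BOLD response y
        rng = np.random.default_rng(0)
        n_train, n_val = 300, 100
        X_spec = rng.standard_normal((n_train + n_val, 20))
        X_art = rng.standard_normal((n_train + n_val, 10))
        X_sem = rng.standard_normal((n_train + n_val, 50))
        y = X_sem @ rng.standard_normal(50) * 0.1 + rng.standard_normal(n_train + n_val)
        y_train, y_val = y[:n_train], y[n_train:]

        X_joint = np.hstack([X_spec, X_art, X_sem])
        X_no_sem = np.hstack([X_spec, X_art])

        r2_joint = explained_variance(X_joint[:n_train], y_train, X_joint[n_train:], y_val)
        r2_no_sem = explained_variance(X_no_sem[:n_train], y_train, X_no_sem[n_train:], y_val)

        # Variance uniquely attributable to the semantic feature space
        print("unique semantic R^2:", r2_joint - r2_no_sem)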

  10. A System for Detecting Miscues in Dyslexic Read Speech

    DEFF Research Database (Denmark)

    Rasmussen, Morten Højfeldt; Tan, Zheng-Hua; Lindberg, Børge

    2009-01-01

    While miscue detection in general is a well explored research field, little attention has so far been paid to miscue detection in dyslexic read speech. This domain differs substantially from the domains that are commonly researched, as, for example, dyslexic read speech includes frequent regressions... that the system detects miscues at a false alarm rate of 5.3% and a miscue detection rate of 40.1%. These results are worse than those of current state-of-the-art reading tutors, perhaps indicating that dyslexic read speech is a challenge to handle...

  11. Inconsistency of speech in children with childhood apraxia of speech, phonological disorders, and typical speech

    Science.gov (United States)

    Iuzzini, Jenya

    There is a lack of agreement on the features used to differentiate Childhood Apraxia of Speech (CAS) from Phonological Disorders (PD). One criterion which has gained consensus is lexical inconsistency of speech (ASHA, 2007); however, no accepted measure of this feature has been defined. Although lexical assessment provides information about consistency of an item across repeated trials, it may not capture the magnitude of inconsistency within an item. In contrast, segmental analysis provides more extensive information about consistency of phoneme usage across multiple contexts and word-positions. The current research compared segmental and lexical inconsistency metrics in preschool-aged children with PD, CAS, and typical development (TD) to determine how inconsistency varies with age in typical and disordered speakers, and whether CAS and PD were differentiated equally well by both assessment levels. Whereas lexical and segmental analyses may be influenced by listener characteristics or speaker intelligibility, the acoustic signal is less vulnerable to these factors. In addition, the acoustic signal may reveal information which is not evident in the perceptual signal. A second focus of the current research was motivated by Blumstein et al.'s (1980) classic study on voice onset time (VOT) in adults with acquired apraxia of speech (AOS) which demonstrated a motor impairment underlying AOS. In the current study, VOT analyses were conducted to determine the relationship between age and group with the voicing distribution for bilabial and alveolar plosives. Findings revealed that 3-year-olds evidenced significantly higher inconsistency than 5-year-olds; segmental inconsistency approached 0% in 5-year-olds with TD, whereas it persisted in children with PD and CAS, suggesting that for children in this age range, inconsistency is a feature of speech disorder rather than typical development (Holm et al., 2007). Likewise, whereas segmental and lexical inconsistency were
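
    As a rough indication of how voice onset time can be estimated from a recording of a syllable-initial plosive, the sketch below marks the burst as the largest jump in high-frequency energy and the voicing onset as the point where low-frequency energy rises afterwards. This is a crude heuristic for illustration only, not the measurement procedure used in the study; the cut-off frequencies and threshold are assumptions.

        import numpy as np
        from scipy.signal import butter, sosfilt

        def estimate_vot_ms(x, fs, frame_s=0.002):
            # Very rough VOT estimate: burst onset to voicing onset, in milliseconds
            high = sosfilt(butter(4, 3000, btype="highpass", fs=fs, output="sos"), x)
            low = sosfilt(butter(4, 500, btype="lowpass", fs=fs, output="sos"), x)

            frame = int(frame_s * fs)
            n = (len(x) // frame) * frame
            hi_e = np.sum(high[:n].reshape(-1, frame) ** 2, axis=1)   # high-band energy per frame
            lo_e = np.sum(low[:n].reshape(-1, frame) ** 2, axis=1)    # low-band energy per frame

            burst = int(np.argmax(np.diff(hi_e))) + 1                 # frame after the largest jump
            voiced = burst + int(np.argmax(lo_e[burst:] > 0.1 * lo_e.max()))
            return (voiced - burst) * frame_s * 1000.0

        # Usage (hypothetical): x, fs = soundfile.read("pa_token.wav"); print(estimate_vot_ms(x, fs))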

  12. Workshop: Welcoming speech

    International Nuclear Information System (INIS)

    Lummerzheim, D.

    1994-01-01

    The welcoming speech underlines the fact that any validation process starting with calculation methods and ending with studies on the long-term behaviour of a repository system can only be effected through laboratory, field and natural-analogue studies. The use of natural analogues (NA) is to secure the biosphere and to verify whether this safety really exists. (HP)

  13. Hearing speech in music.

    Science.gov (United States)

    Ekström, Seth-Reino; Borg, Erik

    2011-01-01

    The masking effect of a piano composition, played at different speeds and in different octaves, on speech-perception thresholds was investigated in 15 normal-hearing and 14 moderately-hearing-impaired subjects. Running speech (just follow conversation, JFC) testing and use of hearing aids increased the everyday validity of the findings. A comparison was made with standard audiometric noises [International Collegium of Rehabilitative Audiology (ICRA) noise and speech spectrum-filtered noise (SPN)]. All masking sounds, music or noise, were presented at the same equivalent sound level (50 dBA). The results showed a significant effect of piano performance speed and octave (P<.01). Low octave and fast tempo had the largest effect; and high octave and slow tempo, the smallest. Music had a lower masking effect than did ICRA noise with two or six speakers at normal vocal effort (P<.01) and SPN (P<.05). Subjects with hearing loss had higher masked thresholds than the normal-hearing subjects (P<.01), but there were smaller differences between masking conditions (P<.01). It is pointed out that music offers an interesting opportunity for studying masking under realistic conditions, where spectral and temporal features can be varied independently. The results have implications for composing music with vocal parts, designing acoustic environments and creating a balance between speech perception and privacy in social settings.

  14. Hearing speech in music

    Directory of Open Access Journals (Sweden)

    Seth-Reino Ekström

    2011-01-01

    Full Text Available The masking effect of a piano composition, played at different speeds and in different octaves, on speech-perception thresholds was investigated in 15 normal-hearing and 14 moderately-hearing-impaired subjects. Running speech (just follow conversation, JFC) testing and use of hearing aids increased the everyday validity of the findings. A comparison was made with standard audiometric noises [International Collegium of Rehabilitative Audiology (ICRA) noise and speech spectrum-filtered noise (SPN)]. All masking sounds, music or noise, were presented at the same equivalent sound level (50 dBA). The results showed a significant effect of piano performance speed and octave (P<.01). Low octave and fast tempo had the largest effect; and high octave and slow tempo, the smallest. Music had a lower masking effect than did ICRA noise with two or six speakers at normal vocal effort (P<.01) and SPN (P<.05). Subjects with hearing loss had higher masked thresholds than the normal-hearing subjects (P<.01), but there were smaller differences between masking conditions (P<.01). It is pointed out that music offers an interesting opportunity for studying masking under realistic conditions, where spectral and temporal features can be varied independently. The results have implications for composing music with vocal parts, designing acoustic environments and creating a balance between speech perception and privacy in social settings.

  15. Free Speech Yearbook 1979.

    Science.gov (United States)

    Kane, Peter E., Ed.

    The seven articles in this collection deal with theoretical and practical freedom of speech issues. Topics covered are: the United States Supreme Court, motion picture censorship, and the color line; judicial decision making; the established scientific community's suppression of the ideas of Immanuel Velikovsky; the problems of avant-garde jazz,…

  16. End-to-end visual speech recognition with LSTMs

    NARCIS (Netherlands)

    Petridis, Stavros; Li, Zuwei; Pantic, Maja

    2017-01-01

    Traditional visual speech recognition systems consist of two stages, feature extraction and classification. Recently, several deep learning approaches have been presented which automatically extract features from the mouth images and aim to replace the feature extraction stage. However, research on
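
    To give a concrete sense of the end-to-end architecture named in the title — a learned per-frame encoder feeding an LSTM whose final state is classified — here is a minimal PyTorch sketch. The dimensions, class count and encoder are assumptions for illustration and do not reproduce the authors' network.

        import torch
        import torch.nn as nn

        class LipReader(nn.Module):
            # Minimal end-to-end sketch: frame encoder -> LSTM -> word classifier
            def __init__(self, frame_dim=32 * 32, hidden=256, n_classes=10):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(frame_dim, 128), nn.ReLU())
                self.lstm = nn.LSTM(128, hidden, batch_first=True)
                self.classifier = nn.Linear(hidden, n_classes)

            def forward(self, frames):                 # frames: (batch, time, frame_dim)
                feats = self.encoder(frames)           # learned features, no hand-crafted stage
                _, (h_n, _) = self.lstm(feats)
                return self.classifier(h_n[-1])        # logits over word classes

        # Hypothetical batch: 4 clips of 30 mouth-region frames, 32x32 pixels flattened
        model = LipReader()
        logits = model(torch.randn(4, 30, 32 * 32))
        print(logits.shape)   # torch.Size([4, 10])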

  17. How does cognitive load influence speech perception? An encoding hypothesis.

    Science.gov (United States)

    Mitterer, Holger; Mattys, Sven L

    2017-01-01

    Two experiments investigated the conditions under which cognitive load exerts an effect on the acuity of speech perception. These experiments extend earlier research by using a different speech perception task (four-interval oddity task) and by implementing cognitive load through a task often thought to be modular, namely, face processing. In the cognitive-load conditions, participants were required to remember two faces presented before the speech stimuli. In Experiment 1, performance in the speech-perception task under cognitive load was not impaired in comparison to a no-load baseline condition. In Experiment 2, we modified the load condition minimally such that it required encoding of the two faces simultaneously with the speech stimuli. As a reference condition, we also used a visual search task that in earlier experiments had led to poorer speech perception. Both concurrent tasks led to decrements in the speech task. The results suggest that speech perception is affected even by loads thought to be processed modularly, and that, critically, encoding in working memory might be the locus of interference.

  18. Free Speech, Hate Speech, and Hate Beards : Language ideologies of Dutch populism

    NARCIS (Netherlands)

    Leezenberg, M.; Silva, D.

    2017-01-01

    This paper explores the discourse and verbal strategies of the Dutch ‘Freedom Party’ (PVV), an islamophobic populist party that emerged in the first decade of the twenty-first century. In particular, it focuses on the linguistic ideologies implicit in PVV discourse, arguing that PVV spokespersons

  19. Nobel peace speech

    Directory of Open Access Journals (Sweden)

    Joshua FRYE

    2017-07-01

    Full Text Available The Nobel Peace Prize has long been considered the premier peace prize in the world. According to Geir Lundestad, Secretary of the Nobel Committee, of the 300 some peace prizes awarded worldwide, “none is in any way as well known and as highly respected as the Nobel Peace Prize” (Lundestad, 2001). Nobel peace speech is a unique and significant international site of public discourse committed to articulating the universal grammar of peace. Spanning over 100 years of sociopolitical history on the world stage, Nobel Peace Laureates richly represent an important cross-section of domestic and international issues increasingly germane to many publics. Communication scholars’ interest in this rhetorical genre has increased in the past decade. Yet, the norm has been to analyze a single speech artifact from a prestigious or controversial winner rather than examine the collection of speeches for generic commonalities of import. In this essay, we analyze the discourse of Nobel peace speech inductively and argue that the organizing principle of the Nobel peace speech genre is the repetitive form of normative liberal principles and values that function as rhetorical topoi. These topoi include freedom and justice and appeal to the inviolable, inborn right of human beings to exercise certain political and civil liberties and the expectation of equality of protection from totalitarian and tyrannical abuses. The significance of this essay to contemporary communication theory is to expand our theoretical understanding of rhetoric’s role in the maintenance and development of an international and cross-cultural vocabulary for the grammar of peace.

  20. Talker Variability in Audiovisual Speech Perception

    Directory of Open Access Journals (Sweden)

    Shannon eHeald

    2014-07-01

    Full Text Available A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker-variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker’s face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker's face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target-word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to audio-only condition. These results suggest that seeing a talker’s face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener a change in talker has occurred.

  1. Capitalising on North American speech resources for the development of a South African English large vocabulary speech recognition system

    CSIR Research Space (South Africa)

    Kamper, H

    2014-11-01

    Full Text Available The NCHLT speech...

  2. Speech-based Class Attendance

    Science.gov (United States)

    Faizel Amri, Umar; Nur Wahidah Nik Hashim, Nik; Hazrin Hany Mohamad Hanif, Noor

    2017-11-01

    In the department of engineering, students are required to fulfil at least 80 percent of class attendance. The conventional method requires the student to sign his/her initials on the attendance sheet. However, this method is prone to cheating, as another student can sign for a classmate who is absent. We develop our hypothesis according to a verse in the Holy Qur’an (95:4), “We have created men in the best of mould”. Based on the verse, we believe each psychological characteristic of a human being is unique and thus their speech characteristics should be unique. In this paper we present the development of a speech biometric-based attendance system. The system requires the user's voice to be enrolled as training data, which is saved in the system to register the user. Subsequent recordings of the user's voice serve as test data to be verified against the training data stored in the system. The system uses PSD (Power Spectral Density) and Transition Parameter as the method for feature extraction from the voices. Euclidean and Mahalanobis distances are used to verify the user's voice. For this research, ten subjects (five female and five male) were chosen to test the performance of the system. The system performance in terms of recognition rate is found to be 60% correct identification of individuals.
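
    A minimal sketch of the kind of pipeline described — spectral features per recording, then verification of a test recording against a user's enrolled samples with Euclidean and Mahalanobis distances — is shown below, using Welch PSD estimates as the features. The feature layout, regularization and acceptance threshold are assumptions; the paper's Transition Parameter features are not reproduced.

        import numpy as np
        from scipy.signal import welch
        from scipy.spatial.distance import euclidean, mahalanobis

        def psd_features(x, fs, n_bands=32):
            # Log power spectral density averaged into a fixed number of bands
            _, pxx = welch(x, fs=fs, nperseg=1024)
            bands = np.array_split(pxx, n_bands)
            return np.log(np.array([band.mean() for band in bands]) + 1e-12)

        def verify(test_vec, enrolled_vecs, threshold=3.0):
            # Accept if the test vector lies close to the user's enrolled feature cloud
            enrolled = np.asarray(enrolled_vecs)
            mean_vec = enrolled.mean(axis=0)
            # Regularize the covariance so its inverse exists with few enrollment samples
            cov = np.cov(enrolled, rowvar=False) + 1e-3 * np.eye(enrolled.shape[1])
            d_mahal = mahalanobis(test_vec, mean_vec, np.linalg.inv(cov))
            d_eucl = euclidean(test_vec, mean_vec)
            return d_mahal < threshold, d_mahal, d_eucl

        # Usage (hypothetical signals at 16 kHz):
        # enrolled = [psd_features(x, 16000) for x in enrollment_recordings]
        # accepted, d_mahal, d_eucl = verify(psd_features(test_recording, 16000), enrolled)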

  3. Blind speech separation system for humanoid robot with FastICA for audio filtering and separation

    Science.gov (United States)

    Budiharto, Widodo; Santoso Gunawan, Alexander Agung

    2016-07-01

    Nowadays, there are many developments in building intelligent humanoid robots, mainly in order to handle voice and image. In this research, we propose a blind speech separation system using FastICA for audio filtering and separation that can be used in education or entertainment. Our main problem is to separate multiple speech sources and also to filter out irrelevant noise. After the speech separation step, the results are integrated with our previous speech and face recognition system, which is based on a Bioloid GP robot with a Raspberry Pi 2 as controller. The experimental results show the accuracy of our blind speech separation system is about 88% in command and query recognition cases.
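
    The separation stage can be sketched with scikit-learn's FastICA: given synchronized recordings from several microphones that each capture a mixture of the talkers, ICA recovers statistically independent source estimates. The file names and two-microphone setup are assumptions; the robot integration and the downstream speech and face recognition stages are not shown.

        import numpy as np
        import soundfile as sf
        from sklearn.decomposition import FastICA

        # Hypothetical synchronized recordings from two microphones on the robot's head
        mix_left, fs = sf.read("mic_left.wav")
        mix_right, _ = sf.read("mic_right.wav")
        n = min(len(mix_left), len(mix_right))
        X = np.column_stack([mix_left[:n], mix_right[:n]])   # samples x microphones

        ica = FastICA(n_components=2, random_state=0)
        sources = ica.fit_transform(X)                       # samples x estimated sources

        # Normalize and write out each separated source for the recognition stage
        for i in range(sources.shape[1]):
            s = sources[:, i]
            s = 0.9 * s / np.max(np.abs(s))
            sf.write(f"separated_{i}.wav", s, fs)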

  4. Using others' words: conversational use of reported speech by individuals with aphasia and their communication partners.

    Science.gov (United States)

    Hengst, Julie A; Frame, Simone R; Neuman-Stritzel, Tiffany; Gannaway, Rachel

    2005-02-01

    Reported speech, wherein one quotes or paraphrases the speech of another, has been studied extensively as a set of linguistic and discourse practices. Researchers agree that reported speech is pervasive, found across languages, and used in diverse contexts. However, to date, there have been no studies of the use of reported speech among individuals with aphasia. Grounded in an interactional sociolinguistic perspective, the study presented here documents and analyzes the use of reported speech by 7 adults with mild to moderately severe aphasia and their routine communication partners. Each of the 7 pairs was videotaped in 4 everyday activities at home or around the community, yielding over 27 hr of conversational interaction for analysis. A coding scheme was developed that identified 5 types of explicitly marked reported speech: direct, indirect, projected, indexed, and undecided. Analysis of the data documented reported speech as a common discourse practice used successfully by the individuals with aphasia and their communication partners. All participants produced reported speech at least once, and across all observations the target pairs produced 400 reported speech episodes (RSEs), 149 by individuals with aphasia and 251 by their communication partners. For all participants, direct and indirect forms were the most prevalent (70% of RSEs). Situated discourse analysis of specific episodes of reported speech used by 3 of the pairs provides detailed portraits of the diverse interactional, referential, social, and discourse functions of reported speech and explores ways that the pairs used reported speech to successfully frame talk despite their ongoing management of aphasia.

  5. ERC supports antihydrogen research

    CERN Multimedia

    Katarina Anthony

    2013-01-01

    As part of a Europe-wide effort to promote high-level research, the European Research Council (ERC) has awarded a €2.14 million grant to ALPHA spokesperson Jeffrey Hangst, which will further the collaboration’s study of the antihydrogen spectrum. The grant will be used to purchase laser spectroscopy equipment for the new ALPHA-2 set-up.   ALPHA Spokesperson, Jeffrey Hangst, in front of the new ALPHA-2 set-up. The incorporation of lasers into ALPHA-2 will allow the team to take precise measurements of trapped antihydrogen. Among the new equipment financed by the grant will be a high-precision laser and stabilisation system to study the transition from the ground state to the first excited state in antihydrogen. As this spectral line is very well known in hydrogen, its study in antihydrogen will provide essential data for matter/antimatter symmetry investigations. “The grant has come at a perfect time for us,” says Jeffrey Hangst. “We wil...

  6. Speech Motor Control in Fluent and Dysfluent Speech Production of an Individual with Apraxia of Speech and Broca's Aphasia

    Science.gov (United States)

    van Lieshout, Pascal H. H. M.; Bose, Arpita; Square, Paula A.; Steele, Catriona M.

    2007-01-01

    Apraxia of speech (AOS) is typically described as a motor-speech disorder with clinically well-defined symptoms, but without a clear understanding of the underlying problems in motor control. A number of studies have compared the speech of subjects with AOS to the fluent speech of controls, but only a few have included speech movement data and if…

  7. Resourcing speech-language pathologists to work with multilingual children.

    Science.gov (United States)

    McLeod, Sharynne

    2014-06-01

    Speech-language pathologists play important roles in supporting people to be competent communicators in the languages of their communities. However, with over 7000 languages spoken throughout the world and the majority of the global population being multilingual, there is often a mismatch between the languages spoken by children and families and their speech-language pathologists. This paper provides insights into service provision for multilingual children within an English-dominant country by viewing Australia's multilingual population as a microcosm of ethnolinguistic minorities. Recent population studies of Australian pre-school children show that their most common languages other than English are: Arabic, Cantonese, Vietnamese, Italian, Mandarin, Spanish, and Greek. Although 20.2% of services by Speech Pathology Australia members are offered in languages other than English, there is a mismatch between the language of the services and the languages of children within similar geographical communities. Australian speech-language pathologists typically use informal or English-based assessments and intervention tools with multilingual children. Thus, there is a need for accessible culturally and linguistically appropriate resources for working with multilingual children. Recent international collaborations have resulted in practical strategies to support speech-language pathologists during assessment, intervention, and collaboration with families, communities, and other professionals. The International Expert Panel on Multilingual Children's Speech was assembled to prepare a position paper to address issues faced by speech-language pathologists when working with multilingual populations. The Multilingual Children's Speech website ( http://www.csu.edu.au/research/multilingual-speech ) addresses one of the aims of the position paper by providing free resources and information for speech-language pathologists about more than 45 languages. These international

  8. Cognitive control components and speech symptoms in people with schizophrenia.

    Science.gov (United States)

    Becker, Theresa M; Cicero, David C; Cowan, Nelson; Kerns, John G

    2012-03-30

    Previous schizophrenia research suggests poor cognitive control is associated with schizophrenia speech symptoms. However, cognitive control is a broad construct. Two important cognitive control components are poor goal maintenance and poor verbal working memory storage. In the current research, people with schizophrenia (n=45) performed three cognitive tasks that varied in their goal maintenance and verbal working memory storage demands. Speech symptoms were assessed using clinical rating scales, ratings of disorganized speech from typed transcripts, and self-reported disorganization. Overall, alogia was associated with both goal maintenance and verbal working memory tasks. Objectively rated disorganized speech was associated with poor goal maintenance and with a task that included both goal maintenance and verbal working memory storage demands. In contrast, self-reported disorganization was unrelated to either amount of objectively rated disorganized speech or to cognitive control task performance, instead being associated with negative mood symptoms. Overall, our results suggest that alogia is associated with both poor goal maintenance and poor verbal working memory storage and that disorganized speech is associated with poor goal maintenance. In addition, patients' own assessment of their disorganization is related to negative mood, but perhaps not to objective disorganized speech or to cognitive control task performance. Published by Elsevier Ireland Ltd.

  9. The Apraxia of Speech Rating Scale: a tool for diagnosis and description of apraxia of speech.

    Science.gov (United States)

    Strand, Edythe A; Duffy, Joseph R; Clark, Heather M; Josephs, Keith

    2014-01-01

    The purpose of this report is to describe an initial version of the Apraxia of Speech Rating Scale (ASRS), a scale designed to quantify the presence or absence, relative frequency, and severity of characteristics frequently associated with apraxia of speech (AOS). In this paper we report intra-judge and inter-judge reliability, as well as indices of validity, for the ASRS which was completed for 133 adult participants with a neurodegenerative speech or language disorder, 56 of whom had AOS. The overall inter-judge ICC among three clinicians was 0.94 for the total ASRS score and 0.91 for the number of AOS characteristics identified as present. Intra-judge ICC measures were high, ranging from 0.91 to 0.98. Validity was demonstrated on the basis of strong correlations with independent clinical diagnosis, as well as strong correlations of ASRS scores with independent clinical judgments of AOS severity. Results suggest that the ASRS is a potentially useful tool for documenting the presence and severity of characteristics of AOS. At this point in its development it has good potential for broader clinical use and for better subject description in AOS research. The Apraxia of Speech Rating Scale: A new tool for diagnosis and description of apraxia of speech 1. The reader will be able to explain characteristics of apraxia of speech. 2. The reader will be able to demonstrate use of a rating scale to document the presence and severity of speech characteristics. 3. The reader will be able to explain the reliability and validity of the ASRS. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Song and speech: examining the link between singing talent and speech imitation ability

    Science.gov (United States)

    Christiner, Markus; Reiterer, Susanne M.

    2013-01-01

    In previous research on speech imitation, musicality, and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer's sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and auditory memory with singing fitting better into the category of “speech” on the productive level and “music” on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. (1) Motor flexibility and the ability to sing improve language and musical function. (2) Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. (3) The ability to sing improves the memory span of the auditory working memory. PMID:24319438

  11. Predicting automatic speech recognition performance over communication channels from instrumental speech quality and intelligibility scores

    NARCIS (Netherlands)

    Gallardo, L.F.; Möller, S.; Beerends, J.

    2017-01-01

    The performance of automatic speech recognition based on coded-decoded speech heavily depends on the quality of the transmitted signals, determined by channel impairments. This paper examines relationships between speech recognition performance and measurements of speech quality and intelligibility

  12. THE DIRECTIVE SPEECH ACTS USED IN ENGLISH SPEAKING CLASS

    Directory of Open Access Journals (Sweden)

    Muhammad Khatib Bayanuddin

    2016-12-01

    Full Text Available This research analyzes the directive speech acts used in the third-semester English speaking class of the English study program of IAIN STS Jambi. The aims of this research are to describe the types of directive speech acts and the politeness strategies found in the English speaking class. The research used a descriptive qualitative method to characterize the types and politeness strategies of directive speech acts based on data from the English speaking class. The results showed several types of directive speech acts in the English speaking class, namely requestives, questions, requirements, prohibitives, permissives, and advisories, as well as politeness strategies comprising on-record indirect strategies (prediction, strong obligation, possibility, weaker obligation, and volitional statements) and direct strategies (imperative, performative, and nonsentential strategies). The findings are hoped to add to knowledge of linguistics, especially of directive speech acts, and to be developed in future research. Key words: directive speech acts, types, politeness strategies.

  13. From Persuasive to Authoritative Speech Genres

    DEFF Research Database (Denmark)

    Nørreklit, Hanne; Scapens, Robert

    2014-01-01

    Purpose: The purpose of this paper is to contrast the speech genres in the original and the published versions of an article written by academic researchers and published in the US practitioner-oriented journal, Strategic Finance. The original version, submitted by the researchers, was rewritten...... a world". Research limitations/implications: The choice of language and argumentation should be given careful attention when the authors craft the accounting frameworks and research papers, and especially when the authors seek to communicate the findings of the research to practitioners. However...

  14. Speech-Language Therapy (For Parents)

    Science.gov (United States)

    KidsHealth / For Parents / Speech-Language Therapy: a parent-oriented overview of speech-language therapy, which helps most kids with speech and/or language disorders, covering speech disorders, language disorders, and feeding disorders.

  15. [Improving speech comprehension using a new cochlear implant speech processor].

    Science.gov (United States)

    Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A

    2009-06-01

    The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually profoundly deaf experienced CI users who demonstrated speech comprehension performance with their current speech processor on the Oldenburg sentence test (OLSA) in quiet conditions of at least 80% correct scores and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise.In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significant improved signal-to-noise ratio for speech comprehension thresholds (i.e., signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg

  16. Neurophysiology of speech differences in childhood apraxia of speech.

    Science.gov (United States)

    Preston, Jonathan L; Molfese, Peter J; Gumkowski, Nina; Sorcinelli, Andrea; Harwood, Vanessa; Irwin, Julia R; Landi, Nicole

    2014-01-01

    Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes.

  17. Simultaneous natural speech and AAC interventions for children with childhood apraxia of speech: lessons from a speech-language pathologist focus group.

    Science.gov (United States)

    Oommen, Elizabeth R; McCarthy, John W

    2015-03-01

    In childhood apraxia of speech (CAS), children exhibit varying levels of speech intelligibility depending on the nature of errors in articulation and prosody. Augmentative and alternative communication (AAC) strategies are beneficial, and commonly adopted with children with CAS. This study focused on the decision-making process and strategies adopted by speech-language pathologists (SLPs) when simultaneously implementing interventions that focused on natural speech and AAC. Eight SLPs, with significant clinical experience in CAS and AAC interventions, participated in an online focus group. Thematic analysis revealed eight themes: key decision-making factors; treatment history and rationale; benefits; challenges; therapy strategies and activities; collaboration with team members; recommendations; and other comments. Results are discussed along with clinical implications and directions for future research.

  18. Treating speech subsystems in childhood apraxia of speech with tactual input: the PROMPT approach.

    Science.gov (United States)

    Dale, Philip S; Hayden, Deborah A

    2013-11-01

    Prompts for Restructuring Oral Muscular Phonetic Targets (PROMPT; Hayden, 2004; Hayden, Eigen, Walker, & Olsen, 2010), a treatment approach for the improvement of speech sound disorders in children, uses tactile-kinesthetic-proprioceptive (TKP) cues to support and shape movements of the oral articulators. No research to date has systematically examined the efficacy of PROMPT for children with childhood apraxia of speech (CAS). Four children (ages 3;6 [years;months] to 4;8), all meeting the American Speech-Language-Hearing Association (2007) criteria for CAS, were treated using PROMPT. All children received 8 weeks of 2 × per week treatment, including at least 4 weeks of full PROMPT treatment that included TKP cues. During the first 4 weeks, 2 of the 4 children received treatment that included all PROMPT components except TKP cues. This design permitted both between-subjects and within-subjects comparisons to evaluate the effect of TKP cues. Gains in treatment were measured by standardized tests and by criterion-referenced measures based on the production of untreated probe words, reflecting change in speech movements and auditory perceptual accuracy. All 4 children made significant gains during treatment, but measures of motor speech control and untreated word probes provided evidence for more gain when TKP cues were included. PROMPT as a whole appears to be effective for treating children with CAS, and the inclusion of TKP cues appears to facilitate greater effect.

  19. Speech and neurology-chemical impairment correlates

    Science.gov (United States)

    Hayre, Harb S.

    2002-05-01

    Speech correlates of alcohol/drug impairment and their neurological basis are presented, with suggestions for further research on impairment from poly-drug/medicine/inhalant/chew use/abuse and on prediagnosis of many neuro- and endocrine-related disorders. Nerve cells all over the body detect chemical entry by smoking, injection, drinking, chewing, or skin absorption, and transmit neurosignals to their corresponding cerebral subsystems, which in turn affect the speech centers: Broca's and Wernicke's areas and the motor cortex. For instance, gustatory cells in the mouth, cranial and spinal nerve cells in the skin, and cilia/olfactory neurons in the nose are the intake-sensing nerve cells. Alcohol depression and brain cell damage were detected from telephone speech using IMPAIRLYZER-TM, and the results of these studies were presented at the 1996 ASA meeting in Indianapolis and the 2001 German Acoustical Society (DEGA) conference in Hamburg, Germany, respectively. Speech-based chemical impairment measure results were presented at the 2001 ASA meeting in Chicago. New data on neurotolerance-based chemical impairment for alcohol, drugs, and medicine will be presented and shown not to fully support the NIDA-SAMHSA drug and alcohol thresholds used in the drug testing domain.

  20. Long-term temporal tracking of speech rate affects spoken-word recognition.

    Science.gov (United States)

    Baese-Berk, Melissa M; Heffner, Christopher C; Dilley, Laura C; Pitt, Mark A; Morrill, Tuuli H; McAuley, J Devin

    2014-08-01

    Humans unconsciously track a wide array of distributional characteristics in their sensory environment. Recent research in spoken-language processing has demonstrated that the speech rate surrounding a target region within an utterance influences which words, and how many words, listeners hear later in that utterance. On the basis of hypotheses that listeners track timing information in speech over long timescales, we investigated the possibility that the perception of words is sensitive to speech rate over such a timescale (e.g., an extended conversation). Results demonstrated that listeners tracked variation in the overall pace of speech over an extended duration (analogous to that of a conversation that listeners might have outside the lab) and that this global speech rate influenced which words listeners reported hearing. The effects of speech rate became stronger over time. Our findings are consistent with the hypothesis that neural entrainment by speech occurs on multiple timescales, some lasting more than an hour. © The Author(s) 2014.

  1. Audiovisual speech facilitates voice learning.

    Science.gov (United States)

    Sheffert, Sonya M; Olson, Elizabeth

    2004-02-01

    In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

  2. An Investigation of effective factors on nurses' speech errors

    Directory of Open Access Journals (Sweden)

    Maryam Tafaroji yeganeh

    2017-03-01

    Full Text Available Background: The study of speech errors is a branch of psycholinguistics. A speech error, or slip of the tongue, is a natural process that happens to everyone. This research matters because of the sensitivity and importance of nursing, where speech errors may interfere with the treatment of patients; unfortunately, no research had yet been done in this field. This research was conducted to study the factors (personality, stress, fatigue and insomnia) that cause speech errors among nurses of Ilam province. Materials and Methods: The sample of this correlational-descriptive research consists of 50 randomly selected nurses working in Mustafa Khomeini Hospital of Ilam province. Data were collected using the Minnesota Multiphasic Personality Inventory, the NEO Five-Factor Inventory and the Expanded Nursing Stress Scale, and were analyzed using SPSS version 20 with descriptive, inferential and multivariate linear regression or two-variable statistical methods (significance level: p≤0.05). Results: 30 (60%) of the nurses participating in the study were female and 19 (38%) were male. Conclusion: In this study, all three factors (type of personality, stress and fatigue) had significant effects on nurses' speech errors.

  3. Neural correlates of quality perception for complex speech signals

    CERN Document Server

    Antons, Jan-Niklas

    2015-01-01

    This book interconnects two essential disciplines to study the perception of speech: Neuroscience and Quality of Experience, which to date have rarely been used together for the purposes of research on speech quality perception. In five key experiments, the book demonstrates the application of standard clinical methods in neurophysiology on the one hand, and of methods used in fields of research concerned with speech quality perception on the other. Using this combination, the book shows that speech stimuli with different lengths and different quality impairments are accompanied by physiological reactions related to quality variations, e.g., a positive peak in an event-related potential. Furthermore, it demonstrates that – in most cases – quality impairment intensity has an impact on the intensity of physiological reactions.

  4. Abortion and compelled physician speech.

    Science.gov (United States)

    Orentlicher, David

    2015-01-01

    Informed consent mandates for abortion providers may infringe the First Amendment's freedom of speech. On the other hand, they may reinforce the physician's duty to obtain informed consent. Courts can promote both doctrines by ensuring that compelled physician speech pertains to medical facts about abortion rather than abortion ideology and that compelled speech is truthful and not misleading. © 2015 American Society of Law, Medicine & Ethics, Inc.

  5. Comparison of Speech Perception in Background Noise with Acceptance of Background Noise in Aided and Unaided Conditions.

    Science.gov (United States)

    Nabelek, Anna K.; Tampas, Joanna W.; Burchfield, Samuel B.

    2004-01-01

    Background noise is a significant factor influencing hearing-aid satisfaction and is a major reason for rejection of hearing aids. Attempts have been made by previous researchers to relate the use of hearing aids to speech perception in noise (SPIN), with an expectation of improved speech perception followed by an…

  6. Speech Recognition on Mobile Devices

    DEFF Research Database (Denmark)

    Tan, Zheng-Hua; Lindberg, Børge

    2010-01-01

    The enthusiasm of deploying automatic speech recognition (ASR) on mobile devices is driven both by remarkable advances in ASR technology and by the demand for efficient user interfaces on such devices as mobile phones and personal digital assistants (PDAs). This chapter presents an overview of ASR in the mobile context covering motivations, challenges, fundamental techniques and applications. Three ASR architectures are introduced: embedded speech recognition, distributed speech recognition and network speech recognition. Their pros and cons and implementation issues are discussed. Applications within......

  7. Speech intelligibility after gingivectomy of excess palatal tissue

    Directory of Open Access Journals (Sweden)

    Aruna Balasundaram

    2014-01-01

    Full Text Available The aim is to appreciate any enhancement in speech following gingivectomy of enlarged anterior palatal gingiva. The periodontal literature has documented various conditions, the pathophysiology, and treatment modalities of gingival enlargement, but the relationship between gingival maladies and speech alteration has received scant attention. This case report describes the improvement of an altered speech pattern secondary to a gingivectomy procedure. A systemically healthy 24-year-old female patient presented with bilateral anterior gingival enlargement, provisionally diagnosed as "gingival abscess with inflammatory enlargement" on the palatal aspect from the right maxillary canine to the left maxillary canine. A bilateral gingivectomy was performed by external bevel incision on the anterior palatal gingiva, and a large wedge of epithelium and connective tissue was removed. The patient and her close acquaintances noticed a great improvement in her pronunciation and enunciation of sounds such as "t", "d", "n", "l", and "th" following removal of the excess palatal gingival tissue, which was also reflected in a visual analog scale score. Linguistic research has documented the significance of tongue-palate contact during speech. Any excess gingival tissue in the palatal region disrupts speech by altering tongue-palate contact, and periodontal surgery such as gingivectomy may improve the disrupted phonetics. Excess palatal gingival tissue impedes tongue-palate contact and interferes with speech; pronunciation of consonants such as "t", "d", "n", "l", and "th" is altered with enlarged anterior palatal gingiva, and excision of the enlarged palatal tissue results in improvement of speech.

  8. Recognizing emotional speech in Persian: a validated database of Persian emotional speech (Persian ESD).

    Science.gov (United States)

    Keshtiari, Niloofar; Kuhlmann, Michael; Eslami, Moharram; Klann-Delius, Gisela

    2015-03-01

    Research on emotional speech often requires valid stimuli for assessing perceived emotion through prosody and lexical content. To date, no comprehensive emotional speech database for Persian is officially available. The present article reports the process of designing, compiling, and evaluating a comprehensive emotional speech database for colloquial Persian. The database contains a set of 90 validated novel Persian sentences classified in five basic emotional categories (anger, disgust, fear, happiness, and sadness), as well as a neutral category. These sentences were validated in two experiments by a group of 1,126 native Persian speakers. The sentences were articulated by two native Persian speakers (one male, one female) in three conditions: (1) congruent (emotional lexical content articulated in a congruent emotional voice), (2) incongruent (neutral sentences articulated in an emotional voice), and (3) baseline (all emotional and neutral sentences articulated in neutral voice). The speech materials comprise about 470 sentences. The validity of the database was evaluated by a group of 34 native speakers in a perception test. Utterances recognized better than five times chance performance (71.4 %) were regarded as valid portrayals of the target emotions. Acoustic analysis of the valid emotional utterances revealed differences in pitch, intensity, and duration, attributes that may help listeners to correctly classify the intended emotion. The database is designed to be used as a reliable material source (for both text and speech) in future cross-cultural or cross-linguistic studies of emotional speech, and it is available for academic research purposes free of charge. To access the database, please contact the first author.

  9. Speech production, dual-process theory, and the attentive addressee

    OpenAIRE

    Pollard, A. J.

    2012-01-01

    This thesis outlines a model of Speaker-Addressee interaction that suggests some answers to two linked problems current in speech production. The first concerns an under-researched issue in psycholinguistics: how are decisions about speech content – conceptualization – carried out? The second, a pragmatics problem, asks how Speakers, working under the heavy time pressures of normal dialogue, achieve optimal relevance often enough for successful communication to take place. L...

  10. Seeing the person? Disability theories and speech and language therapy.

    Science.gov (United States)

    Jordan, L; Bryan, K

    2001-01-01

    The potential value of a framework enabling practitioners to conceptualise speech and language therapy from a range of perspectives engendered by different theories about disability is explored. Four disability research paradigms are used to categorise professional activities, whilst the 'individual' and 'social' models of disability are considered as alternative value systems. Challenges facing speech and language therapists in developing roles and services to embrace different perspectives are outlined.

  11. Novel candidate genes and regions for childhood apraxia of speech identified by array comparative genomic hybridization.

    Science.gov (United States)

    Laffin, Jennifer J S; Raca, Gordana; Jackson, Craig A; Strand, Edythe A; Jakielski, Kathy J; Shriberg, Lawrence D

    2012-11-01

    The goal of this study was to identify new candidate genes and genomic copy-number variations associated with a rare, severe, and persistent speech disorder termed childhood apraxia of speech. Childhood apraxia of speech is the speech disorder segregating with a mutation in FOXP2 in a multigenerational London pedigree widely studied for its role in the development of speech-language in humans. A total of 24 participants who were suspected to have childhood apraxia of speech were assessed using a comprehensive protocol that samples speech in challenging contexts. All participants met clinical-research criteria for childhood apraxia of speech. Array comparative genomic hybridization analyses were completed using a customized 385K Nimblegen array (Roche Nimblegen, Madison, WI) with increased coverage of genes and regions previously associated with childhood apraxia of speech. A total of 16 copy-number variations with potential consequences for speech-language development were detected in 12 or half of the 24 participants. The copy-number variations occurred on 10 chromosomes, 3 of which had two to four candidate regions. Several participants were identified with copy-number variations in two to three regions. In addition, one participant had a heterozygous FOXP2 mutation and a copy-number variation on chromosome 2, and one participant had a 16p11.2 microdeletion and copy-number variations on chromosomes 13 and 14. Findings support the likelihood of heterogeneous genomic pathways associated with childhood apraxia of speech.

  12. Multimodal Speech Capture System for Speech Rehabilitation and Learning.

    Science.gov (United States)

    Sebkhi, Nordine; Desai, Dhyey; Islam, Mohammad; Lu, Jun; Wilson, Kimberly; Ghovanloo, Maysam

    2017-11-01

    Speech-language pathologists (SLPs) are trained to correct articulation of people diagnosed with motor speech disorders by analyzing articulators' motion and assessing speech outcome while patients speak. To assist SLPs in this task, we are presenting the multimodal speech capture system (MSCS) that records and displays kinematics of key speech articulators, the tongue and lips, along with voice, using unobtrusive methods. Collected speech modalities, tongue motion, lips gestures, and voice are visualized not only in real-time to provide patients with instant feedback but also offline to allow SLPs to perform post-analysis of articulators' motion, particularly the tongue, with its prominent but hardly visible role in articulation. We describe the MSCS hardware and software components, and demonstrate its basic visualization capabilities by a healthy individual repeating the words "Hello World." A proof-of-concept prototype has been successfully developed for this purpose, and will be used in future clinical studies to evaluate its potential impact on accelerating speech rehabilitation by enabling patients to speak naturally. Pattern matching algorithms to be applied to the collected data can provide patients with quantitative and objective feedback on their speech performance, unlike current methods that are mostly subjective, and may vary from one SLP to another.
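
    As one possible example of the pattern-matching idea mentioned above (not the MSCS implementation itself), the sketch below computes a plain dynamic-time-warping (DTW) distance between a patient's articulatory trajectory and a reference trajectory; the arrays are synthetic placeholders.

        import numpy as np

        def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
            """DTW distance between two (frames, dims) trajectories."""
            n, m = len(a), len(b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = np.linalg.norm(a[i - 1] - b[j - 1])       # frame-to-frame distance
                    cost[i, j] = d + min(cost[i - 1, j],          # insertion
                                         cost[i, j - 1],          # deletion
                                         cost[i - 1, j - 1])      # match
            return float(cost[n, m])

        reference = np.random.rand(100, 3)   # placeholder reference tongue-marker trajectory
        attempt = np.random.rand(120, 3)     # placeholder patient trajectory
        print(f"DTW distance: {dtw_distance(attempt, reference):.3f}")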

  13. Measurement of speech parameters in casual speech of dementia patients

    NARCIS (Netherlands)

    Ossewaarde, Roelant; Jonkers, Roel; Jalvingh, Fedor; Bastiaanse, Yvonne

    Measurement of speech parameters in casual speech of dementia patients. Roelant Adriaan Ossewaarde (1,2), Roel Jonkers (1), Fedor Jalvingh (1,3), Roelien Bastiaanse (1). (1) CLCG, University of Groningen (NL); (2) HU University of Applied Sciences Utrecht (NL); (3) St. Marienhospital - Vechta, Geriatric Clinic Vechta.

  14. Alternative Speech Communication System for Persons with Severe Speech Disorders

    Science.gov (United States)

    Selouani, Sid-Ahmed; Sidi Yakoub, Mohammed; O'Shaughnessy, Douglas

    2009-12-01

    Assistive speech-enabled systems are proposed to help both French and English speaking persons with various speech disorders. The proposed assistive systems use automatic speech recognition (ASR) and speech synthesis in order to enhance the quality of communication. These systems aim at improving the intelligibility of pathologic speech, making it as natural as possible and close to the original voice of the speaker. The resynthesized utterances use new basic units, a new concatenating algorithm and a grafting technique to correct the poorly pronounced phonemes. The ASR responses are uttered by the new speech synthesis system in order to convey an intelligible message to listeners. Experiments involving four American speakers with severe dysarthria and two Acadian French speakers with sound substitution disorders (SSDs) are carried out to demonstrate the efficiency of the proposed methods. An improvement in the Perceptual Evaluation of Speech Quality (PESQ) value of 5% and more than 20% is achieved by the speech synthesis systems that deal with SSD and dysarthria, respectively.
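
    For readers who wish to reproduce a PESQ-style comparison, the following is a minimal sketch using the third-party Python 'pesq' package (an assumption on our part; it is not the evaluation code used in the study). The file names are hypothetical, and both signals must share the same 8 kHz or 16 kHz sampling rate.

        import soundfile as sf
        from pesq import pesq        # pip install pesq (ITU-T P.862 implementation)

        ref, fs = sf.read("original_utterance.wav")       # reference (original voice)
        deg, _ = sf.read("resynthesized_utterance.wav")   # degraded/resynthesized output

        mode = "wb" if fs == 16000 else "nb"              # wideband (16 kHz) or narrowband (8 kHz)
        score = pesq(fs, ref, deg, mode)
        print(f"PESQ score: {score:.2f}")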

  15. Speech impairment in Down syndrome: a review.

    Science.gov (United States)

    Kent, Ray D; Vorperian, Houri K

    2013-02-01

    This review summarizes research on disorders of speech production in Down syndrome (DS) for the purposes of informing clinical services and guiding future research. Review of the literature was based on searches using MEDLINE, Google Scholar, PsycINFO, and HighWire Press, as well as consideration of reference lists in retrieved documents (including online sources). Search terms emphasized functions related to voice, articulation, phonology, prosody, fluency, and intelligibility. The following conclusions pertain to four major areas of review: voice, speech sounds, fluency and prosody, and intelligibility. The first major area is voice. Although a number of studies have reported on vocal abnormalities in DS, major questions remain about the nature and frequency of the phonatory disorder. Results of perceptual and acoustic studies have been mixed, making it difficult to draw firm conclusions or even to identify sensitive measures for future study. The second major area is speech sounds. Articulatory and phonological studies show that speech patterns in DS are a combination of delayed development and errors not seen in typical development. Delayed (i.e., developmental) and disordered (i.e., nondevelopmental) patterns are evident by the age of about 3 years, although DS-related abnormalities possibly appear earlier, even in infant babbling. The third major area is fluency and prosody. Stuttering and/or cluttering occur in DS at rates of 10%-45%, compared with about 1% in the general population. Research also points to significant disturbances in prosody. The fourth major area is intelligibility. Studies consistently show marked limitations in this area, but only recently has the research gone beyond simple rating scales.

  16. Auditory Modeling for Noisy Speech Recognition

    National Research Council Canada - National Science Library

    2000-01-01

    ... digital filtering for noise cancellation which interfaces to speech recognition software. It uses auditory features in speech recognition training, and provides applications to multilingual spoken language translation...

  17. Speech Training for Inmate Rehabilitation.

    Science.gov (United States)

    Parkinson, Michael G.; Dobkins, David H.

    1982-01-01

    Using a computerized content analysis, the authors demonstrate changes in speech behaviors of prison inmates. They conclude that two to four hours of public speaking training can have only limited effect on students who live in a culture in which "prison speech" is the expected and rewarded form of behavior. (PD)

  18. Separating Underdetermined Convolutive Speech Mixtures

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Wang, DeLiang; Larsen, Jan

    2006-01-01

    a method for underdetermined blind source separation of convolutive mixtures. The proposed framework is applicable for separation of instantaneous as well as convolutive speech mixtures. It is possible to iteratively extract each speech signal from the mixture by combining blind source separation...

  19. Speech recognition from spectral dynamics

    Indian Academy of Sciences (India)

    Some of the history of the gradual infusion of the modulation spectrum concept into automatic speech recognition (ASR) comes next, pointing to the relationship of modulation spectrum processing to well-accepted ASR techniques such as dynamic speech features or RelAtive SpecTrAl (RASTA) filtering. Next, the frequency ...
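
    To make the RASTA idea concrete, the sketch below applies one common formulation of the RASTA band-pass filter to log spectral trajectories (an illustration under our own assumptions, not code from the cited work); the input array is a synthetic placeholder.

        import numpy as np
        from scipy.signal import lfilter

        def rasta_filter(logspec: np.ndarray) -> np.ndarray:
            """Band-pass filter each spectral band's trajectory over time (axis 0)."""
            b = 0.1 * np.array([2.0, 1.0, 0.0, -1.0, -2.0])   # FIR numerator (a time derivative)
            a = np.array([1.0, -0.98])                        # single pole for leaky integration
            return lfilter(b, a, logspec, axis=0)

        logspec = np.log(np.random.rand(200, 20) + 1e-6)      # placeholder (frames, bands) log spectrum
        filtered = rasta_filter(logspec)
        print(filtered.shape)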

  20. Speech Prosody in Cerebellar Ataxia

    Science.gov (United States)

    Casper, Maureen A.; Raphael, Lawrence J.; Harris, Katherine S.; Geibel, Jennifer M.

    2007-01-01

    Persons with cerebellar ataxia exhibit changes in physical coordination and speech and voice production. Previously, these alterations of speech and voice production were described primarily via perceptual coordinates. In this study, the spatial-temporal properties of syllable production were examined in 12 speakers, six of whom were healthy…

  1. Assessment of speech intelligibility in background noise and reverberation

    DEFF Research Database (Denmark)

    Nielsen, Jens Bo

    Reliable methods for assessing speech intelligibility are essential within hearing research, audiology, and related areas. Such methods can be used for obtaining a better understanding of how speech intelligibility is affected by, e.g., various environmental factors or different types of hearing...... impairment. In this thesis, two sentence-based tests for speech intelligibility in Danish were developed. The first test is the Conversational Language Understanding Evaluation (CLUE), which is based on the principles of the original American-English Hearing in Noise Test (HINT). The second test...... is a modified version of CLUE where the speech material and the scoring rules have been reconsidered. An extensive validation of the modified test was conducted with both normal-hearing and hearing-impaired listeners. The validation showed that the test produces reliable results for both groups of listeners...

  2. Hybrid methodological approach to context-dependent speech recognition

    Directory of Open Access Journals (Sweden)

    Dragiša Mišković

    2017-01-01

    Full Text Available Although the importance of contextual information in speech recognition has been acknowledged for a long time now, it has remained clearly underutilized even in state-of-the-art speech recognition systems. This article introduces a novel, methodologically hybrid approach to the research question of context-dependent speech recognition in human–machine interaction. To the extent that it is hybrid, the approach integrates aspects of both statistical and representational paradigms. We extend the standard statistical pattern-matching approach with a cognitively inspired and analytically tractable model with explanatory power. This methodological extension allows for accounting for contextual information which is otherwise unavailable in speech recognition systems, and using it to improve post-processing of recognition hypotheses. The article introduces an algorithm for evaluation of recognition hypotheses, illustrates it for concrete interaction domains, and discusses its implementation within two prototype conversational agents.

  3. Perceptual statistical learning over one week in child speech production.

    Science.gov (United States)

    Richtsmeier, Peter T; Goffman, Lisa

    2017-07-01

    What cognitive mechanisms account for the trajectory of speech sound development, in particular, gradually increasing accuracy during childhood? An intriguing potential contributor is statistical learning, a type of learning that has been studied frequently in infant perception but less often in child speech production. To assess the relevance of statistical learning to developing speech accuracy, we carried out a statistical learning experiment with four- and five-year-olds in which statistical learning was examined over one week. Children were familiarized with and tested on word-medial consonant sequences in novel words. There was only modest evidence for statistical learning, primarily in the first few productions of the first session. This initial learning effect nevertheless aligns with previous statistical learning research. Furthermore, the overall learning effect was similar to an estimate of weekly accuracy growth based on normative studies. The results implicate other important factors in speech sound development, particularly learning via production. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Integrated Phoneme Subspace Method for Speech Feature Extraction

    Directory of Open Access Journals (Sweden)

    Park Hyunsin

    2009-01-01

    Full Text Available Speech feature extraction has been a key focus in robust speech recognition research. In this work, we discuss data-driven linear feature transformations applied to feature vectors in the logarithmic mel-frequency filter bank domain. Transformations are based on principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA). Furthermore, this paper introduces a new feature extraction technique that collects the correlation information among phoneme subspaces and reconstructs feature space for representing phonemic information efficiently. The proposed speech feature vector is generated by projecting an observed vector onto an integrated phoneme subspace (IPS) based on PCA or ICA. The performance of the new feature was evaluated for isolated word speech recognition. The proposed method provided higher recognition accuracy than conventional methods in clean and reverberant environments.
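
    A minimal sketch of a data-driven linear transform in the log mel-filter-bank domain is given below, using PCA as one of the transforms discussed; the file name, the dimensions, and the fact that the transform is fit on a single utterance rather than a labelled training corpus are all simplifying assumptions.

        import numpy as np
        import librosa
        from sklearn.decomposition import PCA

        y, sr = librosa.load("utterance.wav", sr=16000)            # hypothetical utterance
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=40,
                                             n_fft=400, hop_length=160)
        logmel = np.log(mel + 1e-10).T                             # (frames, 40) log filter-bank features

        pca = PCA(n_components=13)                                 # data-driven linear transform
        features = pca.fit_transform(logmel)                       # projected feature vectors
        print(features.shape)

    In practice the projection matrix would be estimated on a training corpus, and ICA or LDA could be substituted for PCA at the same step.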

  5. Neural tuning to low-level features of speech throughout the perisylvian cortex

    NARCIS (Netherlands)

    Berezutskaya, Y.; Freudenburg, Z.V.; Güçlü, U.; Gerven, M.A.J. van; Ramsey, N.F.

    2017-01-01

    Despite a large body of research, we continue to lack a detailed account of how auditory processing of continuous speech unfolds in the human brain. Previous research showed the propagation of low-level acoustic features of speech from posterior superior temporal gyrus towards anterior superior

  6. Neural tuning to low-level features of speech throughout the perisylvian cortex

    NARCIS (Netherlands)

    Berezutskaya, Julia; Freudenburg, Zachary V.; Güçlü, Umut; van Gerven, Marcel A.J.; Ramsey, Nick F.

    2017-01-01

    Despite a large body of research, we continue to lack a detailed account of how auditory processing of continuous speech unfolds in the human brain. Previous research showed the propagation of low-level acoustic features of speech from posterior superior temporal gyrus toward anterior superior

  7. On speech recognition during anaesthesia

    DEFF Research Database (Denmark)

    Alapetite, Alexandre

    2007-01-01

    This PhD thesis in human-computer interfaces (informatics) studies the case of the anaesthesia record used during medical operations and the possibility to supplement it with speech recognition facilities. Problems and limitations have been identified with the traditional paper-based anaesthesia...... and inaccuracies in the anaesthesia record. Supplementing the electronic anaesthesia record interface with speech input facilities is proposed as one possible solution to a part of the problem. The testing of the various hypotheses has involved the development of a prototype of an electronic anaesthesia record...... interface with speech input facilities in Danish. The evaluation of the new interface was carried out in a full-scale anaesthesia simulator. This has been complemented by laboratory experiments on several aspects of speech recognition for this type of use, e.g. the effects of noise on speech recognition...

  8. Analysis of glottal source parameters in Parkinsonian speech.

    Science.gov (United States)

    Hanratty, Jane; Deegan, Catherine; Walsh, Mary; Kirkpatrick, Barry

    2016-08-01

    Diagnosis and monitoring of Parkinson's disease has a number of challenges as there is no definitive biomarker despite the broad range of symptoms. Research is ongoing to produce objective measures that can either diagnose Parkinson's or act as an objective decision support tool. Recent research on speech based measures have demonstrated promising results. This study aims to investigate the characteristics of the glottal source signal in Parkinsonian speech. An experiment is conducted in which a selection of glottal parameters are tested for their ability to discriminate between healthy and Parkinsonian speech. Results for each glottal parameter are presented for a database of 50 healthy speakers and a database of 16 speakers with Parkinsonian speech symptoms. Receiver operating characteristic (ROC) curves were employed to analyse the results and the area under the ROC curve (AUC) values were used to quantify the performance of each glottal parameter. The results indicate that glottal parameters can be used to discriminate between healthy and Parkinsonian speech, although results varied for each parameter tested. For the task of separating healthy and Parkinsonian speech, 2 out of the 7 glottal parameters tested produced AUC values of over 0.9.
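
    The ROC/AUC analysis described above can be reproduced along the following lines; the parameter values here are synthetic placeholders rather than study data, and scikit-learn's metrics are used for the curve and the area.

        import numpy as np
        from sklearn.metrics import roc_auc_score, roc_curve

        rng = np.random.default_rng(0)
        healthy = rng.normal(0.0, 1.0, 50)        # glottal parameter values, 50 healthy speakers
        parkinsonian = rng.normal(1.5, 1.0, 16)   # glottal parameter values, 16 PD speakers

        scores = np.concatenate([healthy, parkinsonian])
        labels = np.concatenate([np.zeros(50), np.ones(16)])   # 1 = Parkinsonian

        auc = roc_auc_score(labels, scores)
        fpr, tpr, thresholds = roc_curve(labels, scores)
        print(f"AUC = {auc:.2f}")                 # values near 1.0 indicate strong separation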

  9. From Gesture to Speech

    Directory of Open Access Journals (Sweden)

    Maurizio Gentilucci

    2012-11-01

    Full Text Available One of the major problems concerning the evolution of human language is to understand how sounds became associated with meaningful gestures. It has been proposed that the circuit controlling gestures and speech evolved from a circuit involved in the control of arm and mouth movements related to ingestion. This circuit contributed to the evolution of spoken language, moving from a system of communication based on arm gestures. The discovery of the mirror neurons has provided strong support for the gestural theory of speech origin because they offer a natural substrate for the embodiment of language and create a direct link between sender and receiver of a message. Behavioural studies indicate that manual gestures are linked to mouth movements used for syllable emission. Grasping with the hand selectively affected movement of inner or outer parts of the mouth according to syllable pronunciation, and hand postures, in addition to hand actions, influenced the control of mouth grasp and vocalization. Gestures and words are also related to each other. It was found that when producing communicative gestures (emblems), the intention to interact directly with a conspecific was transferred from gestures to words, inducing modification in voice parameters. Transfer effects of the meaning of representational gestures were found on both vocalizations and meaningful words. It has been concluded that the results of our studies suggest the existence of a system relating gesture to vocalization which was a precursor of a more general system reciprocally relating gesture to word.

  10. The Effects of Peer Tutoring on University Students' Success, Speaking Skills and Speech Self-Efficacy in the Effective and Good Speech Course

    Science.gov (United States)

    Uzuner Yurt, Serap; Aktas, Elif

    2016-01-01

    In this study, the effects of the use of peer tutoring in Effective and Good Speech Course on students' success, perception of speech self-efficacy and speaking skills were examined. The study, designed as a mixed pattern in which quantitative and qualitative research approaches were combined, was carried out together with 57 students in 2014 to…

  11. Gesture-speech integration in children with specific language impairment.

    Science.gov (United States)

    Mainela-Arnold, Elina; Alibali, Martha W; Hostetter, Autumn B; Evans, Julia L

    2014-11-01

    Previous research suggests that speakers are especially likely to produce manual communicative gestures when they have relative ease in thinking about the spatial elements of what they are describing, paired with relative difficulty organizing those elements into appropriate spoken language. Children with specific language impairment (SLI) exhibit poor expressive language abilities together with within-normal-range nonverbal IQs. This study investigated whether weak spoken language abilities in children with SLI influence their reliance on gestures to express information. We hypothesized that these children would rely on communicative gestures to express information more often than their age-matched typically developing (TD) peers, and that they would sometimes express information in gestures that they do not express in the accompanying speech. Participants were 15 children with SLI (aged 5;6-10;0) and 18 age-matched TD controls. Children viewed a wordless cartoon and retold the story to a listener unfamiliar with the story. Children's gestures were identified and coded for meaning using a previously established system. Speech-gesture combinations were coded as redundant if the information conveyed in speech and gesture was the same, and non-redundant if the information conveyed in speech was different from the information conveyed in gesture. Children with SLI produced more gestures than children in the TD group; however, the likelihood that speech-gesture combinations were non-redundant did not differ significantly across the SLI and TD groups. In both groups, younger children were significantly more likely to produce non-redundant speech-gesture combinations than older children. The gesture-speech integration system functions similarly in children with SLI and TD, but children with SLI rely more on gesture to help formulate, conceptualize or express the messages they want to convey. This provides motivation for future research examining whether interventions

  12. Atypical speech lateralization in adults with developmental coordination disorder demonstrated using functional transcranial Doppler ultrasound

    OpenAIRE

    Hodgson, Jessica C.; Hudson, John M.

    2016-01-01

    Research using clinical populations to explore the relationship between hemispheric speech lateralization and handedness has focused on individuals with speech and language disorders, such as dyslexia or specific language impairment (SLI). Such work reveals atypical patterns of cerebral lateralization and handedness in these groups compared to controls. There are few studies that examine this relationship in people with motor coordination impairments but without speech or reading deficits, wh...

  13. Speech Remediation of Long-Term Stuttering

    Directory of Open Access Journals (Sweden)

    Betty L. McMicken

    2012-09-01

    Full Text Available This research article describes the remediation of moderate stuttering in an adult client who experienced speech dysfluency for more than 40 years. Treatment took place at an urban residential rehabilitation mission where the client was court sentenced for a history of felonies and current narcotic sales and use. In conjunction with the operant conditioning instruction of the rehabilitation mission, the Ryan Fluency Program was implemented along with the initial use of pause time in response to the complex needs of the client. The article provides an overview of the assessment (Fluency Interviews, Criterion Tests and treatment program. At present, 2.5 years post-initiation of treatment, the client has reported and been observed to have achieved smooth, forward-flowing, natural sounding speech throughout his work environment, family interaction, and daily life.

  14. Phonetic search methods for large speech databases

    CERN Document Server

    Moyal, Ami; Tetariy, Ella; Gishri, Michal

    2013-01-01

    “Phonetic Search Methods for Large Databases” focuses on Keyword Spotting (KWS) within large speech databases. The brief will begin by outlining the challenges associated with Keyword Spotting within large speech databases using dynamic keyword vocabularies. It will then continue by highlighting the various market segments in need of KWS solutions, as well as, the specific requirements of each market segment. The work also includes a detailed description of the complexity of the task and the different methods that are used, including the advantages and disadvantages of each method and an in-depth comparison. The main focus will be on the Phonetic Search method and its efficient implementation. This will include a literature review of the various methods used for the efficient implementation of Phonetic Search Keyword Spotting, with an emphasis on the authors’ own research which entails a comparative analysis of the Phonetic Search method which includes algorithmic details. This brief is useful for resea...

  15. 29 March 2011 - Ninth President of Israel S.Peres welcomed by CERN Director-General R. Heuer who introduces Council President M. Spiro, Director for Accelerators and Technology S. Myers, Head of International Relations F. Pauss, Physics Department Head P. Bloch, Technology Department Head F. Bordry, Human Resources Department Head A.-S. Catherin, Beams Department Head P. Collier, Information Technology Department Head F. Hemmer, Adviser for Israel J. Ellis, Legal Counsel E. Gröniger-Voss, ATLAS Collaboration Spokesperson F. Gianotti, Former ATLAS Collaboration Spokesperson P. Jenni, Weizmann Institute G. Mikenberg, CERN VIP and Protocol Officer W. Korda.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    During his visit he toured the ATLAS underground experimental area with Giora Mikenberg of the ATLAS collaboration, Weizmann Institute of Sciences and Israeli industrial liaison office, Rolf Heuer, CERN’s director-general, and Fabiola Gianotti, ATLAS spokesperson. The president also visited the CERN computing centre and met Israeli scientists working at CERN.

  16. Research

    African Journals Online (AJOL)

    The Rural Clinical School (RCS) of the Faculty of Medicine and Health Sciences ... The NID offers hospitality courses, one of which is Professional Cookery (PC). ... D/HL relates to medical, nursing, occupational therapy and speech therapy.

  17. Speech fluency profile on different tasks for individuals with Parkinson's disease.

    Science.gov (United States)

    Juste, Fabiola Staróbole; Andrade, Claudia Regina Furquim de

    2017-07-20

    To characterize the speech fluency profile of patients with Parkinson's disease. Study participants were 40 individuals of both genders aged 40 to 80 years divided into 2 groups: Research Group - RG (20 individuals with diagnosis of Parkinson's disease) and Control Group - CG (20 individuals with no communication or neurological disorders). For all of the participants, three speech samples involving different tasks were collected: monologue, individual reading, and automatic speech. The RG presented a significantly larger number of speech disruptions, both stuttering-like and typical dysfluencies, and a higher percentage of speech discontinuity in the monologue and individual reading tasks compared with the CG. Both groups presented a reduced number of speech disruptions (stuttering-like and typical dysfluencies) in the automatic speech task; the groups presented similar performance in this task. Regarding speech rate, individuals in the RG presented a lower number of words and syllables per minute compared with those in the CG in all speech tasks. Participants of the RG presented altered parameters of speech fluency compared with those of the CG; however, this change in fluency cannot be considered a stuttering disorder.

  18. 23 May 2016 - Signature of a MoU between the National Nuclear Research Center, Republic of Azerbaijan, and the ALICE Collaboration

    CERN Multimedia

    Bennett, Sophia Elizabeth

    2016-01-01

    From left to right: Head of the Nuclear Physics Department, National Nuclear Research Center A. Rustamov; Chairman, National Nuclear Research Center A. Garibov; Deputy Minister for Communication and High Technology of the Republic of Azerbaijan E. Velizadeh; CERN Director for Research and Computing E. Elsen; ALICE Collaboration Spokesperson P. Giubellino. Also attending: Permanent Representative of the Republic of Azerbaijan to the United Nations Office and other international organizations in Geneva Ambassador V. Sadiqov and Director for International Relations C. Warakaulle.

  19. Visual Speech Fills in Both Discrimination and Identification of Non-Intact Auditory Speech in Children

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve

    2018-01-01

    To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…

  20. Speech enhancement using emotion dependent codebooks

    NARCIS (Netherlands)

    Naidu, D.H.R.; Srinivasan, S.

    2012-01-01

    Several speech enhancement approaches utilize trained models of clean speech data, such as codebooks, Gaussian mixtures, and hidden Markov models. These models are typically trained on neutral clean speech data, without any emotion. However, in practical scenarios, emotional speech is a common
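
    As a rough illustration of the codebook idea (not the authors' method), the sketch below picks the clean-speech codebook entry closest to a noisy power spectrum and uses it to form a Wiener-like gain; the codebook, the noise estimate and the frame data are all synthetic stand-ins.

```python
# Minimal sketch of codebook-driven speech enhancement (illustrative only):
# a clean-speech codebook of spectral envelopes is searched for the entry closest
# to the noisy observation, and that entry drives a Wiener-like gain.

import numpy as np

rng = np.random.default_rng(0)

n_bins = 64
clean_codebook = np.abs(rng.normal(1.0, 0.3, size=(32, n_bins)))  # stand-in for a trained codebook
noise_psd = np.full(n_bins, 0.05)                                  # assumed known/estimated noise PSD

def enhance_frame(noisy_power):
    """Pick the closest clean-speech codebook entry and apply a Wiener-like gain."""
    # Nearest codebook entry in Euclidean distance over power spectra.
    idx = np.argmin(np.sum((clean_codebook - noisy_power) ** 2, axis=1))
    clean_est = clean_codebook[idx]
    gain = clean_est / (clean_est + noise_psd)   # Wiener gain built from the codebook estimate
    return gain * noisy_power

noisy = np.abs(rng.normal(1.0, 0.3, size=n_bins)) + noise_psd
print(enhance_frame(noisy)[:5])
```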

  1. Automated Speech Rate Measurement in Dysarthria

    Science.gov (United States)

    Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc

    2015-01-01

    Purpose: In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. Method: The new algorithm was trained and tested using Dutch…

  2. Is Birdsong More Like Speech or Music?

    Science.gov (United States)

    Shannon, Robert V

    2016-04-01

    Music and speech share many acoustic cues but not all are equally important. For example, harmonic pitch is essential for music but not for speech. When birds communicate, is their song more like speech or music? A new study contrasting pitch and spectral patterns shows that birds perceive their song more like humans perceive speech. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Freedom of Speech Newsletter, September, 1975.

    Science.gov (United States)

    Allen, Winfred G., Jr., Ed.

    The Freedom of Speech Newsletter is the communication medium for the Freedom of Speech Interest Group of the Western Speech Communication Association. The newsletter contains such features as a statement of concern by the National Ad Hoc Committee Against Censorship; Reticence and Free Speech, an article by James F. Vickrey discussing the subtle…

  4. Speech recovery device

    Energy Technology Data Exchange (ETDEWEB)

    Frankle, Christen M.

    2004-04-20

    There is provided an apparatus and method for assisting speech recovery in people with inability to speak due to aphasia, apraxia or another condition with similar effect. A hollow, rigid, thin-walled tube with semi-circular or semi-elliptical cut out shapes at each open end is positioned such that one end mates with the throat/voice box area of the neck of the assistor and the other end mates with the throat/voice box area of the assisted. The speaking person (assistor) makes sounds that produce standing wave vibrations at the same frequency in the vocal cords of the assisted person. Driving the assisted person's vocal cords with the assisted person being able to hear the correct tone enables the assisted person to speak by simply amplifying the vibration of membranes in their throat.

  5. Speech recovery device

    Energy Technology Data Exchange (ETDEWEB)

    Frankle, Christen M.

    2000-10-19

    There is provided an apparatus and method for assisting speech recovery in people with inability to speak due to aphasia, apraxia or another condition with similar effect. A hollow, rigid, thin-walled tube with semi-circular or semi-elliptical cut out shapes at each open end is positioned such that one end mates with the throat/voice box area of the neck of the assistor and the other end mates with the throat/voice box area of the assisted. The speaking person (assistor) makes sounds that produce standing wave vibrations at the same frequency in the vocal cords of the assisted person. Driving the assisted person's vocal cords with the assisted person being able to hear the correct tone enables the assisted person to speak by simply amplifying the vibration of membranes in their throat.

  6. Steganalysis of recorded speech

    Science.gov (United States)

    Johnson, Micah K.; Lyu, Siwei; Farid, Hany

    2005-03-01

    Digital audio provides a suitable cover for high-throughput steganography. At 16 bits per sample and sampled at a rate of 44,100 Hz, digital audio has the bit-rate to support large messages. In addition, audio is often transient and unpredictable, facilitating the hiding of messages. Using an approach similar to our universal image steganalysis, we show that hidden messages alter the underlying statistics of audio signals. Our statistical model begins by building a linear basis that captures certain statistical properties of audio signals. A low-dimensional statistical feature vector is extracted from this basis representation and used by a non-linear support vector machine for classification. We show the efficacy of this approach on LSB embedding and Hide4PGP. While no explicit assumptions about the content of the audio are made, our technique has been developed and tested on high-quality recorded speech.
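
    The following toy sketch mirrors the general pipeline described above: extract a low-dimensional statistical feature vector per clip and train a non-linear SVM to separate clean from stego audio. The features and the simulated LSB-style perturbation are stand-ins chosen for illustration, not the authors' linear-basis statistics.

```python
# Sketch of the steganalysis pipeline described above: extract a low-dimensional
# statistical feature vector per audio clip and classify clean vs stego with a
# non-linear SVM. The features below are simple stand-ins, not the authors' basis.

import numpy as np
from sklearn.svm import SVC

def features(signal):
    """Crude statistical features of an audio signal (illustrative only)."""
    diffs = np.diff(signal)
    return np.array([signal.std(), diffs.std(),
                     np.abs(np.fft.rfft(signal)).mean(),
                     ((signal[:-1] * signal[1:]) < 0).mean()])  # zero-crossing rate

rng = np.random.default_rng(1)
clean = [rng.normal(0, 1, 4096) for _ in range(50)]
# Toy "stego" signals: LSB-style perturbation modelled as tiny additive noise.
stego = [s + rng.uniform(-1e-3, 1e-3, 4096) for s in clean]

X = np.array([features(s) for s in clean + stego])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)  # non-linear SVM classifier
print(clf.score(X, y))                            # training accuracy on the toy data
```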

  7. Rule-Based Storytelling Text-to-Speech (TTS) Synthesis

    Directory of Open Access Journals (Sweden)

    Ramli Izzad

    2016-01-01

    In recent years, various real-life applications such as talking books, gadgets and humanoid robots have drawn attention to research in the area of expressive speech synthesis. Speech synthesis is widely used in various applications, but there is a growing need for expressive speech synthesis, especially for communication and robotics. In this paper, global and local rules are developed to convert neutral speech to storytelling-style speech for the Malay language. In order to generate the rules, modification of prosodic parameters such as pitch, intensity, duration, tempo and pauses is considered. Modification of prosodic parameters is examined by performing prosodic analysis on a story collected from experienced female and male storytellers. The global and local rules are applied at the sentence level and synthesized using HNM. Subjective tests are conducted to evaluate the quality of the synthesized storytelling speech under both rules in terms of naturalness, intelligibility, and similarity to the original storytelling speech. The results showed that the global rule gives better results than the local rule.
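
    To make the idea of a global rule concrete, the sketch below rescales a neutral prosody description (mean F0, F0 contour, durations, intensity, pauses) toward a storytelling style before synthesis; the scaling factors and the data structure are invented for illustration and are not the rules reported in the paper.

```python
# Toy illustration of a global prosody rule of the kind described above: neutral
# pitch, duration, intensity and pause values are rescaled toward a storytelling
# style before synthesis. The scaling factors are invented for illustration.

STORYTELLING_GLOBAL_RULE = {
    "pitch_scale": 1.15,       # raise mean F0
    "pitch_range_scale": 1.4,  # widen the F0 excursion
    "duration_scale": 1.10,    # slow the tempo slightly
    "intensity_scale": 1.05,
    "pause_scale": 1.5,        # lengthen pauses at phrase boundaries
}

def apply_global_rule(neutral_prosody, rule=STORYTELLING_GLOBAL_RULE):
    """Map a neutral prosody description to a storytelling-style one."""
    mean_f0 = neutral_prosody["mean_f0"] * rule["pitch_scale"]
    f0_contour = [mean_f0 + (f - neutral_prosody["mean_f0"]) * rule["pitch_range_scale"]
                  for f in neutral_prosody["f0_contour"]]
    return {
        "mean_f0": mean_f0,
        "f0_contour": f0_contour,
        "durations": [d * rule["duration_scale"] for d in neutral_prosody["durations"]],
        "intensity": neutral_prosody["intensity"] * rule["intensity_scale"],
        "pauses": [p * rule["pause_scale"] for p in neutral_prosody["pauses"]],
    }

neutral = {"mean_f0": 180.0, "f0_contour": [170.0, 185.0, 200.0, 175.0],
           "durations": [0.08, 0.12, 0.10], "intensity": 65.0, "pauses": [0.2, 0.35]}
print(apply_global_rule(neutral)["f0_contour"])
```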

  8. Speech pattern improvement following gingivectomy of excess palatal tissue.

    Science.gov (United States)

    Holtzclaw, Dan; Toscano, Nicholas

    2008-10-01

    Speech disruption secondary to excessive gingival tissue has received scant attention in periodontal literature. Although a few articles have addressed the causes of this condition, documentation and scientific explanation of treatment outcomes are virtually non-existent. This case report describes speech pattern improvements secondary to periodontal surgery and provides a concise review of linguistic and phonetic literature pertinent to the case. A 21-year-old white female with a history of gingival abscesses secondary to excessive palatal tissue presented for treatment. Bilateral gingivectomies of palatal tissues were performed with inverse bevel incisions extending distally from teeth #5 and #12 to the maxillary tuberosities, and large wedges of epithelium/connective tissue were excised. Within the first month of the surgery, the patient noted "changes in the manner in which her tongue contacted the roof of her mouth" and "changes in her speech." Further anecdotal investigation revealed the patient's enunciation of sounds such as "s," "sh," and "k" was greatly improved following the gingivectomy procedure. Palatometric research clearly demonstrates that the tongue has intimate contact with the lateral aspects of the posterior palate during speech. Gingival excess in this and other palatal locations has the potential to alter linguopalatal contact patterns and disrupt normal speech patterns. Surgical correction of this condition via excisional procedures may improve linguopalatal contact patterns which, in turn, may lead to improved patient speech.

  9. Lexical effects on speech production and intelligibility in Parkinson's disease

    Science.gov (United States)

    Chiu, Yi-Fang

    Individuals with Parkinson's disease (PD) often have speech deficits that lead to reduced speech intelligibility. Previous research provides a rich database regarding the articulatory deficits associated with PD including restricted vowel space (Skodda, Visser, & Schlegel, 2011) and flatter formant transitions (Tjaden & Wilding, 2004; Walsh & Smith, 2012). However, few studies consider the effect of higher level structural variables of word usage frequency and the number of similar sounding words (i.e. neighborhood density) on lower level articulation or on listeners' perception of dysarthric speech. The purpose of the study is to examine the interaction of lexical properties and speech articulation as measured acoustically in speakers with PD and healthy controls (HC) and the effect of lexical properties on the perception of their speech. Individuals diagnosed with PD and age-matched healthy controls read sentences with words that varied in word frequency and neighborhood density. Acoustic analysis was performed to compare second formant transitions in diphthongs, an indicator of the dynamics of tongue movement during speech production, across different lexical characteristics. Young listeners transcribed the spoken sentences and the transcription accuracy was compared across lexical conditions. The acoustic results indicate that both PD and HC speakers adjusted their articulation based on lexical properties but the PD group had significant reductions in second formant transitions compared to HC. Both groups of speakers increased second formant transitions for words with low frequency and low density, but the lexical effect is diphthong dependent. The change in second formant slope was limited in the PD group when the required formant movement for the diphthong is small. The data from listeners' perception of the speech by PD and HC show that listeners identified high frequency words with greater accuracy suggesting the use of lexical knowledge during the
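
    As a minimal illustration of the acoustic measure used here, the snippet below fits a straight line to a second-formant track across a diphthong to obtain the F2 transition slope in Hz/s; the track values are invented, and in practice they would come from a formant tracker.

```python
# Sketch of the second-formant (F2) transition measure used as an index of tongue
# movement: the F2 slope (Hz/s) across a diphthong, computed from a formant track.
# The track below is invented; in practice it would come from a formant tracker.

import numpy as np

times = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])     # seconds
f2_track = np.array([1100, 1250, 1420, 1600, 1750, 1900])  # Hz, e.g. a diphthong like /ai/

slope, intercept = np.polyfit(times, f2_track, 1)  # least-squares F2 slope in Hz/s
print(f"F2 transition slope: {slope:.0f} Hz/s")
```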

  10. Segmentation cues in conversational speech: Robust semantics and fragile phonotactics

    Directory of Open Access Journals (Sweden)

    Laurence eWhite

    2012-10-01

    Multiple cues influence listeners’ segmentation of connected speech into words, but most previous studies have used stimuli elicited in careful readings rather than natural conversation. Discerning word boundaries in conversational speech may differ from the laboratory setting. In particular, a speaker’s articulatory effort – hyperarticulation vs hypoarticulation (H&H) – may vary according to communicative demands, suggesting a compensatory relationship whereby acoustic-phonetic cues are attenuated when other information sources strongly guide segmentation. We examined how listeners’ interpretation of segmentation cues is affected by speech style (spontaneous conversation vs read), using cross-modal identity priming. To elicit spontaneous stimuli, we used a map task in which speakers discussed routes around stylised landmarks. These landmarks were two-word phrases in which the strength of potential segmentation cues – semantic likelihood and cross-boundary diphone phonotactics – was systematically varied. Landmark-carrying utterances were transcribed and later re-recorded as read speech. Independent of speech style, we found an interaction between cue valence (favourable/unfavourable) and cue type (phonotactics/semantics). Thus, there was an effect of semantic plausibility, but no effect of cross-boundary phonotactics, indicating that the importance of phonotactic segmentation may have been overstated in studies where lexical information was artificially suppressed. These patterns were unaffected by whether the stimuli were elicited in a spontaneous or read context, even though the difference in speech styles was evident in a main effect. Durational analyses suggested speaker-driven cue trade-offs congruent with an H&H account, but these modulations did not impact on listener behaviour. We conclude that previous research exploiting read speech is reliable in indicating the primacy of lexically-based cues in the segmentation of natural

  11. High-frequency energy in singing and speech

    Science.gov (United States)

    Monson, Brian Bruce

    While human speech and the human voice generate acoustical energy up to (and beyond) 20 kHz, the energy above approximately 5 kHz has been largely neglected. Evidence is accruing that this high-frequency energy contains perceptual information relevant to speech and voice, including percepts of quality, localization, and intelligibility. The present research was an initial step in the long-range goal of characterizing high-frequency energy in singing voice and speech, with particular regard for its perceptual role and its potential for modification during voice and speech production. In this study, a database of high-fidelity recordings of talkers was created and used for a broad acoustical analysis and general characterization of high-frequency energy, as well as specific characterization of phoneme category, voice and speech intensity level, and mode of production (speech versus singing) by high-frequency energy content. Directionality of radiation of high-frequency energy from the mouth was also examined. The recordings were used for perceptual experiments wherein listeners were asked to discriminate between speech and voice samples that differed only in high-frequency energy content. Listeners were also subjected to gender discrimination tasks, mode-of-production discrimination tasks, and transcription tasks with samples of speech and singing that contained only high-frequency content. The combination of these experiments has revealed that (1) human listeners are able to detect very subtle level changes in high-frequency energy, and (2) human listeners are able to extract significant perceptual information from high-frequency energy.
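
    A minimal sketch of the kind of measure this work characterizes is the proportion of spectral energy above 5 kHz; the snippet below computes it for a synthetic signal standing in for the high-fidelity recordings used in the study.

```python
# Sketch of a basic high-frequency energy measure: the fraction of spectral
# energy above 5 kHz in a recording. A synthetic signal stands in for the
# high-fidelity talker recordings described above.

import numpy as np

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
# Synthetic "voice-like" signal: strong low-frequency components plus weak energy at 8 kHz.
signal = (np.sin(2 * np.pi * 200 * t)
          + 0.5 * np.sin(2 * np.pi * 1000 * t)
          + 0.05 * np.sin(2 * np.pi * 8000 * t))

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

hf_ratio = spectrum[freqs >= 5000].sum() / spectrum.sum()
print(f"Energy above 5 kHz: {100 * hf_ratio:.2f}% of total")
```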

  12. Speech enhancement theory and practice

    CERN Document Server

    Loizou, Philipos C

    2013-01-01

    With the proliferation of mobile devices and hearing devices, including hearing aids and cochlear implants, there is a growing and pressing need to design algorithms that can improve speech intelligibility without sacrificing quality. Responding to this need, Speech Enhancement: Theory and Practice, Second Edition introduces readers to the basic problems of speech enhancement and the various algorithms proposed to solve these problems. Updated and expanded, this second edition of the bestselling textbook broadens its scope to include evaluation measures and enhancement algorithms aimed at impr

  13. INTEGRATING MACHINE TRANSLATION AND SPEECH SYNTHESIS COMPONENT FOR ENGLISH TO DRAVIDIAN LANGUAGE SPEECH TO SPEECH TRANSLATION SYSTEM

    Directory of Open Access Journals (Sweden)

    J. SANGEETHA

    2015-02-01

    This paper provides an interface between the machine translation and speech synthesis systems for converting English speech to Tamil text in an English-to-Tamil speech-to-speech translation system. The speech translation system consists of three modules: automatic speech recognition, machine translation and text-to-speech synthesis. Many procedures for the integration of speech recognition and machine translation have been proposed, but the speech synthesis component has not yet been evaluated. In this paper, we focus on the integration of machine translation and speech synthesis, and report a subjective evaluation investigating the impact of the speech synthesis component, the machine translation component, and their integration. Here we implement a hybrid machine translation system (a combination of rule-based and statistical machine translation) and a concatenative syllable-based speech synthesis technique. In order to retain the naturalness and intelligibility of the synthesized speech, Auto Associative Neural Network (AANN) prosody prediction is used in this work. The results of this investigation demonstrate that the naturalness and intelligibility of the synthesized speech are strongly influenced by the fluency and correctness of the translated text.
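
    Purely as a schematic of the three-module pipeline described above (speech recognition, machine translation, speech synthesis), the sketch below wires placeholder components together; the function names, the toy translation table and the fake synthesiser output are assumptions made for illustration, not the paper's hybrid MT or syllable-based AANN components.

```python
# Schematic of the three-module speech-to-speech pipeline described above
# (ASR -> MT -> TTS). All components are placeholders; the real system uses a
# hybrid rule-based/statistical MT and syllable-based concatenative synthesis.

def recognise_english(audio):
    """Placeholder automatic speech recognition: audio -> English text."""
    return "the weather is fine today"

def translate_english_to_tamil(text):
    """Placeholder hybrid MT: English text -> Tamil text (romanised here)."""
    toy_mt = {"the weather is fine today": "indru vaanilai nandraga ullathu"}
    return toy_mt.get(text, text)

def synthesise_tamil(text):
    """Placeholder syllable-based synthesiser: Tamil text -> waveform description."""
    units = text.split()  # crude stand-in for syllabification
    return f"<waveform concatenated from {len(units)} units>"

def speech_to_speech(audio):
    english = recognise_english(audio)
    tamil = translate_english_to_tamil(english)  # the MT/TTS interface studied in the paper
    return synthesise_tamil(tamil)

print(speech_to_speech(audio=None))
```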

  14. Speech of people with autism: Echolalia and echolalic speech

    OpenAIRE

    Błeszyński, Jacek Jarosław

    2013-01-01

    Speech of people with autism is recognised as one of the basic diagnostic, therapeutic and theoretical problems. One of the most common symptoms of autism in children is echolalia, described here as being of different types and severity. This paper presents the results of studies into different levels of echolalia, both in normally developing children and in children diagnosed with autism, discusses the differences between simple echolalia and echolalic speech - which can be considered to b...

  15. Advocate: A Distributed Architecture for Speech-to-Speech Translation

    Science.gov (United States)

    2009-01-01

    tecture, are either wrapped natural-language processing (NLP) components or objects developed from scratch using the architecture’s API. GATE is ... framework, we put together a demonstration Arabic-to-English speech translation system using both internally developed (Arabic speech recognition and MT ... conditions of our Arabic S2S demonstration system described earlier. Once again, the data size was varied and eighty identical requests were

  16. Assessment of Danish-speaking children’s phonological development and speech disorders

    DEFF Research Database (Denmark)

    Clausen, Marit Carolin; Fox-Boyer, Annette

    2018-01-01

    The identification of speech sound disorders is an important everyday task for speech and language therapists (SLTs) working with children. Therefore, assessment tools are needed that are able to correctly identify and diagnose a child with a suspected speech disorder and, furthermore, that provide ... of the existing speech assessments in Denmark showed that none of the materials fulfilled current recommendations identified in the research literature. Therefore, the aim of this paper is to describe the evaluation of a newly constructed instrument for assessing the speech development and disorders of Danish ... with suspected speech disorder (Clausen and Fox-Boyer, in prep). The results indicated that the instrument showed strong inter-examiner reliability for both populations as well as high content and diagnostic validity. Hence, the study showed that the LogoFoVa can be regarded as a reliable and valid tool

  17. Speech perception at the interface of neurobiology and linguistics.

    Science.gov (United States)

    Poeppel, David; Idsardi, William J; van Wassenhove, Virginie

    2008-03-12

    Speech perception consists of a set of computations that take continuously varying acoustic waveforms as input and generate discrete representations that make contact with the lexical representations stored in long-term memory as output. Because the perceptual objects that are recognized by speech perception enter into subsequent linguistic computation, the format that is used for lexical representation and processing fundamentally constrains the speech perceptual processes. Consequently, theories of speech perception must, at some level, be tightly linked to theories of lexical representation. Minimally, speech perception must yield representations that smoothly and rapidly interface with stored lexical items. Adopting the perspective of Marr, we argue and provide neurobiological and psychophysical evidence for the following research programme. First, at the implementational level, speech perception is a multi-time resolution process, with perceptual analyses occurring concurrently on at least two time scales (approx. 20-80 ms, approx. 150-300 ms), commensurate with (sub)segmental and syllabic analyses, respectively. Second, at the algorithmic level, we suggest that perception proceeds on the basis of internal forward models, or uses an 'analysis-by-synthesis' approach. Third, at the computational level (in the sense of Marr), the theory of lexical representation that we adopt is principally informed by phonological research and assumes that words are represented in the mental lexicon in terms of sequences of discrete segments composed of distinctive features. One important goal of the research programme is to develop linking hypotheses between putative neurobiological primitives (e.g. temporal primitives) and those primitives derived from linguistic inquiry, to arrive ultimately at a biologically sensible and theoretically satisfying model of representation and computation in speech.

  18. On the Perception of Speech Sounds as Biologically Significant Signals

    Science.gov (United States)

    Pisoni, David B.

    2012-01-01

    This paper reviews some of the major evidence and arguments currently available to support the view that human speech perception may require the use of specialized neural mechanisms for perceptual analysis. Experiments using synthetically produced speech signals with adults are briefly summarized and extensions of these results to infants and other organisms are reviewed with an emphasis towards detailing those aspects of speech perception that may require some need for specialized species-specific processors. Finally, some comments on the role of early experience in perceptual development are provided as an attempt to identify promising areas of new research in speech perception. PMID:399200

  19. Analysis of vocal signal in its amplitude - time representation. speech synthesis-by-rules

    International Nuclear Information System (INIS)

    Rodet, Xavier

    1977-01-01

    In the first part of this dissertation, natural speech production and the resulting acoustic waveform are examined under various aspects: communication, phonetics, frequency and temporal analysis. Our own study of the direct signal is compared to other research in these different fields, and fundamental features of vocal signals are described. The second part deals with the numerous methods already used for automatic text-to-speech synthesis. In the last part, we present the new speech synthesis-by-rule methods that we have worked out, and we describe in detail the structure of the real-time speech synthesiser that we have implemented on a mini-computer. (author) [fr]

  20. The abstract representations in speech processing.

    Science.gov (United States)

    Cutler, Anne

    2008-11-01

    Speech processing by human listeners derives meaning from acoustic input via intermediate steps involving abstract representations of what has been heard. Recent results from several lines of research are here brought together to shed light on the nature and role of these representations. In spoken-word recognition, representations of phonological form and of conceptual content are dissociable. This follows from the independence of patterns of priming for a word's form and its meaning. The nature of the phonological-form representations is determined not only by acoustic-phonetic input but also by other sources of information, including metalinguistic knowledge. This follows from evidence that listeners can store two forms as different without showing any evidence of being able to detect the difference in question when they listen to speech. The lexical representations are in turn separate from prelexical representations, which are also abstract in nature. This follows from evidence that perceptual learning about speaker-specific phoneme realization, induced on the basis of a few words, generalizes across the whole lexicon to inform the recognition of all words containing the same phoneme. The efficiency of human speech processing has its basis in the rapid execution of operations over abstract representations.

  1. Speech and language intervention in bilinguals

    Directory of Open Access Journals (Sweden)

    Eliane Ramos

    2011-12-01

    Increasingly, speech and language pathologists (SLPs) around the world are faced with the unique set of issues presented by their bilingual clients. Some professional associations in different countries have presented recommendations for assessing and treating bilingual populations. In children, most of the studies have focused on intervention for language and phonology/articulation impairments, and very few focus on stuttering. In general, studies of language intervention tend to agree that intervention in the first language (L1) either increases performance in L2 or does not hinder it. In bilingual adults, monolingual versus bilingual intervention is especially relevant in cases of aphasia; dysarthria in bilinguals has barely been approached. Most studies of cross-linguistic effects in bilingual aphasics have focused on lexical retrieval training. It has been noted that even though a majority of studies have disclosed cross-linguistic generalization from one language to the other, some methodological weaknesses are evident. It is concluded that even though speech and language intervention in bilinguals represents a most important clinical area in speech-language pathology, much more research using larger samples and controlling for potentially confounding variables is evidently required.

  2. Comprehension of synthetic speech and digitized natural speech by adults with aphasia.

    Science.gov (United States)

    Hux, Karen; Knollman-Porter, Kelly; Brown, Jessica; Wallace, Sarah E

    2017-09-01

    Using text-to-speech technology to provide simultaneous written and auditory content presentation may help compensate for chronic reading challenges if people with aphasia can understand synthetic speech output; however, inherent auditory comprehension challenges experienced by people with aphasia may make understanding synthetic speech difficult. This study's purpose was to compare the preferences and auditory comprehension accuracy of people with aphasia when listening to sentences generated with digitized natural speech, Alex synthetic speech (i.e., Macintosh platform), or David synthetic speech (i.e., Windows platform). The methodology required each of 20 participants with aphasia to select one of four images corresponding in meaning to each of 60 sentences comprising three stimulus sets. Results revealed significantly better accuracy given digitized natural speech than either synthetic speech option; however, individual participant performance analyses revealed three patterns: (a) comparable accuracy regardless of speech condition for 30% of participants, (b) comparable accuracy between digitized natural speech and one, but not both, synthetic speech option for 45% of participants, and (c) greater accuracy with digitized natural speech than with either synthetic speech option for remaining participants. Ranking and Likert-scale rating data revealed a preference for digitized natural speech and David synthetic speech over Alex synthetic speech. Results suggest many individuals with aphasia can comprehend synthetic speech options available on popular operating systems. Further examination of synthetic speech use to support reading comprehension through text-to-speech technology is thus warranted. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Perceptual Speech and Paralinguistic Skills of Adolescents with Williams Syndrome

    Science.gov (United States)

    Hargrove, Patricia M.; Pittelko, Stephen; Fillingane, Evan; Rustman, Emily; Lund, Bonnie

    2013-01-01

    The purpose of this research was to compare selected speech and paralinguistic skills of speakers with Williams syndrome (WS) and typically developing peers and to demonstrate the feasibility of providing preexisting databases to students to facilitate graduate research. In a series of three studies, conversational samples of 12 adolescents with…

  4. Speech Mannerisms: Games Clients Play

    Science.gov (United States)

    Morgan, Lewis B.

    1978-01-01

    This article focuses on speech mannerisms often employed by clients in a helping relationship. Eight mannerisms are presented and discussed, as well as possible interpretations. Suggestions are given to help counselors respond to them. (Author)

  5. Speech recognition from spectral dynamics

    Indian Academy of Sciences (India)

    Carrier nature of speech; modulation spectrum; spectral dynamics ... the relationships between phonetic values of sounds and their short-term spectral envelopes .... the number of free parameters that need to be estimated from training data.

  6. Designing speech for a recipient

    DEFF Research Database (Denmark)

    Fischer, Kerstin

    This study asks how speakers adjust their speech to their addressees, focusing on the potential roles of cognitive representations such as partner models, automatic processes such as interactive alignment, and social processes such as interactional negotiation. The nature of addressee orientation......, psycholinguistics and conversation analysis, and offers both overviews of child-directed, foreigner-directed and robot-directed speech and in-depth analyses of the processes involved in adjusting to a communication partner....

  7. National features of speech etiquette

    OpenAIRE

    Nacafova S.

    2017-01-01

    The article shows the differences between the speech etiquette of different peoples. The most important thing is to find a common language with a given interlocutor. Knowledge of national etiquette and national character helps one learn the principles of speech of another nation. The article indicates in which cases certain forms of etiquette are considered acceptable. At the same time, the rules of etiquette are emphasized in the conduct of a dialogue in official meetings and, for example, in the ex...

  8. Censored: Whistleblowers and impossible speech

    OpenAIRE

    Kenny, Kate

    2017-01-01

    What happens to a person who speaks out about corruption in their organization, and finds themselves excluded from their profession? In this article, I argue that whistleblowers experience exclusions because they have engaged in ‘impossible speech’, that is, a speech act considered to be unacceptable or illegitimate. Drawing on Butler’s theories of recognition and censorship, I show how norms of acceptable speech working through recruitment practices, alongside the actions of colleagues, can ...

  9. Speech Function and Speech Role in Carl Fredricksen's Dialogue on Up Movie

    OpenAIRE

    Rehana, Ridha; Silitonga, Sortha

    2013-01-01

    One aim of this article is to show, through a concrete example, how speech function and speech role are used in a movie. The illustrative example is taken from the dialogue of the movie Up. Central to the analysis is the form of dialogue in the movie Up that contains speech functions and speech roles, i.e. statement, offer, question, command, giving, and demanding. 269 dialogues were interpreted by the actor, and the use of speech functions and speech roles was identified.

  10. Introduction. The perception of speech: from sound to meaning.

    Science.gov (United States)

    Moore, Brian C J; Tyler, Lorraine K; Marslen-Wilson, William

    2008-03-12

    Spoken language communication is arguably the most important activity that distinguishes humans from non-human species. This paper provides an overview of the review papers that make up this theme issue on the processes underlying speech communication. The volume includes contributions from researchers who specialize in a wide range of topics within the general area of speech perception and language processing. It also includes contributions from key researchers in neuroanatomy and functional neuro-imaging, in an effort to cut across traditional disciplinary boundaries and foster cross-disciplinary interactions in this important and rapidly developing area of the biological and cognitive sciences.

  11. Cross-language and second language speech perception

    DEFF Research Database (Denmark)

    Bohn, Ocke-Schwen

    2017-01-01

    in cross-language and second language speech perception research: The mapping issue (the perceptual relationship of sounds of the native and the nonnative language in the mind of the native listener and the L2 learner), the perceptual and learning difficulty/ease issue (how this relationship may or may not cause perceptual and learning difficulty), and the plasticity issue (whether and how experience with the nonnative language affects the perceptual organization of speech sounds in the mind of L2 learners). One important general conclusion from this research is that perceptual learning is possible at all...

  12. 16 March 2009 - HRH Princess Maha Chakri Sirindhorn, Kingdom of Thailand, visiting CMS experimental area and LHC tunnel with Coordinator for external relations F. Pauss and Collaboration Spokesperson T. Virdee.

    CERN Multimedia

    Maximilien Brice

    2009-01-01

    Photo 1: Relations with Non-Member States E. Tsesmelis, CMS Collaboration Spokesperson T. Virdee, HRH Princess Maha Chakri Sirindhorn and Coordinator for External Relations F. Pauss, in the CMS experimental area. Photo 2-12: Arrival of HRH at building 160; posy presented to HRH by E. and F. Breedon; welcome line: Director-General R. Heuer, who introduces S. Bertolucci, F. Pauss, E. Tsesmelis, A. de Roeck, R. Breedon and Protocol Officer W. Korda. Photo 13-26: Presentation by Director-General R. Heuer and Head of Education R. Landua. Photo 27-30: Welcome at CMS by Spokesperson T. Virdee. Photo 31-43: LHC tunnel visit. Photo 44-60: CMS underground area visit. Photo 61-63: HRH signs the guest book in the SCX5 conference room. Photo 64-69: Signature of an expression of interest between SLRI and CMS. Photo 75-88: Final discussion with Coordinator for External Relations F. Pauss and Director-General R. Heuer.

  13. Measuring Articulatory Error Consistency in Children with Developmental Apraxia of Speech

    Science.gov (United States)

    Betz, Stacy K.; Stoel-Gammon, Carol

    2005-01-01

    Error inconsistency is often cited as a characteristic of children with speech disorders, particularly developmental apraxia of speech (DAS); however, few researchers operationally define error inconsistency and the definitions that do exist are not standardized across studies. This study proposes three formulas for measuring various aspects of…

  14. A Demonstration Project of Speech Training for the Preschool Cleft Palate Child. Final Report.

    Science.gov (United States)

    Harrison, Robert J.

    To ascertain the efficacy of a program of language and speech stimulation for the preschool cleft palate child, a research and demonstration project was conducted using 137 subjects (ages 18 to 72 months) with defects involving the soft palate. Their language and speech skills were matched with those of a noncleft peer group revealing that the…

  15. Effects of Audio-Visual Information on the Intelligibility of Alaryngeal Speech

    Science.gov (United States)

    Evitts, Paul M.; Portugal, Lindsay; Van Dine, Ami; Holler, Aline

    2010-01-01

    Background: There is minimal research on the contribution of visual information on speech intelligibility for individuals with a laryngectomy (IWL). Aims: The purpose of this project was to determine the effects of mode of presentation (audio-only, audio-visual) on alaryngeal speech intelligibility. Method: Twenty-three naive listeners were…

  16. The Contributions of Speech Communication Scholarship to the Study of Terrorism: Review and Preview.

    Science.gov (United States)

    Dowling, Ralph E.

    Based on the premise that existing research into terrorism shows great promise, this paper notes that, despite widespread recognition of terrorism's communicative dimensions, few studies have been done from within the discipline of speech communication. The paper defines the discipline of speech communication and rhetorical studies, reviews the…

  17. Learning to Match Auditory and Visual Speech Cues: Social Influences on Acquisition of Phonological Categories

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Grossmann, Tobias

    2015-01-01

    Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential…

  18. Application of expressive speech in the TTS system with cepstral description

    Czech Academy of Sciences Publication Activity Database

    Přibil, Jiří; Přibilová, Anna

    -, č. 5042 (2008), s. 200-212. ISSN 0302-9743. R&D Projects: GA AV ČR 1QS108040569. Grant - others: MŠk(SK) 1/3107/06. Institutional research plan: CEZ:AV0Z20670512. Keywords: speech synthesis * speech processing. Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering

  19. Speech Language Assessments in Te Reo in a Primary School Maori Immersion Unit

    Science.gov (United States)

    Naidoo, Kershni

    2012-01-01

    This research originated from the need for a speech and language therapy assessment in te reo Maori for a particular child who attended a Maori immersion unit. A Speech and Language Therapy te reo assessment had already been developed but it needed to be revised and normative data collected. Discussions and assessments were carried out in a…

  20. Perceptions of Speech and Language Therapy Amongst UK School and College Students: Implications for Recruitment

    Science.gov (United States)

    Greenwood, Nan; Wright, Jannet A.; Bithell, Christine

    2006-01-01

    Background: Communication disorders affect both sexes and people from all ethnic groups, but members of minority ethnic groups and males in the UK are underrepresented in the speech and language therapy profession. Research in the area of recruitment is limited, but a possible explanation is poor awareness and understanding of speech and language…

  1. Dopamine Regulation of Human Speech and Bird Song: A Critical Review

    Science.gov (United States)

    Simonyan, Kristina; Horwitz, Barry; Jarvis, Erich D.

    2012-01-01

    To understand the neural basis of human speech control, extensive research has been done using a variety of methodologies in a range of experimental models. Nevertheless, several critical questions about learned vocal motor control still remain open. One of them is the mechanism(s) by which neurotransmitters, such as dopamine, modulate speech and…

  2. Reconceptualizing Practice with Multilingual Children with Speech Sound Disorders: People, Practicalities and Policy

    Science.gov (United States)

    Verdon, Sarah; McLeod, Sharynne; Wong, Sandie

    2015-01-01

    Background: The speech and language therapy profession is required to provide services to increasingly multilingual caseloads. Much international research has focused on the challenges of speech and language therapists' (SLTs) practice with multilingual children. Aims: To draw on the experience and knowledge of experts in the field to: (1)…

  3. Multilingual Aspects of Speech Sound Disorders in Children. Communication Disorders across Languages

    Science.gov (United States)

    McLeod, Sharynne; Goldstein, Brian

    2012-01-01

    Multilingual Aspects of Speech Sound Disorders in Children explores both multilingual and multicultural aspects of children with speech sound disorders. The 30 chapters have been written by 44 authors from 16 different countries about 112 languages and dialects. The book is designed to translate research into clinical practice. It is divided into…

  4. Profile of Australian Preschool Children with Speech Sound Disorders at Risk for Literacy Difficulties

    Science.gov (United States)

    McLeod, Sharynne; Crowe, Kathryn; Masso, Sarah; Baker, Elise; McCormack, Jane; Wren, Yvonne; Roulstone, Susan; Howland, Charlotte

    2017-01-01

    Speech sound disorders are a common communication difficulty in preschool children. Teachers indicate difficulty identifying and supporting these children. The aim of this research was to describe speech and language characteristics of children identified by their parents and/or teachers as having possible communication concerns. 275 Australian 4-…

  5. Intervention for Children with Severe Speech Disorder: A Comparison of Two Approaches

    Science.gov (United States)

    Crosbie, Sharon; Holm, Alison; Dodd, Barbara

    2005-01-01

    Background: Children with speech disorder are a heterogeneous group (e.g. in terms of severity, types of errors and underlying causal factors). Much research has ignored this heterogeneity, giving rise to contradictory intervention study findings. This situation provides clinical motivation to identify the deficits in the speech-processing chain…

  6. The Reliability of Methodological Ratings for speechBITE Using the PEDro-P Scale

    Science.gov (United States)

    Murray, Elizabeth; Power, Emma; Togher, Leanne; McCabe, Patricia; Munro, Natalie; Smith, Katherine

    2013-01-01

    Background: speechBITE (http://www.speechbite.com) is an online database established in order to help speech and language therapists gain faster access to relevant research that can be used in clinical decision-making. In addition to containing more than 3000 journal references, the database also provides methodological ratings on the PEDro-P (an…

  7. Novel Techniques for Dialectal Arabic Speech Recognition

    CERN Document Server

    Elmahdy, Mohamed; Minker, Wolfgang

    2012-01-01

    Novel Techniques for Dialectal Arabic Speech describes approaches to improve automatic speech recognition for dialectal Arabic. Since speech resources for dialectal Arabic speech recognition are very sparse, the authors describe how existing Modern Standard Arabic (MSA) speech data can be applied to dialectal Arabic speech recognition, while assuming that MSA is always a second language for all Arabic speakers. In this book, Egyptian Colloquial Arabic (ECA) has been chosen as a typical Arabic dialect. ECA is the first ranked Arabic dialect in terms of number of speakers, and a high quality ECA speech corpus with accurate phonetic transcription has been collected. MSA acoustic models were trained using news broadcast speech. In order to cross-lingually use MSA in dialectal Arabic speech recognition, the authors have normalized the phoneme sets for MSA and ECA. After this normalization, they have applied state-of-the-art acoustic model adaptation techniques like Maximum Likelihood Linear Regression (MLLR) and M...
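
    As an aside, the MLLR adaptation mentioned above can be pictured as moving every Gaussian mean of the source (MSA) acoustic model with a shared affine transform, mu' = A·mu + b. In the sketch below, A and b are random stand-ins; in practice they are estimated by maximum likelihood from dialect adaptation data.

```python
# Illustration of the MLLR idea mentioned above: Gaussian mean vectors of the
# source-language (MSA) acoustic model are adapted to the dialect with a shared
# affine transform  mu' = A @ mu + b. The transform here is invented; in practice
# A and b are estimated by maximum likelihood from dialect adaptation data.

import numpy as np

rng = np.random.default_rng(2)
msa_means = rng.normal(size=(8, 3))            # 8 Gaussian means in a 3-dim feature space

A = np.eye(3) + 0.1 * rng.normal(size=(3, 3))  # stand-in for the estimated MLLR matrix
b = 0.2 * rng.normal(size=3)                   # stand-in for the estimated bias

adapted_means = msa_means @ A.T + b            # apply mu' = A mu + b to every mean
print(adapted_means.shape)
```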

  8. Non-right handed primary progressive apraxia of speech.

    Science.gov (United States)

    Botha, Hugo; Duffy, Joseph R; Whitwell, Jennifer L; Strand, Edythe A; Machulda, Mary M; Spychalla, Anthony J; Tosakulwong, Nirubol; Senjem, Matthew L; Knopman, David S; Petersen, Ronald C; Jack, Clifford R; Lowe, Val J; Josephs, Keith A

    2018-07-15

    In recent years a large and growing body of research has greatly advanced our understanding of primary progressive apraxia of speech. Handedness has emerged as one potential marker of selective vulnerability in degenerative diseases. This study evaluated the clinical and imaging findings in non-right handed compared to right handed participants in a prospective cohort diagnosed with primary progressive apraxia of speech. A total of 30 participants were included. Compared to the expected rate in the population, there was a higher prevalence of non-right handedness among those with primary progressive apraxia of speech (6/30, 20%). Small group numbers meant that these results did not reach statistical significance, although the effect sizes were moderate-to-large. There were no clinical differences between right handed and non-right handed participants. Bilateral hypometabolism was seen in primary progressive apraxia of speech compared to controls, with non-right handed participants showing more right hemispheric involvement. This is the first report of a higher rate of non-right handedness in participants with isolated apraxia of speech, which may point to an increased vulnerability for developing this disorder among non-right handed participants. This challenges prior hypotheses about a relative protective effect of non-right handedness for tau-related neurodegeneration. We discuss potential avenues for future research to investigate the relationship between handedness and motor disorders more generally. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Experimental comparison between speech transmission index, rapid speech transmission index, and speech intelligibility index.

    Science.gov (United States)

    Larm, Petra; Hongisto, Valtteri

    2006-02-01

    During the acoustical design of, e.g., auditoria or open-plan offices, it is important to know how speech can be perceived in various parts of the room. Different objective methods have been developed to measure and predict speech intelligibility, and these have been extensively used in various spaces. In this study, two such methods were compared, the speech transmission index (STI) and the speech intelligibility index (SII). Also the simplification of the STI, the room acoustics speech transmission index (RASTI), was considered. These quantities are all based on determining an apparent speech-to-noise ratio on selected frequency bands and summing them using a specific weighting. For comparison, some data were needed on the possible differences of these methods resulting from the calculation scheme and also measuring equipment. Their prediction accuracy was also of interest. Measurements were made in a laboratory having adjustable noise level and absorption, and in a real auditorium. It was found that the measurement equipment, especially the selection of the loudspeaker, can greatly affect the accuracy of the results. The prediction accuracy of the RASTI was found acceptable, if the input values for the prediction are accurately known, even though the studied space was not ideally diffuse.
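
    The shared core of these indices, as the abstract notes, is a band-wise apparent speech-to-noise ratio that is clipped, normalized and combined with band-importance weights. The sketch below illustrates that computation; the octave bands, weights and levels are illustrative values, not those prescribed by the STI or SII standards.

```python
# Sketch of the shared idea behind STI/RASTI/SII noted above: an apparent
# speech-to-noise ratio is computed per frequency band, clipped to a +/-15 dB
# range, mapped to [0, 1], and combined with band-importance weights. The octave
# bands and weights below are illustrative, not the exact values of any standard.

import numpy as np

bands_hz = [125, 250, 500, 1000, 2000, 4000, 8000]
weights = np.array([0.06, 0.10, 0.15, 0.22, 0.22, 0.15, 0.10])   # sums to 1.0

speech_db = np.array([60, 62, 65, 63, 58, 52, 45], dtype=float)  # band speech levels
noise_db = np.array([55, 50, 48, 45, 44, 40, 38], dtype=float)   # band noise levels

snr_db = np.clip(speech_db - noise_db, -15.0, 15.0)  # apparent SNR, clipped to +/-15 dB
band_index = (snr_db + 15.0) / 30.0                   # map each band to the 0..1 range
index = float(np.sum(weights * band_index))           # weighted sum across bands

print(f"Intelligibility-style index: {index:.2f}")    # 0 = poor, 1 = excellent
```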

  10. Neural pathways for visual speech perception

    Directory of Open Access Journals (Sweden)

    Lynne E Bernstein

    2014-12-01

    This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread, diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

  11. Variability and Intelligibility of Clarified Speech to Different Listener Groups

    Science.gov (United States)

    Silber, Ronnie F.

    Two studies examined the modifications that adult speakers make in speech to disadvantaged listeners. Previous research that has focused on speech to deaf individuals and to young children has shown that adults clarify speech when addressing these two populations. Acoustic measurements suggest that the signal undergoes similar changes for both populations. Perceptual tests corroborate these results for the deaf population, but are nonsystematic in developmental studies. The differences in the findings for these populations and the nonsystematic results in the developmental literature may be due to methodological factors. The present experiments addressed these methodological questions. Studies of speech to hearing-impaired listeners have used read nonsense sentences, for which speakers received explicit clarification instructions and feedback, while in the child literature, excerpts of real-time conversations were used. Therefore, linguistic samples were not precisely matched. In this study, experiments used various linguistic materials. Experiment 1 used a children's story; experiment 2, nonsense sentences. Four mothers read both types of material in four ways: (1) in "normal" adult speech, (2) in "babytalk," (3) under the clarification instructions used in the hearing-impaired studies (instructed clear speech), and (4) in (spontaneous) clear speech without instruction. No extra practice or feedback was given. Sentences were presented to 40 normal-hearing college students with and without simultaneous masking noise. Results were separately tabulated for content and function words, and analyzed using standard statistical tests. The major finding in the study was individual variation in speaker intelligibility. "Real world" speakers vary in their baseline intelligibility. The four speakers also showed unique patterns of intelligibility as a function of each independent variable. Results were as follows. Nonsense sentences were less intelligible than story

  12. Studies of Speech Disorders in Schizophrenia. History and State-of-the-art

    Directory of Open Access Journals (Sweden)

    Shedovskiy E. F.

    2015-08-01

    The article reviews studies of speech disorders in schizophrenia. The authors trace the historical course and characterize studies in several areas: the psychopathological (speech disorders as psychopathological symptoms, their description and taxonomy) and the psychological (neuro- and pathopsychological perspective analysis); some modern foreign works covering a variety of approaches to the study of speech disorders in endogenous mental disorders are analyzed separately. Disorders and features of speech are among the most striking manifestations of schizophrenia, along with impaired thinking (Savitskaya A. V., Mikirtumov B. E.). With all the variety of symptoms, speech disorders in schizophrenia can be classified and organized. The few clinical psychological studies of speech activity in schizophrenia include work on the generation of standard speech utterances, features of the verbal associative process, and speed parameters of speech utterances. Special attention is given to integrated research in the mainstream of biological psychiatry and genetic trends. It is shown that, over more than half a century of study, the distinctive character of speech pathology in schizophrenia has received some coverage in the psychiatric and psychological literature and continues to generate interest within the modern integrated multidisciplinary approach.

  13. Neural networks supporting audiovisual integration for speech: A large-scale lesion study.

    Science.gov (United States)

    Hickok, Gregory; Rogalsky, Corianne; Matchin, William; Basilakos, Alexandra; Cai, Julia; Pillay, Sara; Ferrill, Michelle; Mickelsen, Soren; Anderson, Steven W; Love, Tracy; Binder, Jeffrey; Fridriksson, Julius

    2018-06-01

    Auditory and visual speech information are often strongly integrated, resulting in perceptual enhancements for audiovisual (AV) speech over audio alone and sometimes yielding compelling illusory fusion percepts when AV cues are mismatched, the McGurk-MacDonald effect. Previous research has identified three candidate regions thought to be critical for AV speech integration: the posterior superior temporal sulcus (STS), early auditory cortex, and the posterior inferior frontal gyrus. We assess the causal involvement of these regions (and others) in the first large-scale (N = 100) lesion-based study of AV speech integration. Two primary findings emerged. First, behavioral performance and lesion maps for AV enhancement and illusory fusion measures indicate that classic metrics of AV speech integration are not necessarily measuring the same process. Second, lesions involving superior temporal auditory, lateral occipital visual, and multisensory zones in the STS are the most disruptive to AV speech integration. Further, when AV speech integration fails, the nature of the failure (auditory vs visual capture) can be predicted from the location of the lesions. These findings show that AV speech processing is supported by unimodal auditory and visual cortices as well as multimodal regions such as the STS at their boundary. Motor-related frontal regions do not appear to play a role in AV speech integration. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Crosslinguistic Application of English-Centric Rhythm Descriptors in Motor Speech Disorders

    Science.gov (United States)

    Liss, Julie M.; Utianski, Rene; Lansford, Kaitlin

    2014-01-01

    Background Rhythmic disturbances are a hallmark of motor speech disorders, in which the motor control deficits interfere with the outward flow of speech and by extension speech understanding. As the functions of rhythm are language-specific, breakdowns in rhythm should have language-specific consequences for communication. Objective The goals of this paper are to (i) provide a review of the cognitive-linguistic role of rhythm in speech perception in a general sense and crosslinguistically; (ii) present new results of lexical segmentation challenges posed by different types of dysarthria in American English, and (iii) offer a framework for crosslinguistic considerations for speech rhythm disturbances in the diagnosis and treatment of communication disorders associated with motor speech disorders. Summary This review presents theoretical and empirical reasons for considering speech rhythm as a critical component of communication deficits in motor speech disorders, and addresses the need for crosslinguistic research to explore language-universal versus language-specific aspects of motor speech disorders. PMID:24157596

  15. Acquired apraxia of speech: a review.

    Science.gov (United States)

    Knollman-Porter, Kelly

    2008-01-01

    Apraxia of speech (AOS) is an acquired adult neurogenic communication disorder that often occurs following stroke. The purpose of this article is to review current research studies addressing the diagnostic and therapeutic management of AOS. Traditional definitions and characteristics are compared with current features that assist in the differential diagnosis of AOS. Prognostic indicators are reviewed in addition to how neuroplasticity may impact treatment in chronic AOS. Treatment techniques discussed include the articulatory kinematic approach (AKA), use of augmentative/alternative communication devices, intersystemic facilitation/reorganization, and constraint-induced therapy. Finally, the need to address functional communication through support groups, outside the therapeutic environment, is discussed.

  16. 13 September 2013 - Chairman of the Board of Directors of the von Karman Institute Kingdom of Belgium J.-P. Contzen visiting the ATLAS experimental cavern with ATLAS Former Spokesperson P. Jenni; visiting the LHC tunnel at Point 1 with Technology Department N. Delruelle and signing the guest book with Technology Department Head F. Bordry. International Relations Adviser T. Kurtyka present.

    CERN Multimedia

    Laurent Egli (visit)

    2013-01-01

    13 September 2013 - Chairman of the Board of Directors of the von Karman Institute Kingdom of Belgium J.-P. Contzen visiting the ATLAS experimental cavern with ATLAS Former Spokesperson P. Jenni; visiting the LHC tunnel at Point 1 with Technology Department N. Delruelle and signing the guest book with Technology Department Head F. Bordry. International Relations Adviser T. Kurtyka present.

  17. 29 November 2013 - U. Humphrey Orjiako Nigerian Ambassador Extraordinary and Plenipotentiary Permanent Mission to the United Nations Office and other international organisations in Geneva signing the Guest Book with Head of International Relations R. Voss, visiting the LHC tunnel at Point 2 and the ALICE cavern with ALICE Collaboration Deputy Spokesperson Y. Schutz.

    CERN Multimedia

    Noemi Caraban

    2013-01-01

    29 November 2013 - U. Humphrey Orjiako Nigerian Ambassador Extraordinary and Plenipotentiary Permanent Mission to the United Nations Office and other international organisations in Geneva signing the Guest Book with Head of International Relations R. Voss, visiting the LHC tunnel at Point 2 and the ALICE cavern with ALICE Collaboration Deputy Spokesperson Y. Schutz.

  18. 27 February 2012 - Thai Minister of Science and Technology P. Suraswadi with International Relations Adviser E. Tsesmelis and CMS Collaboration Former Deputy Spokesperson A. De Roeck signing the guest book in the 6th floor conference room, building 60 and visiting CMS underground experimental area at LHC Point 5.

    CERN Multimedia

    Maximilien Brice

    2012-01-01

    27 February 2012 - Thai Minister of Science and Technology P. Suraswadi with International Relations Adviser E. Tsesmelis and CMS Collaboration Former Deputy Spokesperson A. De Roeck signing the guest book in the 6th floor conference room, building 60 and visiting CMS underground experimental area at LHC Point 5.

  19. 10 January 2011 - Former Minister of Science and Technology Honorary Member of the National Academy of Engineering of Korea J.-U.SEO in the CMS underground experimental area with Deputy Spokesperson J. Incandela, Former Adviser D. Blechschmidt and Adviser R. Voss.

    CERN Multimedia

    Maximilien brice

    2011-01-01

    10 January 2011 - Former Minister of Science and Technology Honorary Member of the National Academy of Engineering of Korea J.-U.SEO in the CMS underground experimental area with Deputy Spokesperson J. Incandela, Former Adviser D. Blechschmidt and Adviser R. Voss.

  20. 9th January 2012 - Indonesian Extraordinary and Plenipotentiary Ambassador Triansyah Djani to the United Nations, WTO and other International Organisations in Geneva signing the guest book with Head of International Relations F. Pauss and Adviser E. Tsesmelis, visiting the LHC tunnel at Point 5 and CMS underground experimental area with Collaboration Spokesperson J. Incandela.

    CERN Document Server

    Estelle Spirig

    2012-01-01

    9th January 2012 - Indonesian Extraordinary and Plenipotentiary Ambassador Triansyah Djani to the United Nations, WTO and other International Organisations in Geneva signing the guest book with Head of International Relations F. Pauss and Adviser E. Tsesmelis, visiting the LHC tunnel at Point 5 and CMS underground experimental area with Collaboration Spokesperson J. Incandela.

  1. 8 May 2013 - Swedish European Spallation Source Chief Executive Officer J. H. Yeck in the ATLAS visitor centre and experimental cavern with Collaboration Spokesperson D. Charlton (also present M. Nessi, R. Garoby and E. Tsesmelis); signing the guest book with International Relations Adviser E. Tsesmelis.

    CERN Multimedia

    Jean-Claude Gadmer

    2013-01-01

    8 May 2013 - Swedish European Spallation Source Chief Executive Officer J. H. Yeck in the ATLAS visitor centre and experimental cavern with Collaboration Spokesperson D. Charlton (also present M. Nessi, R. Garoby and E. Tsesmelis); signing the guest book with International Relations Adviser E. Tsesmelis.

  2. 15 January 2010 - Vice-Chancellor & Chief Executive C. Snowden, University of Surrey, United Kingdom and Mrs Snowden visiting ALICE exhibition and experimental underground area with Collaboration Spokesperson J. Schukraft and Beams Department Head P. Collier; Signature of the guest book with CERN Director-General R. Heuer.

    CERN Multimedia

    Maximilien Brice

    2010-01-01

    15 January 2010 - Vice-Chancellor & Chief Executive C. Snowden, University of Surrey, United Kingdom and Mrs Snowden visiting ALICE exhibition and experimental underground area with Collaboration Spokesperson J. Schukraft and Beams Department Head P. Collier; Signature of the guest book with CERN Director-General R. Heuer.

  3. 10 February 2012 - Permanent Representative of the Republic of India to the Conference on Disarmament, United Nations Office at Geneva Ambassador Mehta signing the guest book with International Relations Adviser R. Voss; in the LHC tunnel at Point 2 and ALICE underground experimental area with Collaboration Deputy Spokesperson Y. Schutz.

    CERN Document Server

    Maximilien Brice

    2012-01-01

    10 February 2012 - Permanent Representative of the Republic of India to the Conference on Disarmament, United Nations Office at Geneva Ambassador Mehta signing the guest book with International Relations Adviser R. Voss; in the LHC tunnel at Point 2 and ALICE underground experimental area with Collaboration Deputy Spokesperson Y. Schutz.

  4. 17 October 2013 - C. Ashton High Representative of the European Union for Foreign Affairs and Security Policy, Vice-President of the European Commission visiting the ATLAS cavern with ATLAS Collaboration Spokesperson D. Charlton; visiting the LHC tunnel at Point 1 with Technology Department Head F. Bordry and signing the Guest book with CERN Director-General R. Heuer.

    CERN Multimedia

    Maximilien Brice

    2013-01-01

    17 October 2013 - C. Ashton High Representative of the European Union for Foreign Affairs and Security Policy, Vice-President of the European Commission visiting the ATLAS cavern with ATLAS Collaboration Spokesperson D. Charlton; visiting the LHC tunnel at Point 1 with Technology Department Head F. Bordry and signing the Guest book with CERN Director-General R. Heuer.

  5. 5 June 2013 - European Union Ambassador to Switzerland and the Principality of Liechtenstein R. Jones in the ATLAS cavern with ATLAS Collaboration Deputy Spokesperson T. Wengler, in the LHC tunnel at Point 1 with Technology Department Head F. Bordry and signing the guest book with Director-General R. Heuer. Head of the EU Projects Office S. Stavrev present.

    CERN Multimedia

    Jean-Claude Gadmer

    2013-01-01

    5 June 2013 - European Union Ambassador to Switzerland and the Principality of Liechtenstein R. Jones in the ATLAS cavern with ATLAS Collaboration Deputy Spokesperson T. Wengler, in the LHC tunnel at Point 1 with Technology Department Head F. Bordry and signing the guest book with Director-General R. Heuer. Head of the EU Projects Office S. Stavrev present.

  6. 1 April 2011 - Croatian Rudjer Boskovic Institute (RBI)Director-General D. Ramljak visiting CMS Control Centre in Meyrin with Collaboration Spokesperson G. Tonelli; signing the guest book with Head of International Relations F. Pauss and visiting LHC superconducting magnet test hall with L. Walckiers.

    CERN Multimedia

    Maximilien brice

    2011-01-01

    1 April 2011 - Croatian Rudjer Boskovic Institute (RBI)Director-General D. Ramljak visiting CMS Control Centre in Meyrin with Collaboration Spokesperson G. Tonelli; signing the guest book with Head of International Relations F. Pauss and visiting LHC superconducting magnet test hall with L. Walckiers.

  7. 18 January 2011 - The British Royal Academy of Engineering in the LHC tunnel with CMS Collaboration Spokesperson G. Tonelli and Beams Department Head P. Collier; in the CERN Control Centre with P. Collier and LHC superconducting magnet test hall with Technology Department Head F. Bordry.

    CERN Multimedia

    Jean-Claude Gadmer

    2011-01-01

    18 January 2011 - The British Royal Academy of Engineering in the LHC tunnel with CMS Collaboration Spokesperson G. Tonelli and Beams Department Head P. Collier; in the CERN Control Centre with P. Collier and LHC superconducting magnet test hall with Technology Department Head F. Bordry.

  8. 30 August 2011 - Médecins sans frontières International President U. K Karunakara signing the guest book with Head of International Relations F. Pauss and Adviser for Life Sciences M. Dosanjh; visiting CMS underground experimental area with Collaboration Spokesperson G. Tonelli.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    30 August 2011 - Médecins sans frontières International President U. K Karunakara signing the guest book with Head of International Relations F. Pauss and Adviser for Life Sciences M. Dosanjh; visiting CMS underground experimental area with Collaboration Spokesperson G. Tonelli.

  9. 23rd June 2010 - University of Bristol Head of the Aerospace Engineering Department and Professor of Aerospace Dynamics N. Lieven visiting CERN control centre with Beams Department Head P. Collier, visiting the LHC superconducting magnet test hall with R. Veness and CMS control centre with Collaboration Spokesperson G. Tonelli and CMS User J. Goldstein.

    CERN Multimedia

    Jean-Claude Gadmer

    2010-01-01

    23rd June 2010 - University of Bristol Head of the Aerospace Engineering Department and Professor of Aerospace Dynamics N. Lieven visiting CERN control centre with Beams Department Head P. Collier, visiting the LHC superconducting magnet test hall with R. Veness and CMS control centre with Collaboration Spokesperson G. Tonelli and CMS User J. Goldstein.

  10. 14 November 2013 - Director of Indian Institute of Technology Indore P. Mathur with members of the Indian community working at CERN; visiting the LHC tunnel at Point 2, the ALICE experimental area and SM18 with ALICE Collaboration Spokesperson, Istituto Nazionale Fisica Nucleare P. Giubellino and Technology Department, Accelerator Beam Transfer Group Leader V. Mertens

    CERN Multimedia

    Jean-Claude Gadmer

    2013-01-01

    14 November 2013 - Director of Indian Institute of Technology Indore P. Mathur with members of the Indian community working at CERN; visiting the LHC tunnel at Point 2, the ALICE experimental area and SM18 with ALICE Collaboration Spokesperson, Istituto Nazionale Fisica Nucleare P. Giubellino and Technology Department, Accelerator Beam Transfer Group Leader V. Mertens

  11. 20th May 2010 - Malaysian Minister for Science, Technology and Innovation H. F. B. H. Yusof signing the guest book with Coordinator for External Relations F. Pauss and CMS Collaboration Deputy Spokesperson A. De Roeck; visiting the LHC superconducting magnet test hall with Technology Department Head F. Bordry; throughout accompanied by CERN Advisers J. Ellis and E. Tsesmelis.

    CERN Document Server

    Maximilien brice

    2010-01-01

    20th May 2010 - Malaysian Minister for Science, Technology and Innovation H. F. B. H. Yusof signing the guest book with Coordinator for External Relations F. Pauss and CMS Collaboration Deputy Spokesperson A. De Roeck; visiting the LHC superconducting magnet test hall with Technology Department Head F. Bordry; throughout accompanied by CERN Advisers J. Ellis and E. Tsesmelis.

  12. 21 January 2008 - Vice-President of the Human Rights Commission Z. Muhsin Al Hussein, Ambassador to United Nations A. Attar and their delegation from Saudi Arabia, visiting the ATLAS experimental cavern with Collaboration Spokesperson P. Jenni and Technical Coordinator M. Nessi.

    CERN Multimedia

    Claudia Marcelloni

    2008-01-01

    21 January 2008 - Vice-President of the Human Rights Commission Z. Muhsin Al Hussein, Ambassador to United Nations A. Attar and their delegation from Saudi Arabia, visiting the ATLAS experimental cavern with Collaboration Spokesperson P. Jenni and Technical Coordinator M. Nessi.

  13. 22nd September 2010 - Korean Minister of Education, Science and Technology J.-H. Lee signing the guest book and exchanging gifts with CERN Director-General R. Heuer and Head of International Relations F. Pauss; visiting ALICE exhibition with Collaboration Spokesperson J. Schukraft; accompanied throughout by Adviser R. Voss.

    CERN Multimedia

    Teams : M. Brice ; JC Gadmer

    2010-01-01

    22nd September 2010 - Korean Minister of Education, Science and Technology J.-H. Lee signing the guest book and exchanging gifts with CERN Director-General R. Heuer and Head of International Relations F. Pauss; visiting ALICE exhibition with Collaboration Spokesperson J. Schukraft; accompanied throughout by Adviser R. Voss.

  14. 23rd June 2011 - US NASA Administrator General C. Bolden visiting the AMS control room with Collaboration Spokesperson S. Ting and CERN Director-General R. Heuer; Tree planting ceremony in front of building 946, Prevessin site, hosting the AMS control room (CERN-HI-1106159 01 -87)

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    88-115: visiting CMS control centre in Meyrin with Collaboration Spokesperson-elect J. Incandela and on-shift scientists accompanied by Head of International Relations F. Pauss; 116-119: signature of photographs at the main building steps 120-136: with First Swiss Astronaut C. Nicollier at the main building steps.

  15. 31st August 2011 - Government of Japan R. Chubachi, Executive Member of the Council for Science and Technology Policy, Cabinet Office, Vice Chairman, Representative Corporate Executive Officer and Member of the Board, Sony Corporation, visiting the ATLAS experimental area with Former Collaboration Spokesperson P. Jenni and Senior physicist T. Kondo.

    CERN Multimedia

    Raphaël Piguet

    2011-01-01

    31st August 2011 - Government of Japan R. Chubachi, Executive Member of the Council for Science and Technology Policy, Cabinet Office, Vice Chairman, Representative Corporate Executive Officer and Member of the Board, Sony Corporation, visiting the ATLAS experimental area with Former Collaboration Spokesperson P. Jenni and Senior physicist T. Kondo.

  16. Chairman of the DELL Board of Directors and Chief Executive Officer Michael S. Dell with CERN Director-General R. Heuer and in front of the ATLAS detector (centre) with ATLAS Deputy Spokesperson A. Lankford (left) and Information Technology Department Head F. Hemmer on 26th January 2010.

    CERN Multimedia

    Maximilien Brice

    2010-01-01

    Chairman of the DELL Board of Directors and Chief Executive Officer Michael S. Dell with CERN Director-General R. Heuer and in front of the ATLAS detector (centre) with ATLAS Deputy Spokesperson A. Lankford (left) and Information Technology Department Head F. Hemmer on 26th January 2010.

  17. 6 November 2013 - Permanent Representative of Chile to the United Nations Office and Other international organizations in Geneva Ambassador J. Balmaceda Serigos signing the guest book with Adviser for Latin America J. Salicio Diez; visiting the ATLAS experimental cavern with Spokesperson D. Charlton (Spouse, Son and First Secretary present).

    CERN Multimedia

    Anna Pantelia

    2013-01-01

    6 November 2013 - Permanent Representative of Chile to the United Nations Office and Other international organizations in Geneva Ambassador J. Balmaceda Serigos signing the guest book with Adviser for Latin America J. Salicio Diez; visiting the ATLAS experimental cavern with Spokesperson D. Charlton (Spouse, Son and First Secretary present).

  18. 18 June 2012 - DST Global Founder Y. Milner signing the guest book with Head of International Relations F. Pauss; visiting the AD facility in building 193 with AEGIS Collaboration Spokesperson M. Doser and Adviser for the Russian Federation T. Kurtyka. Managing Director I. Osborne also present with Mrs J. Milner and DST Global A. Lebedkina.

    CERN Multimedia

    Maximilien Brice

    2012-01-01

    18 June 2012 - DST Global Founder Y. Milner signing the guest book with Head of International Relations F. Pauss; visiting the AD facility in building 193 with AEGIS Collaboration Spokesperson M. Doser and Adviser for the Russian Federation T. Kurtyka. Managing Director I. Osborne also present with Mrs J. Milner and DST Global A. Lebedkina.

  19. 16 December 2013 - Hooke Professor of Experimental Physics and Pro Vice Chancellor University of Oxford Prof. I. Walmsley visiting the ATLAS cavern with ATLAS Collaboration Deputy Spokesperson T. Wengler, Physics Department, ATLAS Collaboration P. Wells and Chair, CMS Collaboration Board, Oxford University and Purdue University I. Shipsey

    CERN Document Server

    Anna Pantelia

    2013-01-01

    16 December 2013 - Hooke Professor of Experimental Physics and Pro Vice Chancellor University of Oxford Prof. I. Walmsley visiting the ATLAS cavern with ATLAS Collaboration Deputy Spokesperson T. Wengler, Physics Department, ATLAS Collaboration P. Wells and Chair, CMS Collaboration Board, Oxford University and Purdue University I. Shipsey

  20. 11 August 2008 - Member of the House of Councillors M. Naito (The National Diet of Japan, The Democratic Party of Japan) visiting the ATLAS experiment control room with Collaboration Spokesperson P. Jenni and ATLAS Muon Project Leader G. Mikenberg. Family photograph with CERN Japanese scientists in front of the ATLAS surface building.

    CERN Multimedia

    Maximilien Brice

    2008-01-01

    11 August 2008 - Member of the House of Councillors M. Naito (The National Diet of Japan, The Democratic Party of Japan) visiting the ATLAS experiment control room with Collaboration Spokesperson P. Jenni and ATLAS Muon Project Leader G. Mikenberg. Family photograph with CERN Japanese scientists in front of the ATLAS surface building.

  1. 25 June 2010 - Founder Chairman of the Japanese Science and Technology in Society Forum K. Omi signing the guest book with Head of International Relations F. Pauss, Adviser J. Ellis and Director-General R. Heuer; in the ATLAS visitor centre with Former Collaboration Spokesperson P. Jenni.

    CERN Multimedia

    2010-01-01

    25 June 2010 - Founder Chairman of the Japanese Science and Technology in Society Forum K. Omi signing the guest book with Head of International Relations F. Pauss, Adviser J. Ellis and Director-General R. Heuer; in the ATLAS visitor centre with Former Collaboration Spokesperson P. Jenni.

  2. 18 January 2011 - Ing. Vittorio Malacalza, ASG Superconductors S.p.A, Italy in the LHC superconducting magnet test hall with Deputy Department Head L. Rossi, in the LHC tunnel at Point 5 and CMS experimental area with Spokesperson G. Tonelli.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    18 January 2011 - Ing. Vittorio Malacalza, ASG Superconductors S.p.A, Italy in the LHC superconducting magnet test hall with Deputy Department Head L. Rossi, in the LHC tunnel at Point 5 and CMS experimental area with Spokesperson G. Tonelli.

  3. 8 April 2011 - Brazilian Minister of State for Science and Technology A. Mercadante Oliva signing the guest book with CERN Director-General R. Heuer and Head of International Relations F. Pauss; in the ATLAS visitor centre with Collaboration Former Spokesperson P. Jenni; visiting LHC superconducting magnet test hall with J.M. Jimenez.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    8 April 2011 - Brazilian Minister of State for Science and Technology A. Mercadante Oliva signing the guest book with CERN Director-General R. Heuer and Head of International Relations F. Pauss; in the ATLAS visitor centre with Collaboration Former Spokesperson P. Jenni; visiting LHC superconducting magnet test hall with J.M. Jimenez.

  4. Ian Taylor MBE MP Chairman Parliamentary and Scientific Committee, United Kingdom (second from left) with (from left to right) CMS Technical Coordinator A. Ball, CMS Spokesperson Tejinder (Jim) Virdee and Adviser to the Director-General J. Ellis on 2 November 2009.

    CERN Multimedia

    Maximilien Brice; CMS

    2009-01-01

    Ian Taylor MBE MP Chairman Parliamentary and Scientific Committee, United Kingdom (second from left) with (from left to right) CMS Technical Coordinator A. Ball, CMS Spokesperson Tejinder (Jim) Virdee and Adviser to the Director-General J. Ellis on 2 November 2009.

  5. 28 June 2012 - Ambassador I. Piperkov, Permanent Representative of Bulgaria to the United Nations Office and other international organisations in Geneva and Spouse visiting CMS experimental area with Collaboration Deputy Spokesperson T. Camporesi and CERN Control Centre with M. Benedikt. Senior physicist L. Litov accompanies the delegation throughout.

    CERN Multimedia

    Jean-Claude Gadmer

    2012-01-01

    28 June 2012 - Ambassador I. Piperkov, Permanent Representative of Bulgaria to the United Nations Office and other international organisations in Geneva and Spouse visiting CMS experimental area with Collaboration Deputy Spokesperson T. Camporesi and CERN Control Centre with M. Benedikt. Senior physicist L. Litov accompanies the delegation throughout.

  6. 27 June 2012 - Ambassador K. Pierce, Permanent Representative of the United Kingdom of Great Britain and Northern Ireland to the United Nations Office and other international organisations in Geneva visiting the LHC tunnel at Point 5 with Department Head P. Collier and CMS control room with Former Collaboration Spokesperson J. Virdee.

    CERN Multimedia

    Laurent Egli

    2012-01-01

    27 June 2012 - Ambassador K. Pierce, Permanent Representative of the United Kingdom of Great Britain and Northern Ireland to the United Nations Office and other international organisations in Geneva visiting the LHC tunnel at Point 5 with Department Head P. Collier and CMS control room with Former Collaboration Spokesperson J. Virdee.

  7. 13 February 2012 - World Economic Forum Founder and Executive Chairman K. Schwab and Chairperson and Co-Founder Schwab Foundation for Social Entrepreneurship H. Schwab (Mrs) in the ATLAS experimental area at LHC Point 1 with Collaboration Former Spokesperson P. Jenni; signing the guest book with CERN Director-General R. Heuer and Head of International Relations F. Pauss.

    CERN Multimedia

    Jean-Claude Gadmer

    2012-01-01

    13 February 2012 - World Economic Forum Founder and Executive Chairman K. Schwab and Chairperson and Co-Founder Schwab Foundation for Social Entrepreneurship H. Schwab (Mrs) in the ATLAS experimental area at LHC Point 1 with Collaboration Former Spokesperson P. Jenni; signing the guest book with CERN Director-General R. Heuer and Head of International Relations F. Pauss.

  8. 12 December 2013 - Sir Konstantin Novoselov, Nobel Prize in Physics 2010, signing the guest book with International Relations Adviser E. Tsesmelis; visiting the ATLAS experimental cavern with Spokesperson D. Charlton; in the LHC tunnel with Technology Department Head F. Bordry. I. Antoniadis, CERN Theory Group Leader, accompanies throughout.

    CERN Multimedia

    Anna Pantelia

    2013-01-01

    12 December 2013 - Sir Konstantin Novoselov, Nobel Prize in Physics 2010, signing the guest book with International Relations Adviser E. Tsesmelis; visiting the ATLAS experimental cavern with Spokesperson D. Charlton; in the LHC tunnel with Technology Department Head F. Bordry. I. Antoniadis, CERN Theory Group Leader, accompanies throughout.

  9. 16 August 2013 - Bulgarian Minister of Education and Sciences A. Klisarova visiting the LHC tunnel with S. Russenschuck and CMS experimental cavern with Deputy Spokesperson T. Camporesi and V. Genchev; signing the guest book with CERN Director-General R. Heuer. Accompanied throughout by P. Hristov, L. Litov, R. Voss and Z. Zaharieva.

    CERN Multimedia

    Anna Pantelia

    2013-01-01

    16 August 2013 - Bulgarian Minister of Education and Sciences A. Klisarova visiting the LHC tunnel with S. Russenschuck and CMS experimental cavern with Deputy Spokesperson T. Camporesi and V. Genchev; signing the guest book with CERN Director-General R. Heuer. Accompanied throughout by P. Hristov, L. Litov, R. Voss and Z. Zaharieva.

  10. 23rd June 2010 - Australian Nuclear Science and Technology Organization Chief Executive Officer A. Paterson signing a Joint Statement of Intent and the guest book with CERN Director-General R. Heuer; in the ATLAS visitor centre and control room with Former Collaboration Spokesperson P. Jenni.

    CERN Multimedia

    Maximilien Brice

    2010-01-01

    23rd June 2010 - Australian Nuclear Science and Technology Organization Chief Executive Officer A. Paterson signing a Joint Statement of Intent and the guest book with CERN Director-General R. Heuer; in the ATLAS visitor centre and control room with Former Collaboration Spokesperson P. Jenni.

  11. 7 May 2013 - Ambassador of the Federal Republic of Germany to Switzerland and Liechtenstein P. Gottwald and Mrs Gottwald in the ATLAS experimental cavern and LHC tunnel with Collaboration Deputy Spokesperson T. Wengler and German Scientists A. Schopper and V. Mertens.

    CERN Multimedia

    Maximilien Brice

    2013-01-01

    7 May 2013 - Ambassador of the Federal Republic of Germany to Switzerland and Liechtenstein P. Gottwald and Mrs Gottwald in the ATLAS experimental cavern and LHC tunnel with Collaboration Deputy Spokesperson T. Wengler and German Scientists A. Schopper and V. Mertens.

  12. 16 December 2013 - P. Lavie President of the Technion Institute of Technology in Israel visiting the ATLAS cavern with ATLAS Deputy Spokesperson T. Wengler; visiting the LHC tunnel at Point 1 with Technology Department Head F. Bordry and signing the Guest Book with CERN Director-General R. Heuer. G. Mikenberg, E. Rabinovici, Y. Rozen and S. Tarem present throughout.

    CERN Multimedia

    Jean-Claude Gadmer

    2013-01-01

    16 December 2013 - P. Lavie President of the Technion Institute of Technology in Israel visiting the ATLAS cavern with ATLAS Deputy Spokesperson T. Wengler; visiting the LHC tunnel at Point 1 with Technology Department Head F. Bordry and signing the Guest Book with CERN Director-General R. Heuer. G. Mikenberg, E. Rabinovici, Y. Rozen and S. Tarem present throughout.

  13. H.E. Dr Danilo Türk President of the Republic of Slovenia (second from right) visiting the ATLAS detector with, from left to right, Ambassador A. Logar, Spokesperson F. Gianotti, Director-General R. Heuer, First Lady B. Miklič Türk and ATLAS Slovenian national contactperson M. Mikuz.

    CERN Multimedia

    Maximilien Brice

    2009-01-01

    H.E. Dr Danilo Türk President of the Republic of Slovenia (second from right) visiting the ATLAS detector with, from left to right, Ambassador A. Logar, Spokesperson F. Gianotti, Director-General R. Heuer, First Lady B. Miklič Türk and ATLAS Slovenian national contactperson M. Mikuz.

  14. 9 April 2013 - Minister for Universities and Science United Kingdom of Great Britain and Northern Ireland D. Willetts in the ATLAS experimental cavern with ATLAS Collaboration Spokesperson D. Charlton and in the LHC tunnel at Point 1 with Beams Department Head P. Collier. Director for Accelerators and Technology S. Myers, Editor at the Communication Group K. Kahle and Beams Department Engineer R. Veness present.

    CERN Multimedia

    Jean-Claude Gadmer

    2013-01-01

    9 April 2013 - Minister for Universities and Science United Kingdom of Great Britain and Northern Ireland D. Willetts in the ATLAS experimental cavern with ATLAS Collaboration Spokesperson D. Charlton and in the LHC tunnel at Point 1 with Beams Department Head P. Collier. Director for Accelerators and Technology S. Myers, Editor at the Communication Group K. Kahle and Beams Department Engineer R. Veness present.

  15. 8 April 2013 - Indian Hon'ble Minister for Ministry of Science & Technology and Ministry of Earth Sciences Shri Sudini Jaipal Reddy in the LHC tunnel with K. Foraz, visiting the CMS cavern with Technical Coordinator A. Ball and Former Spokesperson T. Virdee, signing the guest book with Director-General R. Heuer.

    CERN Multimedia

    Samuel Morier-Genoud

    2013-01-01

    8 April 2013 - Indian Hon'ble Minister for Ministry of Science & Technology and Ministry of Earth Sciences Shri Sudini Jaipal Reddy in the LHC tunnel with K. Foraz, visiting the CMS cavern with Technical Coordinator A. Ball and Former Spokesperson T. Virdee, signing the guest book with Director-General R. Heuer.

  16. The Honourable Lawrence Gonzi Prime Minister of Malta visiting CMS experiment on 10 January 2008, from left to right Ministry of Finance Permanent Secretary A. Camilleri, Ambassador V. Camilleri, Maltese Representative at CERN N. Sammut, Prime Minister L. Gonzi, CMS Collaboration Spokesperson T. Virdee, CERN Director-General R. Aymar, University of Malta Rector J. Camilleri, Adviser to Director-General E. Tsesmelis.

    CERN Multimedia

    Maximilien Brice

    2008-01-01

    The Honourable Lawrence Gonzi Prime Minister of Malta visiting CMS experiment on 10 January 2008, from left to right Ministry of Finance Permanent Secretary A. Camilleri, Ambassador V. Camilleri, Maltese Representative at CERN N. Sammut, Prime Minister L. Gonzi, CMS Collaboration Spokesperson T. Virdee, CERN Director-General R. Aymar, University of Malta Rector J. Camilleri, Adviser to Director-General E. Tsesmelis.

  17. 23 July 2013 - Italian Director-General for Prevention G. Ruocco and Director-General for European and International Relations Ministry of Health D. Roderigo visiting the ATLAS experimental cavern with ATLAS Deputy Spokesperson B. Heinemann. Life Sciences Section M. Cirilli and Life Sciences Adviser M. Dosanjh present.

    CERN Multimedia

    Anna Pantelia

    2013-01-01

    23 July 2013 - Italian Director-General for Prevention G. Ruocco and Director-General for European and International Relations Ministry of Health D. Roderigo visiting the ATLAS experimental cavern with ATLAS Deputy Spokesperson B. Heinemann. Life Sciences Section M. Cirilli and Life Sciences Adviser M. Dosanjh present.

  18. 4th February 2011 - Austrian Academy of Sciences President H. Denk visiting CMS underground area with Collaboration Spokesperson G. Tonelli, Austrian Academy of Sciences Secretary General A. Suppan, CERN Head of International Relations F. Pauss and Director, High Energy Physics Laboratory, Austrian Academy of Sciences C. Fabjan.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    4th February 2011 - Austrian Academy of Sciences President H. Denk visiting CMS underground area with Collaboration Spokesperson G. Tonelli, Austrian Academy of Sciences Secretary General A. Suppan, CERN Head of International Relations F. Pauss and Director, High Energy Physics Laboratory, Austrian Academy of Sciences C. Fabjan.

  19. 30 January 2012 - Ecuadorian Ambassador Gallegos Chiriboga, Permanent Representative to the United Nations Office and other International Organisations at Geneva and San Francisco de Quito University Vice Chancellor C. Montùfar visiting CMS surface facilities and underground experimental area with CMS Collaboration L. Sulak and Collaboration Deputy Spokesperson T. Camporesi, throughout accompanied by Head of International Relations F. Pauss.

    CERN Multimedia

    Michael Hoch

    2012-01-01

    30 January 2012 - Ecuadorian Ambassador Gallegos Chiriboga, Permanent Representative to the United Nations Office and other International Organisations at Geneva and San Francisco de Quito University Vice Chancellor C. Montùfar visiting CMS surface facilities and underground experimental area with CMS Collaboration L. Sulak and Collaboration Deputy Spokesperson T. Camporesi, throughout accompanied by Head of International Relations F. Pauss.

  20. 24 February 2012 - Polish Vice-Rectors AGH University of Science and Technology Cracow visiting the ATLAS underground experimental area with Former Collaboration Spokesperson P. Jenni; Vice Rector J. Lis signs a collaboration agreement with A. Unnervik; Adviser T. Kurtyka and A. Siemko accompany the delegation throughout.

    CERN Multimedia

    Jean-Claude Gadmer

    2012-01-01

    24 February 2012 - Polish Vice-Rectors AGH University of Science and Technology Cracow visiting the ATLAS underground experimental area with Former Collaboration Spokesperson P. Jenni; Vice Rector J. Lis signs a collaboration agreement with A. Unnervik; Adviser T. Kurtyka and A. Siemko accompany the delegation throughout.

  1. 30 August 2013 - Senior Vice Minister for Foreign Affairs in Japan M. Matsuyama signing the guest book with CERN Director-General; visiting the ATLAS experimental cavern with ATLAS Spokesperson D. Charlton and visiting the LHC tunnel at Point 1 with former ATLAS Japan national contact physicist T. Kondo. R. Voss and K. Yoshida present throughout.

    CERN Multimedia

    Jean-Claude Gadmer

    2013-01-01

    30 August 2013 - Senior Vice Minister for Foreign Affairs in Japan M. Matsuyama signing the guest book with CERN Director-General; visiting the ATLAS experimental cavern with ATLAS Spokesperson D. Charlton and visiting the LHC tunnel at Point 1 with former ATLAS Japan national contact physicist T. Kondo. R. Voss and K. Yoshida present throughout.

  2. 19 September 2011 - Japan Science and Technology Agency President K. Kitazawa visiting the LHC superconducting magnet test hall with engineer M. Bajko; the ATLAS visitor centre with Collaboration Former Spokesperson P. Jenni and Senior Scientist T. Kondo; signing the guest book with Adviser R. Voss and Head of International Relations F. Pauss.

    CERN Multimedia

    2011-01-01

    19 September 2011 - Japan Science and Technology Agency President K. Kitazawa visiting the LHC superconducting magnet test hall with engineer M. Bajko; the ATLAS visitor centre with Collaboration Former Spokesperson P. Jenni and Senior Scientist T. Kondo; signing the guest book with Adviser R. Voss and Head of International Relations F. Pauss.

  3. 24 May 2013 - Rector of the Polish Stanislaw Staszic AGH University of Science and Technology T. Slomka in the LHC tunnel at Point 8 with Senior Polish Staff Member A. Siemko, in LHCb experimental cavern with LHCb Collaboration Spokesperson P. Campana and signing the guest book with Director-General R. Heuer. Adviser for Eastern Europe T. Kurtyka present.

    CERN Multimedia

    Jean-Claude Gadmer

    2013-01-01

    24 May 2013 - Rector of the Polish Stanislaw Staszic AGH University of Science and Technology T. Slomka in the LHC tunnel at Point 8 with Senior Polish Staff Member A. Siemko, in LHCb experimental cavern with LHCb Collaboration Spokesperson P. Campana and signing the guest book with Director-General R. Heuer. Adviser for Eastern Europe T. Kurtyka present.

  4. 31 January 2012 - Pakistan COMSATS Executive Director I. E. Qureshi visiting the LHC tunnel at Point 2 with ALICE Collaboration Spokesperson P. Giubellino and International Relations Adviser R. Voss; Exchange of gifts and signature of the guest book with CERN Director-General R. Heuer.

    CERN Multimedia

    Maximilien Brice

    2012-01-01

    31 January 2012 - Pakistan COMSATS Executive Director I. E. Qureshi visiting the LHC tunnel at Point 2 with ALICE Collaboration Spokesperson P. Giubellino and International Relations Adviser R. Voss; Exchange of gifts and signature of the guest book with CERN Director-General R. Heuer.

  5. An evaluation of the effectiveness of PROMPT therapy in improving speech production accuracy in six children with cerebral palsy.

    Science.gov (United States)

    Ward, Roslyn; Leitão, Suze; Strauss, Geoff

    2014-08-01

    This study evaluates perceptual changes in speech production accuracy in six children (3-11 years) with moderate-to-severe speech impairment associated with cerebral palsy before, during, and after participation in a motor-speech intervention program (Prompts for Restructuring Oral Muscular Phonetic Targets). An A1BCA2 single-subject research design was implemented. Subsequent to the baseline phase (phase A1), phase B targeted each participant's first intervention priority on the PROMPT motor-speech hierarchy. Phase C then targeted one level higher. Weekly speech probes were administered, containing trained and untrained words at the two levels of intervention, plus an additional level that served as a control goal. The speech probes were analysed for motor-speech movement parameters and perceptual accuracy. Analysis of the speech probe data showed that all participants recorded a statistically significant change. Between phases A1-B and B-C, 6/6 and 4/6 participants, respectively, recorded a statistically significant increase in performance level on the motor-speech movement patterns targeted during that phase of intervention. The preliminary data presented in this study contribute evidence supporting the use of a treatment approach aligned with dynamic systems theory to improve the motor-speech movement patterns and speech production accuracy in children with cerebral palsy.

  6. Perceptual restoration of degraded speech is preserved with advancing age.

    Science.gov (United States)

    Saija, Jefta D; Akyürek, Elkan G; Andringa, Tjeerd C; Başkent, Deniz

    2014-02-01

    Cognitive skills, such as processing speed, memory functioning, and the ability to divide attention, are known to diminish with aging. The present study shows that, despite these changes, older adults can successfully compensate for degradations in speech perception. Critically, the older participants of this study were not pre-selected for high performance on cognitive tasks, but only screened for normal hearing. We measured the compensation for speech degradation using phonemic restoration, where intelligibility of degraded speech is enhanced using top-down repair mechanisms. Linguistic knowledge, Gestalt principles of perception, and expectations based on situational and linguistic context are used to effectively fill in the inaudible masked speech portions. A positive compensation effect was previously observed only with young normal hearing people, but not with older hearing-impaired populations, leaving the question whether the lack of compensation was due to aging or due to age-related hearing problems. Older participants in the present study showed poorer intelligibility of degraded speech than the younger group, as expected from previous reports of aging effects. However, in conditions that induce top-down restoration, a robust compensation was observed. Speech perception by the older group was enhanced, and the enhancement effect was similar to that observed with the younger group. This effect was even stronger with slowed-down speech, which gives more time for cognitive processing. Based on previous research, the likely explanations for these observations are that older adults can overcome age-related cognitive deterioration by relying on linguistic skills and vocabulary that they have accumulated over their lifetime. Alternatively, or simultaneously, they may use different cerebral activation patterns or exert more mental effort. This positive finding on top-down restoration skills by the older individuals suggests that new cognitive training methods
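
    Phonemic-restoration paradigms of the kind described above typically compare speech interrupted by silent gaps with the same speech whose gaps are filled by louder noise, and measure how much intelligibility the noise filling restores. A minimal sketch of such stimulus generation is given below; the interruption rate, duty cycle, and noise level are illustrative assumptions, not the parameters used in the study.

        # Illustrative sketch: generating periodically interrupted speech,
        # with gaps either left silent or filled with noise, as is typical
        # in phonemic-restoration paradigms. Interruption rate, duty cycle,
        # and noise level are illustrative assumptions, not the study's.
        import numpy as np

        def interrupt(signal: np.ndarray, sr: int, rate_hz: float = 2.0,
                      duty: float = 0.5, fill_noise: bool = False,
                      noise_rms: float = 0.1) -> np.ndarray:
            """Silence (or noise-fill) periodic portions of `signal`."""
            t = np.arange(len(signal)) / sr
            keep = (t * rate_hz) % 1.0 < duty          # on/off square wave
            if fill_noise:
                filler = np.random.randn(len(signal)) * noise_rms
            else:
                filler = np.zeros(len(signal))
            return np.where(keep, signal, filler)

        # Usage: `speech` would be a mono waveform at sampling rate `sr`.
        sr = 16000
        speech = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # placeholder tone
        silent_gaps = interrupt(speech, sr, fill_noise=False)
        noise_filled = interrupt(speech, sr, fill_noise=True)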

  7. Why the Left Hemisphere Is Dominant for Speech Production: Connecting the Dots

    Directory of Open Access Journals (Sweden)

    Harvey Martin Sussman

    2015-12-01

    Full Text Available Evidence from seemingly disparate areas of speech/language research is reviewed to form a unified theoretical account for why the left hemisphere is specialized for speech production. Research findings from studies investigating hemispheric lateralization of infant babbling, the primacy of the syllable in phonological structure, rhyming performance in split-brain patients, rhyming ability and phonetic categorization in children diagnosed with developmental apraxia of speech, rules governing exchange errors in spoonerisms, organizational principles of neocortical control of learned motor behaviors, and multi-electrode recordings of human neuronal responses to speech sounds are described and common threads highlighted. It is suggested that the emergence, in developmental neurogenesis, of a hard-wired, syllabically-organized, neural substrate representing the phonemic sound elements of one’s language, particularly the vocalic nucleus, is the crucial factor underlying the left hemisphere’s dominance for speech production.

  8. Do age-related word retrieval difficulties appear (or disappear) in connected speech?

    Science.gov (United States)

    Kavé, Gitit; Goral, Mira

    2017-09-01

    We conducted a comprehensive literature review of studies of word retrieval in connected speech in healthy aging and reviewed relevant aphasia research that could shed light on the aging literature. Four main hypotheses guided the review: (1) Significant retrieval difficulties would lead to reduced output in connected speech. (2) Significant retrieval difficulties would lead to a more limited lexical variety in connected speech. (3) Significant retrieval difficulties would lead to an increase in word substitution errors and in pronoun use as well as to greater dysfluency and hesitation in connected speech. (4) Retrieval difficulties on tests of single-word production would be associated with measures of word retrieval in connected speech. Studies on aging did not confirm these four hypotheses, unlike studies on aphasia that generally did. The review suggests that future research should investigate how context facilitates word production in old age.

  9. A comparison of several computational auditory scene analysis (CASA) techniques for monaural speech segregation.

    Science.gov (United States)

    Zeremdini, Jihen; Ben Messaoud, Mohamed Anouar; Bouzid, Aicha

    2015-09-01

    Humans have the ability to easily separate composite speech and to form perceptual representations of the constituent sources in an acoustic mixture thanks to their ears. In recent years, researchers have attempted to build computer models of these high-level functions of the auditory system. The segregation of composite speech remains a very challenging problem for these researchers. In our case, we are interested in approaches that address monaural speech segregation. For this purpose, we study in this paper computational auditory scene analysis (CASA) to segregate speech from monaural mixtures. CASA is the reproduction of the source organization achieved by listeners. It is based on two main stages: segmentation and grouping. In this work, we present and compare several studies that have used CASA for speech separation and recognition.
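
    CASA systems of the kind compared above work in two stages: segment a time-frequency representation of the mixture, then group the segments belonging to the same source, an outcome often idealized in the literature as a binary time-frequency mask. The sketch below illustrates the masking idea with an oracle (ideal) binary mask computed from known target and interference; real CASA systems estimate such masks blindly from gammatone-filterbank features and pitch/onset grouping cues, so this is only a didactic approximation.

        # Illustrative sketch of a CASA-style time-frequency mask: an ideal
        # binary mask computed from a known target and interference, then
        # applied to the mixture. Real CASA systems estimate such masks
        # blindly; the oracle mask here only shows the segmentation/grouping idea.
        import numpy as np
        from scipy.signal import stft, istft

        def ideal_binary_mask(target, interference, fs, nperseg=512, lc_db=0.0):
            _, _, T = stft(target, fs, nperseg=nperseg)
            _, _, I = stft(interference, fs, nperseg=nperseg)
            snr_db = 20 * np.log10(np.abs(T) + 1e-12) - 20 * np.log10(np.abs(I) + 1e-12)
            return (snr_db > lc_db).astype(float)      # 1 = keep unit, 0 = discard

        def apply_mask(mixture, mask, fs, nperseg=512):
            _, _, M = stft(mixture, fs, nperseg=nperseg)
            _, x = istft(M * mask, fs, nperseg=nperseg)
            return x

        fs = 16000
        t = np.arange(fs) / fs
        target = np.sin(2 * np.pi * 300 * t)           # placeholder "speech"
        interference = 0.5 * np.random.randn(fs)       # placeholder noise
        mixture = target + interference
        segregated = apply_mask(mixture, ideal_binary_mask(target, interference, fs), fs)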

  10. Detection of target phonemes in spontaneous and read speech

    NARCIS (Netherlands)

    Mehta, G.; Cutler, A.

    1988-01-01

    Although spontaneous speech occurs more frequently in most listeners' experience than read speech, laboratory studies of human speech recognition typically use carefully controlled materials read from a script. The phonological and prosodic characteristics of spontaneous and read speech differ

  11. Who Decides What Is Acceptable Speech on Campus? Why Restricting Free Speech Is Not the Answer.

    Science.gov (United States)

    Ceci, Stephen J; Williams, Wendy M

    2018-05-01

    Recent protests on dozens of campuses have led to the cancellation of controversial talks, and violence has accompanied several of these protests. Psychological science provides an important lens through which to view, understand, and potentially reduce these conflicts. In this article, we frame opposing sides' arguments within a long-standing corpus of psychological research on selective perception, confirmation bias, myside bias, illusion of understanding, blind-spot bias, groupthink/in-group bias, motivated skepticism, and naive realism. These concepts inform dueling claims: (a) the protestors' violence was justified by a higher moral responsibility to prevent marginalized groups from being victimized by hate speech, versus (b) the students' right to hear speakers was infringed upon. Psychological science cannot, however, be the sole arbiter of these campus debates; legal and philosophical considerations are also relevant. Thus, we augment psychological science with insights from these literatures to shed light on complexities associated with positions supporting free speech and those protesting hate speech. We conclude with a set of principles, most supported by empirical research, to inform university policies and help ensure vigorous freedom of expression within the context of an inclusive, diverse community.

  12. Prevalence of speech and language disorders in children in northern Kosovo and Metohija

    Directory of Open Access Journals (Sweden)

    Nešić Blagoje V.

    2011-01-01

    Full Text Available On the territory of the northern part of Kosovo and Metohija (the Kosovo municipalities of Mitrovica, Zvecan, Leposavic and Zubin Potok), a study was conducted in primary schools in order to determine the presence of speech-language disorders in children of early school age. Data were collected from the teachers of the third and fourth grades of primary schools in these municipalities (n = 36), covering a total of 641 students. The results show that the number of children with speech and language disorders varies across the municipalities (the largest in Leposavic, the smallest in Zvecan), and that 3/4 of the children with speech and language disorders are boys. It was also found that speech-language disorders usually appear from the very beginning of schooling and that the surveyed teachers recognize 12 types of speech-language disorders in their students. Teachers identify dyslexia as the most common speech-language disorder, while dysphasia and distortion are, in their opinion, the least common. The results show that the children are generally accepted by their peers, but only during schooling; there is, however, a difference in school success between children with speech and language disorders and children without them. It was also found that teachers' work is generally not hampered by children with speech and language disorders, and that there is generally intensive cooperation between teachers and the parents of children with speech and language disorders. The results on the prevalence of speech-language disorders in children in northern Kosovo and Metohija can be considered important guidelines for future work.

  13. Noise-robust speech triage.

    Science.gov (United States)

    Bartos, Anthony L; Cipr, Tomas; Nelson, Douglas J; Schwarz, Petr; Banowetz, John; Jerabek, Ladislav

    2018-04-01

    A method is presented in which conventional speech algorithms are applied, with no modifications, to improve their performance in extremely noisy environments. It has been demonstrated that, for eigen-channel algorithms, pre-training multiple speaker identification (SID) models at a lattice of signal-to-noise-ratio (SNR) levels and then performing SID using the appropriate SNR dependent model was successful in mitigating noise at all SNR levels. In those tests, it was found that SID performance was optimized when the SNR of the testing and training data were close or identical. In this current effort multiple i-vector algorithms were used, greatly improving both processing throughput and equal error rate classification accuracy. Using identical approaches in the same noisy environment, performance of SID, language identification, gender identification, and diarization were significantly improved. A critical factor in this improvement is speech activity detection (SAD) that performs reliably in extremely noisy environments, where the speech itself is barely audible. To optimize SAD operation at all SNR levels, two algorithms were employed. The first maximized detection probability at low levels (-10 dB ≤ SNR < +10 dB) using just the voiced speech envelope, and the second exploited features extracted from the original speech to improve overall accuracy at higher quality levels (SNR ≥ +10 dB).
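
    The central idea reported above is to pre-train models at a lattice of SNR levels and, at test time, select the model whose training SNR best matches the estimated SNR of the input. A minimal sketch of that selection step follows; the SNR grid, the crude SNR estimator, and the model interface are illustrative assumptions rather than the paper's implementation.

        # Illustrative sketch of SNR-dependent model selection: models are
        # pre-trained at a lattice of SNR levels and the one closest to the
        # estimated SNR of the test signal is used. The SNR estimator and
        # the model interface are illustrative assumptions, not the paper's.
        import numpy as np

        SNR_LATTICE_DB = [-10, -5, 0, 5, 10, 20]       # assumed training lattice

        def estimate_snr_db(signal, vad_mask):
            """Crude SNR estimate from samples labelled speech/non-speech by a VAD."""
            speech_power = np.mean(signal[vad_mask] ** 2) + 1e-12
            noise_power = np.mean(signal[~vad_mask] ** 2) + 1e-12
            return 10 * np.log10(speech_power / noise_power)

        def select_model(models_by_snr, signal, vad_mask):
            snr = estimate_snr_db(signal, vad_mask)
            nearest = min(models_by_snr, key=lambda level: abs(level - snr))
            return models_by_snr[nearest]

        # models_by_snr would map each lattice level to a SID model trained at
        # that SNR, e.g. {snr: train_sid_model(noisy_data(snr)) for snr in SNR_LATTICE_DB}.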

  14. Effects of hearing loss on speech recognition under distracting conditions and working memory in the elderly

    Directory of Open Access Journals (Sweden)

    Na W

    2017-08-01

    Full Text Available Wondo Na,1 Gibbeum Kim,1 Gungu Kim,1 Woojae Han,2 Jinsook Kim2 1Department of Speech Pathology and Audiology, Graduate School, 2Division of Speech Pathology and Audiology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon, Republic of Korea Purpose: The current study aimed to evaluate hearing-related changes in terms of speech-in-noise processing, fast-rate speech processing, and working memory, and to identify which of these three factors is significantly affected by age-related hearing loss. Methods: One hundred subjects aged 65–84 years participated in the study. They were classified into four groups ranging from normal hearing to moderate-to-severe hearing loss. All the participants were tested for speech perception in quiet and noisy conditions and for speech perception with time alteration in quiet conditions. Forward- and backward-digit span tests were also conducted to measure the participants’ working memory. Results: (1) As the level of background noise increased, speech perception scores systematically decreased in all the groups. This pattern was more noticeable in the three hearing-impaired groups than in the normal hearing group. (2) As the speech rate increased, speech perception scores decreased. A significant interaction was found between speed of speech and hearing loss. In particular, the 30%-compressed sentences revealed a clear differentiation between moderate hearing loss and moderate-to-severe hearing loss. (3) Although all the groups showed a longer span on the forward-digit span test than on the backward-digit span test, there was no significant difference as a function of hearing loss. Conclusion: The degree of hearing loss strongly affects the recognition of babble-masked and time-compressed speech in the elderly but does not affect working memory. We expect these results to be applied to appropriate rehabilitation strategies for hearing

  15. Voice Activity Detection. Fundamentals and Speech Recognition System Robustness

    OpenAIRE

    Ramirez, J.; Gorriz, J. M.; Segura, J. C.

    2007-01-01

    This chapter has presented an overview of the main challenges in robust speech detection and a review of the state of the art and applications. VADs are frequently used in a number of applications including speech coding, speech enhancement and speech recognition. A precise VAD extracts a set of discriminative speech features from the noisy speech and formulates the decision in terms of a well-defined rule. The chapter has summarized three robust VAD methods that yield high speech/non-speech discri...
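
    As a point of reference for the decision-rule formulation described above, the sketch below implements a deliberately simple energy-based VAD: frame the signal, compute log energy, and compare it against a threshold derived from the quietest frames. Frame length, hop, and threshold margin are illustrative assumptions; the robust VADs reviewed in the chapter use far richer features and decision rules.

        # Illustrative sketch of a very simple energy-based VAD: frame the
        # signal, compute log energy, and compare it with a threshold set
        # from the quietest frames. Frame length and threshold margin are
        # illustrative assumptions, far simpler than the chapter's robust VADs.
        import numpy as np

        def simple_vad(x, sr, frame_ms=25, hop_ms=10, margin_db=6.0):
            frame = int(sr * frame_ms / 1000)
            hop = int(sr * hop_ms / 1000)
            n_frames = max(0, 1 + (len(x) - frame) // hop)
            energy_db = np.array([
                10 * np.log10(np.mean(x[i * hop:i * hop + frame] ** 2) + 1e-12)
                for i in range(n_frames)
            ])
            noise_floor = np.percentile(energy_db, 10)  # assume quietest 10% is noise
            return energy_db > noise_floor + margin_db  # True = speech frame

        sr = 16000
        x = np.concatenate([np.zeros(sr // 2),
                            0.3 * np.random.randn(sr),  # placeholder "speech"
                            np.zeros(sr // 2)])
        speech_frames = simple_vad(x, sr)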

  16. Researcher Story: Stuttering

    Science.gov (United States)

    In a 2010 movie, The King’s Speech, many ... effects of the disorder. How Do Researchers Study Stuttering? ...

  17. Researcher Story: Stuttering

    Medline Plus

    Full Text Available ... Have a Question In the News Researcher Story: Stuttering In a 2010 movie, The King’s Speech, many ... effects of the disorder. How Do Researchers Study Stuttering? Video of How Do Researchers Study Stuttering? A ...

  18. Research

    African Journals Online (AJOL)

    ebutamanya

    2015-08-11

    Aug 11, 2015 ... Key words: Erygmophonic speech, perturbation analysis method, ... However, erygmophonic voice shows also higher and extremely variable Error ... do not display the perceptual stability characteristic of human.

  19. Experiments on Automatic Recognition of Nonnative Arabic Speech

    Directory of Open Access Journals (Sweden)

    Douglas O'Shaughnessy

    2008-05-01

    Full Text Available The automatic recognition of foreign-accented Arabic speech is a challenging task since it involves a large number of nonnative accents. As well, the nonnative speech data available for training are generally insufficient. Moreover, as compared to other languages, the Arabic language has sparked a relatively small number of research efforts. In this paper, we are concerned with the problem of nonnative speech in a speaker-independent, large-vocabulary speech recognition system for modern standard Arabic (MSA). We analyze some major differences at the phonetic level in order to determine which phonemes have a significant part in the recognition performance for both native and nonnative speakers. Special attention is given to specific Arabic phonemes. The performance of an HMM-based Arabic speech recognition system is analyzed with respect to speaker gender and its native origin. The WestPoint modern standard Arabic database from the language data consortium (LDC) and the hidden Markov Model Toolkit (HTK) are used throughout all experiments. Our study shows that the best performance in the overall phoneme recognition is obtained when nonnative speakers are involved in both training and testing phases. This is not the case when a language model and phonetic lattice networks are incorporated in the system. At the phonetic level, the results show that female nonnative speakers perform better than nonnative male speakers, and that emphatic phonemes yield a significant decrease in performance when they are uttered by both male and female nonnative speakers.

  20. Experiments on Automatic Recognition of Nonnative Arabic Speech

    Directory of Open Access Journals (Sweden)

    Selouani Sid-Ahmed

    2008-01-01

    Full Text Available The automatic recognition of foreign-accented Arabic speech is a challenging task since it involves a large number of nonnative accents. In addition, the nonnative speech data available for training are generally insufficient. Moreover, compared to other languages, the Arabic language has sparked a relatively small number of research efforts. In this paper, we are concerned with the problem of nonnative speech in a speaker-independent, large-vocabulary speech recognition system for modern standard Arabic (MSA). We analyze some major differences at the phonetic level in order to determine which phonemes have a significant part in the recognition performance for both native and nonnative speakers. Special attention is given to specific Arabic phonemes. The performance of an HMM-based Arabic speech recognition system is analyzed with respect to speaker gender and native origin. The WestPoint modern standard Arabic database from the Linguistic Data Consortium (LDC) and the Hidden Markov Model Toolkit (HTK) are used throughout all experiments. Our study shows that the best performance in overall phoneme recognition is obtained when nonnative speakers are involved in both the training and testing phases. This is not the case when a language model and phonetic lattice networks are incorporated in the system. At the phonetic level, the results show that female nonnative speakers perform better than male nonnative speakers, and that emphatic phonemes yield a significant decrease in performance when they are uttered by both male and female nonnative speakers.

  1. Speech Respiratory Measures in Spastic Cerebral Palsied and Normal Children

    Directory of Open Access Journals (Sweden)

    Hashem Shemshadi

    2007-10-01

    Full Text Available Objective: This research was designed to determine speech respiratory measures in children with spastic cerebral palsy versus normal children, to be used as an applicable tool in speech therapy plans. Materials & Methods: In a comparative cross-sectional (case–control) study, using goal-oriented sampling for the cases and convenience sampling for the controls, twenty children with spastic cerebral palsy and twenty normal children were identified and matched for age (5–12 years old) and sex (F=20, M=20). All inclusion and exclusion criteria were checked through thorough past medical, clinical, and paraclinical reviews, such as chest X-ray and complete blood count, to rule out any possible pulmonary and/or systemic disorders. Speech respiratory indices were determined with a respirometer (ST 1-dysphonia), made and normalized by Glasgow University. The obtained data were analyzed with the independent t-test. Results: There were significant differences between the case and control groups for mean tidal volume, phonatory volume, and vital capacity at α=0.05; these values in the patients were lower (by 34%) than in the normal children (P<0.001). Conclusion: The obtained measures are highly crucial for speech therapists in any primary rehabilitative speech therapy plan for children with spastic cerebral palsy.
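    For readers unfamiliar with the statistical test named above, a toy sketch follows (the numbers are invented, not the study's data): an independent-samples t-test comparing mean tidal volume between a cerebral-palsy group and a control group.

```python
# Illustrative only: hypothetical tidal-volume values for two groups,
# compared with an independent-samples t-test at alpha = 0.05.
from scipy import stats

cp_tidal_volume      = [0.21, 0.18, 0.25, 0.20, 0.22]   # litres, hypothetical
control_tidal_volume = [0.33, 0.35, 0.30, 0.36, 0.31]   # litres, hypothetical

t, p = stats.ttest_ind(cp_tidal_volume, control_tidal_volume)
print(f"t = {t:.2f}, p = {p:.4f}")   # reject H0 if p < 0.05
```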

  2. Automated Intelligibility Assessment of Pathological Speech Using Phonological Features

    Directory of Open Access Journals (Sweden)

    Catherine Middag

    2009-01-01

    Full Text Available It is commonly acknowledged that word or phoneme intelligibility is an important criterion in the assessment of the communication efficiency of a pathological speaker. People have therefore put a lot of effort into the design of perceptual intelligibility rating tests. These tests usually have the drawback that they employ unnatural speech material (e.g., nonsense words) and that they cannot fully exclude errors due to listener bias. Therefore, there is a growing interest in the application of objective automatic speech recognition technology to automate the intelligibility assessment. Current research is headed towards the design of automated methods which can be shown to produce ratings that correspond well with those emerging from a well-designed and well-performed perceptual test. In this paper, a novel methodology that builds on previous work (Middag et al., 2008) is presented. It utilizes phonological features, automatic speech alignment based on acoustic models trained on normal speech, context-dependent speaker feature extraction, and intelligibility prediction based on a small model that can be trained on pathological speech samples. The experimental evaluation of the new system reveals that the root mean squared error of the discrepancies between perceived and computed intelligibilities can be as low as 8 on a scale of 0 to 100.
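    The evaluation criterion used here, RMSE between perceived and predicted intelligibility, is easy to sketch. The following toy example uses random features, a ridge regressor, and leave-one-out cross-validation as stand-ins; the paper's phonological feature extraction and alignment are not reproduced.

```python
# Toy sketch of the prediction-and-scoring step only (features are random).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))                 # 30 speakers x 8 features (toy data)
y = np.clip(60 + X @ rng.normal(size=8) * 5 + rng.normal(0, 4, 30), 0, 100)

errors = []
for train, test in LeaveOneOut().split(X):   # hold out one speaker at a time
    model = Ridge(alpha=1.0).fit(X[train], y[train])
    errors.append(model.predict(X[test])[0] - y[test][0])

rmse = float(np.sqrt(np.mean(np.square(errors))))
print(f"RMSE on a 0-100 intelligibility scale: {rmse:.1f}")
```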

  3. Indonesian Text-To-Speech System Using Diphone Concatenative Synthesis

    Directory of Open Access Journals (Sweden)

    Sutarman

    2015-02-01

    Full Text Available In this paper, we describe the design and development of an Indonesian diphone database for speech synthesis, using segments of recorded voice to convert text to speech and save it as an audio file such as WAV or MP3. Building the Indonesian diphone database involves several steps. First, the diphone database is developed: a list of sample words covering the diphones is created, prioritizing diphones located in the middle of a word, otherwise at the beginning or end of a word; the sample words are recorded and segmented; and the diphones are created with the tool Diphone Studio 1.3. Second, the system is developed with Delphi 6.0 and includes the conversion of input numbers, acronyms, words, and sentences into diphone representations. Two kinds of conversion are applied in the Indonesian text-to-speech system: one converts the text to be spoken into phonemes, and the other converts the phonemes into speech. The method used in this research is diphone concatenative synthesis, in which recorded sound segments are collected; every segment consists of a diphone (two phonemes). This synthesizer can produce speech with a high level of naturalness. The Indonesian text-to-speech system can differentiate special phonemes, as in ‘Beda’ and ‘Bedak’, but samples of other specific words need to be added to the system. The Indonesian TTS system can handle texts with abbreviations; there is a facility to add such words.
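    The concatenation step itself is simple enough to sketch. The following is a minimal illustration, not the paper's implementation: the diphone inventory, file naming scheme, and hard-coded phoneme sequence are assumptions, and the text is assumed to have already been converted to phonemes, which are paired into diphones and joined from pre-recorded waveforms.

```python
# Minimal diphone-concatenation sketch (assumed inventory and naming scheme).
import numpy as np
import soundfile as sf

def phones_to_diphones(phones):
    """['b', 'e', 'd', 'a'] -> ['b-e', 'e-d', 'd-a']"""
    return [f"{a}-{b}" for a, b in zip(phones, phones[1:])]

def synthesize(phones, diphone_dir="diphones", out_wav="out.wav"):
    chunks, sr = [], None
    for dp in phones_to_diphones(phones):
        audio, sr = sf.read(f"{diphone_dir}/{dp}.wav")  # assumed file naming
        chunks.append(audio)
    sf.write(out_wav, np.concatenate(chunks), sr)

# synthesize(['b', 'e', 'd', 'a'])   # "beda", given a recorded diphone inventory
```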

  4. [Intermodal timing cues for audio-visual speech recognition].

    Science.gov (United States)

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

    The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visual with either 0, 60, 120, 240, or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under the audio-delay conditions of less than 120 ms was significantly better than that under the audio-alone condition. On the other hand, the delay of 120 ms corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms would not disrupt the lip-reading advantage, because visual and auditory information in speech seem to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from competing noise.
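    Generating the delay conditions described above amounts to padding the audio track with leading silence. A minimal sketch follows (the file names are assumptions, not the study's materials).

```python
# Illustrative sketch: delay the audio relative to the video by prepending
# silence of the requested duration (0, 60, 120, 240, 480 ms conditions).
import numpy as np
import soundfile as sf

def delay_audio(in_wav: str, delay_ms: int, out_wav: str) -> None:
    audio, sr = sf.read(in_wav)
    pad = np.zeros((int(sr * delay_ms / 1000),) + audio.shape[1:], dtype=audio.dtype)
    sf.write(out_wav, np.concatenate([pad, audio]), sr)

# for d in (0, 60, 120, 240, 480):
#     delay_audio("sentence.wav", d, f"sentence_delay{d}.wav")
```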

  5. Relationship between individual differences in speech processing and cognitive functions.

    Science.gov (United States)

    Ou, Jinghua; Law, Sam-Po; Fung, Roxana

    2015-12-01

    A growing body of research has suggested that cognitive abilities may play a role in individual differences in speech processing. The present study took advantage of a widespread linguistic phenomenon of sound change to systematically assess the relationships between speech processing and various components of attention and working memory in the auditory and visual modalities among typically developed Cantonese-speaking individuals. The individual variations in speech processing are captured in an ongoing sound change, tone merging, in Hong Kong Cantonese, in which typically developed native speakers are reported to lose the distinctions between some tonal contrasts in perception and/or production. Three groups of participants were recruited: a first group with good perception and production, a second group with good perception but poor production, and a third group with good production but poor perception. Our findings revealed that modality-independent abilities of attentional switching/control and working memory might contribute to individual differences in patterns of speech perception and production, as well as in discrimination latencies, among typically developed speakers. The findings not only have the potential to generalize to speech processing in other languages, but also broaden our understanding of the omnipresent phenomenon of language change in all languages.

  6. Problems in Translating Figures of Speech: A Review of Persian Translations of Harry Potter Series

    Directory of Open Access Journals (Sweden)

    Fatemeh Masroor

    2016-12-01

    Full Text Available Due to the important role of figures of speech in prose, the present research investigated the figures of speech in the Harry Potter series and their Persian translations. The main goal of this research was to investigate the translators’ problems in translating figures of speech from English into Persian. To achieve this goal, the collected data were analyzed and compared with their Persian equivalents. Then, the theories of Newmark (1988 & 2001), Larson (1998), and Nolan (2005) were used in order to identify the strategies applied by the translators for rendering the figures of speech. After identifying the applied translation strategies, descriptive and inferential analyses were applied to answer the research question and test its related hypothesis. The results confirmed that the most common pitfalls in translating figures of speech from English into Persian, based on Nolan (2005), were failing to identify figures of speech and their related meanings, and translating them literally. Overall, the research findings rejected the null hypothesis. The findings of the present research can be useful for translators, especially beginners: they can become aware of the existing problems in translating figures of speech and so avoid committing the same mistakes in their work.

  7. Robust Speaker Authentication Based on Combined Speech and Voiceprint Recognition

    Science.gov (United States)

    Malcangi, Mario

    2009-08-01

    Personal authentication is becoming increasingly important in many applications that have to protect proprietary data. Passwords and personal identification numbers (PINs) prove not to be robust enough to ensure that unauthorized people do not use them. Biometric authentication technology may offer a secure, convenient, accurate solution but sometimes fails due to its intrinsically fuzzy nature. This research aims to demonstrate that combining two basic speech processing methods, voiceprint identification and speech recognition, can provide a very high degree of robustness, especially if fuzzy decision logic is used.
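    As a rough illustration of the score-fusion idea (the membership function, cut-offs, and acceptance threshold below are assumptions, not the paper's decision logic), a voiceprint-match score and a speech-recognition confidence can be combined with a simple fuzzy AND rule before accepting a user.

```python
# Hedged sketch of fusing two authentication scores with a fuzzy rule.
def fuzzy_accept(voiceprint_score: float, asr_confidence: float) -> bool:
    """Both inputs are assumed to lie in [0, 1]."""
    def high(x, lo=0.5, hi=0.8):
        # piecewise-linear membership function for "x is high"
        return min(1.0, max(0.0, (x - lo) / (hi - lo)))

    # Rule: accept if (voiceprint is high) AND (recognition confidence is high);
    # the fuzzy AND is taken as the minimum of the two memberships.
    accept_degree = min(high(voiceprint_score), high(asr_confidence))
    return accept_degree >= 0.5

print(fuzzy_accept(0.9, 0.75))   # True
print(fuzzy_accept(0.9, 0.55))   # False
```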

  8. Variable Span Filters for Speech Enhancement

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Benesty, Jacob; Christensen, Mads Græsbøll

    2016-01-01

    In this work, we consider enhancement of multichannel speech recordings. Linear filtering and subspace approaches have been considered previously for solving the problem. The current linear filtering methods, although many variants exist, have limited control of noise reduction and speech...

  9. Quick Statistics about Voice, Speech, and Language

    Science.gov (United States)

    ... Quick Statistics About Voice, Speech, Language. Voice, Speech, Language, and ... no 205. Hyattsville, MD: National Center for Health Statistics. 2015. Hoffman HJ, Li C-M, Losonczy K, ...

  10. A NOVEL APPROACH TO STUTTERED SPEECH CORRECTION

    Directory of Open Access Journals (Sweden)

    Alim Sabur Ajibola

    2016-06-01

    Full Text Available Stuttered speech is dysfluency-rich speech, more prevalent in males than in females. It has been associated with insufficient air pressure or poor articulation, even though the root causes are more complex. Its primary features include prolonged and repetitive speech, while some of its secondary features include anxiety, fear, and shame. This study used LPC analysis and synthesis algorithms to reconstruct the stuttered speech. The results were evaluated using cepstral distance, Itakura-Saito distance, mean square error, and likelihood ratio. These measures implied perfect speech reconstruction quality. ASR was used for further testing, and the results showed that all the reconstructed speech samples were perfectly recognized, while only three samples of the original speech were perfectly recognized.
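    A hedged sketch of the LPC analysis/synthesis loop mentioned above follows. The model order, frame length, and residual-excited resynthesis are assumptions; the paper's own correction pipeline is not reproduced here.

```python
# Frame-wise LPC analysis/resynthesis sketch (assumed order and frame length).
# librosa.lpc fits the all-pole model; scipy.signal.lfilter applies the
# analysis (inverse) filter and the synthesis filter.
import numpy as np
import librosa
from scipy.signal import lfilter

def lpc_resynthesize(y, order=16, frame=400, hop=400):
    out = np.zeros_like(y)
    for start in range(0, len(y) - frame, hop):
        seg = y[start:start + frame]
        a = librosa.lpc(seg, order=order)                        # a[0] == 1
        residual = lfilter(a, [1.0], seg)                        # prediction error
        out[start:start + frame] = lfilter([1.0], a, residual)   # all-pole resynthesis
    return out

# y, sr = librosa.load("stuttered.wav", sr=None)
# y_hat = lpc_resynthesize(y)
# mse = float(np.mean((y - y_hat) ** 2))   # one of the quality measures listed above
```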

  11. Developmental language and speech disability.

    Science.gov (United States)

    Spiel, G; Brunner, E; Allmayer, B; Pletz, A

    2001-09-01

    Speech disabilities (articulation deficits) and language disorders, both expressive (vocabulary) and receptive (language comprehension), are not uncommon in children. An overview of these, along with a global description of the impairment of communication and the clinical characteristics of developmental language disorders, is presented in this article. The diagnostic schemes applied in the European and Anglo-American speech areas, ICD-10 and DSM-IV, are explained and compared. Because of their strengths and weaknesses, an alternative classification of language and speech developmental disorders is proposed, which allows a differentiation between expressive and receptive language capabilities with regard to the semantic and the morphological/syntactic domains. Prevalence and comorbidity rates, psychosocial influences, biological factors, and the interaction of biological and social factors are discussed. The necessity of using standardized examinations is emphasised. General logopaedic treatment paradigms, specific therapy concepts, and an overview of prognosis are described.

  12. Motor Speech Phenotypes of Frontotemporal Dementia, Primary Progressive Aphasia, and Progressive Apraxia of Speech

    Science.gov (United States)

    Poole, Matthew L.; Brodtmann, Amy; Darby, David; Vogel, Adam P.

    2017-01-01

    Purpose: Our purpose was to create a comprehensive review of speech impairment in frontotemporal dementia (FTD), primary progressive aphasia (PPA), and progressive apraxia of speech in order to identify the most effective measures for diagnosis and monitoring, and to elucidate associations between speech and neuroimaging. Method: Speech and…

  13. Visual context enhanced. The joint contribution of iconic gestures and visible speech to degraded speech comprehension.

    NARCIS (Netherlands)

    Drijvers, L.; Özyürek, A.

    2017-01-01

    Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech

  14. Listeners Experience Linguistic Masking Release in Noise-Vocoded Speech-in-Speech Recognition

    Science.gov (United States)

    Viswanathan, Navin; Kokkinakis, Kostas; Williams, Brittany T.

    2018-01-01

    Purpose: The purpose of this study was to evaluate whether listeners with normal hearing perceiving noise-vocoded speech-in-speech demonstrate better intelligibility of target speech when the background speech was mismatched in language (linguistic release from masking [LRM]) and/or location (spatial release from masking [SRM]) relative to the…

  15. Predicting Speech Intelligibility with a Multiple Speech Subsystems Approach in Children with Cerebral Palsy

    Science.gov (United States)

    Lee, Jimin; Hustad, Katherine C.; Weismer, Gary

    2014-01-01

    Purpose: Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystems approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Method: Nine acoustic variables reflecting different subsystems, and…

  16. Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension

    Science.gov (United States)

    Drijvers, Linda; Ozyurek, Asli

    2017-01-01

    Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Method:…
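    Noise-vocoding, the degradation method referred to in this record, is straightforward to sketch. The example below is illustrative only: the number of bands, band edges, filter order, and envelope cutoff are assumptions, not the study's parameters. It splits the signal into log-spaced bands, extracts each band's amplitude envelope, and uses it to modulate band-limited noise.

```python
# Simple noise vocoder sketch (assumed parameters; f_hi must stay below sr/2).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(y, sr, n_bands=6, f_lo=80.0, f_hi=7000.0, env_cut=30.0):
    rng = np.random.default_rng(0)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)          # log-spaced band edges
    env_sos = butter(4, env_cut, btype="lowpass", fs=sr, output="sos")
    out = np.zeros_like(y)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        band = sosfiltfilt(band_sos, y)
        envelope = sosfiltfilt(env_sos, np.abs(band))       # smoothed amplitude envelope
        carrier = sosfiltfilt(band_sos, rng.standard_normal(len(y)))
        out += envelope * carrier                           # envelope-modulated noise band
    return out
```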

  17. An experimental Dutch keyboard-to-speech system for the speech impaired

    NARCIS (Netherlands)

    Deliege, R.J.H.

    1989-01-01

    An experimental Dutch keyboard-to-speech system has been developed to explore the possibilities and limitations of Dutch speech synthesis in a communication aid for the speech impaired. The system uses diphones and a formant synthesizer chip for speech synthesis. Input to the system is in

  18. Perceived Liveliness and Speech Comprehensibility in Aphasia: The Effects of Direct Speech in Auditory Narratives

    Science.gov (United States)

    Groenewold, Rimke; Bastiaanse, Roelien; Nickels, Lyndsey; Huiskes, Mike

    2014-01-01

    Background: Previous studies have shown that in semi-spontaneous speech, individuals with Broca's and anomic aphasia produce relatively many direct speech constructions. It has been claimed that in "healthy" communication direct speech constructions contribute to the liveliness, and indirectly to the comprehensibility, of speech.…

  19. Poor Speech Perception Is Not a Core Deficit of Childhood Apraxia of Speech: Preliminary Findings

    Science.gov (United States)

    Zuk, Jennifer; Iuzzini-Seigel, Jenya; Cabbage, Kathryn; Green, Jordan R.; Hogan, Tiffany P.

    2018-01-01

    Purpose: Childhood apraxia of speech (CAS) is hypothesized to arise from deficits in speech motor planning and programming, but the influence of abnormal speech perception in CAS on these processes is debated. This study examined speech perception abilities among children with CAS with and without language impairment compared to those with…

  20. Common neural substrates support speech and non-speech vocal tract gestures.

    Science.gov (United States)

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M J; Poletto, Christopher J; Ludlow, Christy L

    2009-08-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as nonsense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, was compared to the production of speech syllables without meaning. Brain activation related to overt production was captured with BOLD fMRI using a sparse sampling design for both conditions. Speech and non-speech were compared using voxel-wise whole-brain analyses, and ROI analyses focused on frontal and temporoparietal structures previously reported to support speech production. Results showed substantial activation overlap between speech and non-speech in these regions. Although non-speech gesture production showed greater extent and amplitude of activation in the regions examined, both speech and non-speech showed comparable left laterality in activation for both target perception and production. These findings posit a more general role for the previously proposed "auditory dorsal stream" in the left hemisphere: supporting the production of vocal tract gestures that are not limited to speech processing.
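    One simple way to quantify the kind of activation overlap reported here is a Dice coefficient between thresholded statistical maps. The sketch below is illustrative only; the file names and z-threshold are assumptions, and it is not the study's analysis pipeline.

```python
# Dice overlap between two thresholded activation maps (assumed inputs).
import numpy as np
import nibabel as nib

def dice_overlap(map_a: str, map_b: str, z_thresh: float = 3.1) -> float:
    a = nib.load(map_a).get_fdata() > z_thresh   # binary mask: speech condition
    b = nib.load(map_b).get_fdata() > z_thresh   # binary mask: non-speech condition
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# print(dice_overlap("speech_zmap.nii.gz", "nonspeech_zmap.nii.gz"))
```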