WorldWideScience

Sample records for tissues re-establish speech

  1. Speech intelligibility after gingivectomy of excess palatal tissue

    Directory of Open Access Journals (Sweden)

    Aruna Balasundaram

    2014-01-01

    Full Text Available The aim was to assess any enhancement in speech following gingivectomy of enlarged anterior palatal gingiva. The periodontal literature has documented various conditions, pathophysiologies, and treatment modalities of gingival enlargement, but the relationship between gingival maladies and speech alteration has received scant attention. This case report describes improvement of an altered speech pattern secondary to a gingivectomy procedure. A systemically healthy 24-year-old female patient presented with bilateral anterior gingival enlargement, provisionally diagnosed as "gingival abscess with inflammatory enlargement" on the palatal aspect of the right maxillary canine to the left maxillary canine. A bilateral gingivectomy was performed with an external bevel incision on the anterior palatal gingiva, and a large wedge of epithelium and connective tissue was removed. The patient and her close acquaintances noticed a marked improvement in her pronunciation and enunciation of sounds such as "t", "d", "n", "l", and "th" following removal of the excess palatal gingival tissue, and the change was also rated on a visual analog scale. The linguistic literature documents the significance of tongue-palate contact during speech; excess gingival tissue in the palatal region disrupts speech by altering this contact, and periodontal surgery such as gingivectomy may improve the disrupted phonetics. In summary, excess palatal gingival tissue impedes tongue-palate contact and interferes with speech, pronunciation of consonants such as "t", "d", "n", "l", and "th" is altered by anterior enlarged palatal gingiva, and excision of the enlarged tissue results in improved speech.

  2. Speech pattern improvement following gingivectomy of excess palatal tissue.

    Science.gov (United States)

    Holtzclaw, Dan; Toscano, Nicholas

    2008-10-01

    Speech disruption secondary to excessive gingival tissue has received scant attention in periodontal literature. Although a few articles have addressed the causes of this condition, documentation and scientific explanation of treatment outcomes are virtually non-existent. This case report describes speech pattern improvements secondary to periodontal surgery and provides a concise review of linguistic and phonetic literature pertinent to the case. A 21-year-old white female with a history of gingival abscesses secondary to excessive palatal tissue presented for treatment. Bilateral gingivectomies of palatal tissues were performed with inverse bevel incisions extending distally from teeth #5 and #12 to the maxillary tuberosities, and large wedges of epithelium/connective tissue were excised. Within the first month of the surgery, the patient noted "changes in the manner in which her tongue contacted the roof of her mouth" and "changes in her speech." Further anecdotal investigation revealed the patient's enunciation of sounds such as "s," "sh," and "k" was greatly improved following the gingivectomy procedure. Palatometric research clearly demonstrates that the tongue has intimate contact with the lateral aspects of the posterior palate during speech. Gingival excess in this and other palatal locations has the potential to alter linguopalatal contact patterns and disrupt normal speech patterns. Surgical correction of this condition via excisional procedures may improve linguopalatal contact patterns which, in turn, may lead to improved patient speech.

  3. Establishment, maintenance, and re-establishment of the safe and efficient steady-following state

    International Nuclear Information System (INIS)

    Pan Deng; Zheng Ying-Ping

    2015-01-01

    We present an integrated mathematical model of vehicle-following control for the establishment, maintenance, and re-establishment of the previous or a new safe and efficient steady-following state. Hyperbolic functions are introduced to establish the corresponding mathematical models, which describe the behavioral adjustment of the following vehicle steered by a well-experienced driver under complex vehicle-following situations. From the proposed models, the control laws by which the following vehicle adjusts its own behavior can be calculated so that it moves with safety, efficiency, and smoothness (comfort). Simulation results show that the safe and efficient steady-following state can be well established, maintained, and re-established through smooth (comfortable) behavioral adjustment, with synchronous control of the following vehicle's velocity, acceleration, and the actual following distance. (paper)
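
    As a rough illustration of the kind of smooth behavioural-adjustment law the abstract describes, the sketch below assumes a bounded, tanh-shaped acceleration command driven by the gap error and the relative velocity. The function name, gains, and values are hypothetical and are not taken from the paper.

        # Minimal sketch of a smooth car-following control law (assumed form, not the authors' model).
        import math

        def following_acceleration(gap, gap_desired, dv,
                                   k_gap=0.5, k_dv=1.0, a_max=3.0):
            """Bounded, smooth acceleration command (m/s^2) for the following vehicle.

            gap         -- actual following distance (m)
            gap_desired -- safe and efficient steady-following distance (m)
            dv          -- leader velocity minus follower velocity (m/s)
            """
            # tanh keeps the command smooth and bounded, mimicking the comfortable
            # behavioural adjustment described in the abstract.
            return a_max * math.tanh(k_gap * (gap - gap_desired) + k_dv * dv)

        # Example: follower is 5 m too close and closing at 2 m/s -> firm braking.
        print(following_acceleration(gap=25.0, gap_desired=30.0, dv=-2.0))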

  4. Telehealth Delivery of Rapid Syllable Transitions (ReST) Treatment for Childhood Apraxia of Speech

    Science.gov (United States)

    Thomas, Donna C.; McCabe, Patricia; Ballard, Kirrie J.; Lincoln, Michelle

    2016-01-01

    Background: Rapid Syllable Transitions (ReST) treatment uses pseudo-word targets with varying lexical stress to simultaneously target articulation, prosodic accuracy and coarticulatory transitions in childhood apraxia of speech (CAS). The treatment is efficacious for the acquisition of imitated pseudo-words, and generalization of skill to…

  5. The effects of vascularized tissue transfer on re-irradiation

    International Nuclear Information System (INIS)

    Narayan, K.; Ashton, M.W.; Taylor, G.I.

    1996-01-01

    Purpose: Radical re-irradiation of locally recurrent squamous cell carcinoma is being attempted increasingly often. The process usually involves some form of surgical excision and vascularized tissue transfer followed by re-irradiation. The aim of this study was to examine the extent of protection from the effects of re-irradiation provided by vascularized tissue transfer. Methods and Materials: One hundred Sprague Dawley rats had their left thighs irradiated to a total dose of 72 Gy in 8 fractions, one fraction per day, 5 days per week. The rats were then divided into two groups: at 4 months, one half of the rats had 50% of their quadriceps musculature excised and replaced with a vascularized, non-irradiated rectus abdominis myocutaneous flap; the other group served as the control. Six months after the initial radiotherapy, all rats were re-irradiated with either 75% or 90% of the original dose. The incidence and extent of necrosis were measured. The microvasculature of the control, the transplanted muscle and the recipient site was studied by a micro-corrosion cast technique and histology of cast specimens. Tissues were sampled at pre-irradiation and at 2, 6 and 12 months post re-irradiation. Microvascular surface area was measured from the histology of cast specimens. Results: Necrosis in the control group was clinically evident at 6 weeks post re-irradiation, and by 10 months all rats had developed necrosis. Forty per cent of the thighs that received 75% of the original dose on re-irradiation did not develop any necrosis by 13 months. The other groups developed necrosis to variable extents; however, a rim of tissue around the graft always survived, with an average thickness of 9 mm (range 4-25 mm). Neither the transferred flap nor the re-irradiated recipient quadriceps developed necrosis. Conclusion: 1. The transplanted rectus abdominis myocutaneous flap and undisturbed muscle have similar radiation tolerance. 2. A vascularized myocutaneous flap offers

  6. Lamb's Chapel Revisited: A Mixed Message on Establishment of Religion, Forum and Free Speech.

    Science.gov (United States)

    Mawdsley, Ralph D.

    1995-01-01

    The Supreme Court in "Lamb's Chapel" unanimously reversed federal district and court of appeals decisions that had upheld school district rules prohibiting use of school district property "by any group for religious purposes." Discusses three issues within the context of religious speech: establishment of religion, free speech,…

  7. A case series of re-establishment of neuromuscular block with rocuronium after sugammadex reversal.

    Science.gov (United States)

    Iwasaki, Hajime; Sasakawa, Tomoki; Takahoko, Kenichi; Takagi, Shunichi; Nakatsuka, Hideki; Suzuki, Takahiro; Iwasaki, Hiroshi

    2016-06-01

    We report the use of rocuronium to re-establish neuromuscular block after reversal with sugammadex. The aim of this study was to investigate the relationship between the dose of rocuronium needed to re-establish neuromuscular block and the time interval between sugammadex administration and the re-administration of rocuronium. Patients who required re-establishment of neuromuscular block within 12 h after the reversal of rocuronium-induced neuromuscular block with sugammadex were included. After induction of general anesthesia and placement of the neuromuscular monitor, the protocol to re-establish neuromuscular block was as follows: an initial rocuronium dose of 0.6 mg/kg was followed by additional 0.3 mg/kg doses every 2 min until train-of-four responses were abolished. A total of 11 patients were enrolled in this study. Intervals between sugammadex and the second rocuronium dose were 12-465 min. The total dose of rocuronium needed to re-establish neuromuscular block was 0.6-1.2 mg/kg. Rocuronium 0.6 mg/kg re-established neuromuscular block in all patients who had received the initial sugammadex more than 3 h previously. However, when the interval between sugammadex and the second rocuronium dose was less than 2 h, more than 0.6 mg/kg rocuronium was necessary to re-establish neuromuscular block.

  8. 75 FR 983 - Notice of Re-Establishment of the National Petroleum Council

    Science.gov (United States)

    2010-01-07

    ... DEPARTMENT OF ENERGY Notice of Re-Establishment of the National Petroleum Council AGENCY: Office of Fossil Energy, Department of Energy. ACTION: Notice of Re-Establishment of the National Petroleum... Secretariat, General Services Administration, notice is hereby given that the National Petroleum Council has...

  9. 76 FR 45402 - Advisory Committee; Medical Imaging Drugs Advisory Committee; Re-Establishment

    Science.gov (United States)

    2011-07-29

    .... FDA-2010-N-0002] Advisory Committee; Medical Imaging Drugs Advisory Committee; Re- Establishment... (FDA) is announcing the re- establishment of the Medical Imaging Drugs Advisory Committee in FDA's Center for Drug Evaluation and Research. This rule amends the current language for the Medical Imaging...

  10. [The speech therapist in geriatrics: caregiver, technician-researcher, or both?].

    Science.gov (United States)

    Orellana, Blandine

    2015-01-01

    Geriatric care mostly consists not in curing the patient, but in supporting them to the end of their life, giving meaning to care procedures and actions through speech, touch or look, and maintaining a connection. The helping relationship is omnipresent and the role of the speech therapist is therefore essential in helping to maintain or re-establish elderly patients' ability to communicate. However, today this role is struggling to define itself between that of the technician-researcher and that of the caregiver.

  11. Rapid Syllable Transitions (ReST) treatment for Childhood Apraxia of Speech: the effect of lower dose-frequency.

    Science.gov (United States)

    Thomas, Donna C; McCabe, Patricia; Ballard, Kirrie J

    2014-01-01

    This study investigated the effectiveness of twice-weekly Rapid Syllable Transitions (ReST) treatment for Childhood Apraxia of Speech (CAS). ReST is an effective treatment at a frequency of four sessions a week for three consecutive weeks. In this study we used a multiple-baselines across participants design to examine treatment efficacy for four children with CAS, aged four to eight years, who received ReST treatment twice a week for six weeks. The children's ability to acquire new skills, generalize these skills to untreated items and maintain the skills after treatment was examined. All four children improved their production of the target items. Two of the four children generalized the treatment effects to similar untreated pseudo words and all children generalized to untreated real words. During the maintenance phase, all four participants maintained their skills to four months post-treatment, with a stable rather than rising profile. This study shows that ReST treatment delivered twice-weekly results in significant retention of treatment effects to four months post-treatment and generalization to untrained but related speech behaviors. Compared to ReST therapy four times per week, the twice-weekly frequency produces similar treatment gains but no ongoing improvement after the cessation of treatment. This implies that there may be a small but significant benefit of four times weekly therapy compared with twice-weekly ReST therapy. Readers will be able to define dose-frequency, and describe how this relates to overall intervention intensity. Readers will be able to explain the acquisition, generalization and maintenance effects in the study and describe how these compare to higher dose frequency treatments. Readers will recognize that the current findings give preliminary support for high dose-frequency CAS treatment. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. The importance of establishing an international network of tissue banks and regional tissue processing centers.

    Science.gov (United States)

    Morales Pedraza, Jorge

    2014-03-01

    During the past four decades, many tissue banks have been established across the world with the aim of supplying sterilized tissues for clinical use and research purposes. Between 1972 and 2005, the International Atomic Energy Agency supported the establishment of more than sixty of these tissue banks in Latin America and the Caribbean, Asia and the Pacific, Africa and Eastern Europe; promoted the use of ionizing radiation for the sterilization of the processed tissues; and encouraged cooperation between the established tissue banks during the implementation of its program on radiation and tissue banking at national, regional and international levels. Taking into account that several of the established tissue banks have gained rich experience in the procurement, processing, sterilization, storage, and medical use of sterilized tissues, it is now time to further strengthen international and regional cooperation among interested tissue banks located in different countries. The purpose of this cooperation is to share the experience gained by these banks in the procurement, processing, sterilization, storage, and use of different types of tissues in certain medical treatments and research activities. This could be done through the establishment of a network of tissue banks and a limited number of regional tissue processing centers in different regions of the world.

  13. Estimation of the time since death--reconsidering the re-establishment of rigor mortis.

    Science.gov (United States)

    Anders, Sven; Kunz, Michaela; Gehl, Axel; Sehner, Susanne; Raupach, Tobias; Beck-Bornholdt, Hans-Peter

    2013-01-01

    In forensic medicine, the data underlying the phenomenon of re-establishment of rigor mortis after mechanical loosening, which is used in estimating the time since death in forensic casework and is thought to occur up to 8 h post-mortem, are poorly defined. Nevertheless, the method is widely described in textbooks of forensic medicine. We examined 314 joints (elbow and knee) of 79 deceased individuals at defined time points up to 21 h post-mortem (hpm). Data were analysed using a random intercept model. Here, we show that re-establishment occurred in 38.5% of joints at 7.5 to 19 hpm. Therefore, the maximum time span for the re-establishment of rigor mortis appears to be 2.5-fold longer than previously thought. These findings have a major impact on the estimation of time since death in forensic casework.
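
    The abstract names a random intercept model but not its exact specification; a generic form consistent with the design (joints nested within bodies, binary re-establishment outcome, an assumption on our part) would be

        \operatorname{logit} P(Y_{ij} = 1) = \beta_0 + \beta_1\,\mathrm{hpm}_{ij} + b_i,
        \qquad b_i \sim \mathcal{N}(0, \sigma_b^2),

    where Y_{ij} indicates re-established rigor in joint j of body i, hpm_{ij} is hours post-mortem at examination, and b_i is the per-body random intercept.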

  14. Unvoiced Speech Recognition Using Tissue-Conductive Acoustic Sensor

    Directory of Open Access Journals (Sweden)

    Heracleous Panikos

    2007-01-01

    Full Text Available We present the use of stethoscope and silicon NAM (nonaudible murmur) microphones in automatic speech recognition. NAM microphones are special acoustic sensors, which are attached behind the talker's ear and can capture not only normal (audible) speech, but also very quietly uttered speech (nonaudible murmur). As a result, NAM microphones can be applied in automatic speech recognition systems when privacy is desired in human-machine communication. Moreover, NAM microphones show robustness against noise and they might be used in special systems (speech recognition, speech transform, etc. for sound-impaired people. Using adaptation techniques and a small amount of training data, we achieved for a 20 k dictation task a 93.9% word accuracy for nonaudible murmur recognition in a clean environment. In this paper, we also investigate nonaudible murmur recognition in noisy environments and the effect of the Lombard reflex on nonaudible murmur recognition. We also propose three methods to integrate audible speech and nonaudible murmur recognition using a stethoscope NAM microphone with very promising results.

  15. Unvoiced Speech Recognition Using Tissue-Conductive Acoustic Sensor

    Directory of Open Access Journals (Sweden)

    Hiroshi Saruwatari

    2007-01-01

    Full Text Available We present the use of stethoscope and silicon NAM (nonaudible murmur) microphones in automatic speech recognition. NAM microphones are special acoustic sensors, which are attached behind the talker's ear and can capture not only normal (audible) speech, but also very quietly uttered speech (nonaudible murmur). As a result, NAM microphones can be applied in automatic speech recognition systems when privacy is desired in human-machine communication. Moreover, NAM microphones show robustness against noise and they might be used in special systems (speech recognition, speech transform, etc. for sound-impaired people. Using adaptation techniques and a small amount of training data, we achieved for a 20 k dictation task a 93.9% word accuracy for nonaudible murmur recognition in a clean environment. In this paper, we also investigate nonaudible murmur recognition in noisy environments and the effect of the Lombard reflex on nonaudible murmur recognition. We also propose three methods to integrate audible speech and nonaudible murmur recognition using a stethoscope NAM microphone with very promising results.

  16. Visualizing structures of speech expressiveness

    DEFF Research Database (Denmark)

    Herbelin, Bruno; Jensen, Karl Kristoffer; Graugaard, Lars

    2008-01-01

    Speech is both beautiful and informative. In this work, a conceptual study of speech, encompassing an investigation of the tower of Babel, the archetypal phonemes, and the reasons for the use of language, is undertaken in order to create an artistic work investigating the nature of speech. The artwork is presented at the Re:New festival in May 2008.

  17. The re-establishment of hypersensitive cells in the crypts of irradiated mouse intestine

    International Nuclear Information System (INIS)

    Ijiri, K.; Potten, C.S.

    1984-01-01

    Two doses of γ-radiation separated by various time intervals have been used to investigate when, after irradiation, the cell population susceptible to acute cell death is re-established. Dead cells were scored 3 or 6 h after the second dose. Within 1-2 days of small doses (0.5 Gy) the sensitive cells, recognized histologically as apoptotic cells, are re-established at the base of the crypt (around cell position 6). After higher doses (9.0 Gy) they are not re-established until about the fourth day after irradiation. Even in the enlarged regenerating crypts the sensitive cells are found at the same position at the crypt base. It has been estimated that the crypt contains five or six cells that are susceptible to low doses (0.5 Gy) (hypersensitive cells) and up to a total of only seven or eight susceptible cells that can be induced by any dose to enter the sequence of changes implicit in apoptosis. Between 4 and 10 days after an initial irradiation of 9.0 Gy the total number of susceptible cells increased from seven or eight to about 10-13 per crypt. (author)

  18. Innovative water withdrawal system re-establishes fish migration runs

    International Nuclear Information System (INIS)

    Anon.

    2008-01-01

    This article described a unique water withdrawal and fish bypass structure that is under construction in Oregon to re-establish anadromous fish runs and to improve water quality downstream of the Round Butte dam. Portland General Electric and the Confederated Tribes of the Warm Springs Reservation, which co-own the dam, have committed to re-establish fish runs in response to concerns over the declining numbers of salmon and trout in the region. Water intakes are routinely added at hydroelectric facilities to protect native fish in compliance with the Federal Energy Regulatory Commission and the Clean Water Act. The Round Butte Hydroelectric project had a complex set of challenges whereby surface-current directions had to be changed to help migrating salmon swim easily into a fish handling area and create a fish collection system. CH2M HILL designed the system which consists of a large floating structure, an access bridge, a large vertical conduit and a base structure resting on the lake bed. Instead of using 2D CAD file methods, CH2M HILL decided to take advantage of 3D models to visualize the complex geometry of these structures. The 3D models were used to help designers and consultants understand the issues, resolve conflicts and design solutions. The objective is to have the system operating by the 2009 migrating season. 1 ref., 4 figs

  19. Re-establishment of rigor mortis: evidence for a considerably longer post-mortem time span.

    Science.gov (United States)

    Crostack, Chiara; Sehner, Susanne; Raupach, Tobias; Anders, Sven

    2017-07-01

    Re-establishment of rigor mortis following mechanical loosening is used as part of the complex method for the forensic estimation of the time since death in human bodies and has formerly been reported to occur up to 8-12 h post-mortem (hpm). We recently described our observation of the phenomenon in up to 19 hpm in cases with in-hospital death. Due to the case selection (preceding illness, immobilisation), transfer of these results to forensic cases might be limited. We therefore examined 67 out-of-hospital cases of sudden death with known time points of death. Re-establishment of rigor mortis was positive in 52.2% of cases and was observed up to 20 hpm. In contrast to the current doctrine that a recurrence of rigor mortis is always of a lesser degree than its first manifestation in a given patient, muscular rigidity at re-establishment equalled or even exceeded the degree observed before dissolving in 21 joints. Furthermore, this is the first study to describe that the phenomenon appears to be independent of body or ambient temperature.

  20. CAR2 - Czech Database of Car Speech

    Directory of Open Access Journals (Sweden)

    P. Sovka

    1999-12-01

    Full Text Available This paper presents a new Czech-language two-channel (stereo) speech database recorded in a car environment. The database was designed for experiments with speech enhancement for communication purposes and for the study and design of robust speech recognition systems. Tools for automated phoneme labelling based on Baum-Welch re-estimation were realised. A noise analysis of the car background environment was performed.
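
    The record only names Baum-Welch re-estimation. For reference, the standard re-estimation step for an HMM with initial probabilities π_i, transition probabilities a_{ij} and discrete emission probabilities b_j(k), using the state-occupation and transition posteriors γ_t(i) and ξ_t(i,j) from the forward-backward pass, is

        \bar{\pi}_i = \gamma_1(i), \qquad
        \bar{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)}, \qquad
        \bar{b}_j(k) = \frac{\sum_{t:\,o_t = k} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)},

    iterated until the likelihood of the labelled speech data stops improving. How the CAR2 labelling tools apply this is not described in the record.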

  1. CAR2 - Czech Database of Car Speech

    OpenAIRE

    Pollak, P.; Vopicka, J.; Hanzl, V.; Sovka, Pavel

    1999-01-01

    This paper presents a new Czech-language two-channel (stereo) speech database recorded in a car environment. The database was designed for experiments with speech enhancement for communication purposes and for the study and design of robust speech recognition systems. Tools for automated phoneme labelling based on Baum-Welch re-estimation were realised. A noise analysis of the car background environment was performed.

  2. Loss and re-establishment of desiccation tolerance in the germinated seeds of Sesbania virgata (Cav.) Pers.

    Directory of Open Access Journals (Sweden)

    Tathiana Elisa Masetto

    2015-08-01

    Full Text Available This research aimed to investigate the cellular alterations during the loss and re-establishment of desiccation tolerance (DT) in germinated Sesbania virgata seeds. The loss of DT was characterized in germinated seeds with increasing radicle lengths (1, 2, 3, 4 and 5 mm) when subjected to dehydration in silica gel, followed by rehydration. To re-establish DT, the germinated seeds were incubated for 72 h in polyethylene glycol (PEG, -2.04 MPa) with or without ABA (100 μM) before dehydration in silica gel. Cell viability was assessed by seedling survival, and DNA integrity was evaluated by gel electrophoresis. Seeds with a 1 mm radicle length survived dehydration to the original moisture content (MC) of the dry seed (approximately 10%). PEG treatment was able to re-establish DT, at least partially, in seeds with 2, 3 and 4 mm radicle lengths, but not 5 mm. Germinated seeds treated with PEG+ABA performed better than those treated only with PEG, and DT was re-established even in germinated seeds with a 5 mm radicle length. Among the PEG-treated germinated seeds dehydrated to 10% MC, DNA integrity was maintained only in those with a 1 mm radicle length.

  3. How to re-establish Openness as default? Towards a global joint initiative

    NARCIS (Netherlands)

    Stracke, Christian M.

    2017-01-01

    Stracke, C. M. (2016, 14 April). How to re-establish Openness as default? Towards a global joint initiative. Results from the Action Lab at the Open Education Global Conference 2016, Krakow, Poland. More about the Action Lab:

  4. Treatment of a large periradicular defect using guided tissue regeneration: A case report of 2 years follow-up and surgical re-entry

    Directory of Open Access Journals (Sweden)

    Abhijit Ningappa Gurav

    2015-01-01

    Full Text Available Periradicular (PR) bone defects are common sequelae of chronic endodontic lesions. Sometimes, conventional root canal therapy is not adequate for complete resolution of the lesion. PR surgeries may be warranted in such selected cases. PR surgery provides a ready access for the removal of pathologic tissue from the periapical region, assisting in healing. Recently, the regeneration of the destroyed PR tissues has gained more attention rather than repair. In order to promote regeneration after apical surgery, the principle of guided tissue regeneration (GTR) has proved to be useful. This case presents the management of a large PR lesion in a 42-year-old male subject. The PR lesion associated with 21, 11 and 12 was treated using GTR membrane, fixated with titanium minipins. The case was followed up for 2 years radiographically, and a surgical re-entry confirmed the re-establishment of the lost labial plate. Thus, the principle of GTR may immensely improve the clinical outcome and prognosis of an endodontically involved tooth with a large PR defect.

  5. Treatment of a large periradicular defect using guided tissue regeneration: A case report of 2 years follow-up and surgical re-entry

    Science.gov (United States)

    Gurav, Abhijit Ningappa; Shete, Abhijeet Rajendra; Naiktari, Ritam

    2015-01-01

    Periradicular (PR) bone defects are common sequelae of chronic endodontic lesions. Sometimes, conventional root canal therapy is not adequate for complete resolution of the lesion. PR surgeries may be warranted in such selected cases. PR surgery provides a ready access for the removal of pathologic tissue from the periapical region, assisting in healing. Recently, the regeneration of the destroyed PR tissues has gained more attention rather than repair. In order to promote regeneration after apical surgery, the principle of guided tissue regeneration (GTR) has proved to be useful. This case presents the management of a large PR lesion in a 42-year-old male subject. The PR lesion associated with 21, 11 and 12 was treated using GTR membrane, fixated with titanium minipins. The case was followed up for 2 years radiographically, and a surgical re-entry confirmed the re-establishment of the lost labial plate. Thus, the principle of GTR may immensely improve the clinical outcome and prognosis of an endodontically involved tooth with a large PR defect. PMID:26941526

  6. Audiovisual Temporal Recalibration for Speech in Synchrony Perception and Speech Identification

    Science.gov (United States)

    Asakawa, Kaori; Tanaka, Akihiro; Imai, Hisato

    We investigated whether audiovisual synchrony perception for speech could change after observation of the audiovisual temporal mismatch. Previous studies have revealed that audiovisual synchrony perception is re-calibrated after exposure to a constant timing difference between auditory and visual signals in non-speech. In the present study, we examined whether this audiovisual temporal recalibration occurs at the perceptual level even for speech (monosyllables). In Experiment 1, participants performed an audiovisual simultaneity judgment task (i.e., a direct measurement of the audiovisual synchrony perception) in terms of the speech signal after observation of the speech stimuli which had a constant audiovisual lag. The results showed that the “simultaneous” responses (i.e., proportion of responses for which participants judged the auditory and visual stimuli to be synchronous) at least partly depended on exposure lag. In Experiment 2, we adopted the McGurk identification task (i.e., an indirect measurement of the audiovisual synchrony perception) to exclude the possibility that this modulation of synchrony perception was solely attributable to the response strategy using stimuli identical to those of Experiment 1. The characteristics of the McGurk effect reported by participants depended on exposure lag. Thus, it was shown that audiovisual synchrony perception for speech could be modulated following exposure to constant lag both in direct and indirect measurement. Our results suggest that temporal recalibration occurs not only in non-speech signals but also in monosyllabic speech at the perceptual level.

  7. Individual differences in degraded speech perception

    Science.gov (United States)

    Carbonell, Kathy M.

    One of the lasting concerns in audiology is the unexplained individual difference in speech perception performance even for individuals with similar audiograms. One proposal is that there are cognitive/perceptual individual differences underlying this vulnerability and that these differences are present in normal hearing (NH) individuals but do not reveal themselves in studies that use clear speech produced in quiet (because of a ceiling effect). However, previous studies have failed to uncover cognitive/perceptual variables that explain much of the variance in NH performance on more challenging degraded speech tasks. This lack of strong correlations may be due to either examining the wrong measures (e.g., working memory capacity) or to there being no reliable differences in degraded speech performance in NH listeners (i.e., variability in performance is due to measurement noise). The proposed project has three aims. The first is to establish whether there are reliable individual differences in degraded speech performance for NH listeners that are sustained both across degradation types (speech in noise, compressed speech, noise-vocoded speech) and across multiple testing sessions. The second aim is to establish whether there are reliable differences in NH listeners' ability to adapt their phonetic categories based on short-term statistics, both across tasks and across sessions. The final aim is to determine whether performance on degraded speech perception tasks is correlated with performance on phonetic adaptability tasks, thus establishing a possible explanatory variable for individual differences in speech perception for NH and hearing-impaired listeners.

  8. Effects of Re-heating Tissue Samples to Core Body Temperature on High-Velocity Ballistic Projectile-tissue Interactions.

    Science.gov (United States)

    Humphrey, Caitlin; Henneberg, Maciej; Wachsberger, Christian; Maiden, Nicholas; Kumaratilake, Jaliya

    2017-11-01

    Damage produced by high-speed projectiles on organic tissue will depend on the physical properties of the tissues. Conditioning organic tissue samples to human core body temperature (37°C) prior to conducting ballistic experiments enables their behavior to closely mimic that of living tissues. To minimize autolytic changes after death, the tissues are refrigerated soon after their removal from the body and re-heated to 37°C prior to testing. This research investigates whether heating 50-mm-cube samples of porcine liver, kidney, and heart to 37°C for varying durations (maximum 7 h) can affect the penetration response of a high-speed, steel sphere projectile. Longer conditioning times for heart and liver resulted in a slight loss of velocity/energy of the projectile, but the reverse effect occurred for the kidney. Possible reasons for these trends include autolytic changes causing softening (heart and liver) and dehydration causing an increase in density (kidney). © 2017 American Academy of Forensic Sciences.

  9. Research on the re-establishment of the classification criteria of strategic items

    Energy Technology Data Exchange (ETDEWEB)

    Han, Seong Mi; Yang, Seunghyo; Shin, Dong Hoon [Korea Institute of Nuclear Nonproliferation and Control, Daejeon (Korea, Republic of)

    2014-05-15

    Under these export control laws and regulations, exporters must apply to their government for classification review and export licensing. In this process, a technical review institute such as the Korea Institute of Nuclear Nonproliferation and Control (an institute under the NSSC) refers to the Minister's Regulation for the Export and Import of Strategic Goods, which contains many criteria for classifying the strategic items to be exported. However, these criteria have problems. A typical problem is that the classification criteria for Trigger List Items are generally very qualitative and obscure, in contrast with those for Dual Use Items. In most cases, these characteristics have caused considerable trouble for stakeholders such as the government and nuclear-related companies, so the classification criteria needed to be made more accurate, clear, and objective. To address this, past classification cases for technology were re-analyzed and general criteria were derived in this study. As mentioned above, the classification process and criteria for Trigger List Items were very qualitative and obscure, so the classification criteria were re-established to solve these problems. The extracted results are shown in Tables I and II. The re-established criteria are expected to contribute to the quantification, disambiguation, and objectification of the classification review process. As future work, we will establish probabilities or numerical factors for the extracted criteria through statistical surveys to make better use of them, and we will pursue NSSC approval to use them as classification guidelines for Trigger List Items in review processes.

  10. Establishment of primary keratinocyte culture from horse tissue biopsates

    Directory of Open Access Journals (Sweden)

    Jernej OGOREVC

    2015-12-01

    Full Text Available Primary cell lines established from skin tissue can be used in immunological, proteomic and genomic studies as in vitro skin models. The goal of our study was to establish a primary keratinocyte cell culture from tissue biopsates of two horses. The primary keratinocyte cell culture was obtained by mechanical and enzymatic dissociation and with the explant culture method. The result was a heterogeneous primary culture composed of keratinocytes and fibroblasts. To distinguish epithelial and mesenchymal cells, immunofluorescent characterisation was performed using antibodies against cytokeratin 14 and vimentin. We successfully attained a primary keratinocyte cell line, which could potentially be used to study equine skin diseases, as an animal model for human diseases, and for cosmetic and therapeutic product testing.

  11. Tissue repair genes: the TiRe database and its implication for skin wound healing.

    Science.gov (United States)

    Yanai, Hagai; Budovsky, Arie; Tacutu, Robi; Barzilay, Thomer; Abramovich, Amir; Ziesche, Rolf; Fraifeld, Vadim E

    2016-04-19

    Wound healing is an inherent feature of any multicellular organism and recent years have brought about a huge amount of data regarding regular and abnormal tissue repair. Despite the accumulated knowledge, modulation of wound healing is still a major biomedical challenge, especially in advanced ages. In order to collect and systematically organize what we know about the key players in wound healing, we created the TiRe (Tissue Repair) database, an online collection of genes and proteins that were shown to directly affect skin wound healing. To date, TiRe contains 397 entries for four organisms: Mus musculus, Rattus norvegicus, Sus domesticus, and Homo sapiens. Analysis of the TiRe dataset of skin wound healing-associated genes showed that skin wound healing genes are (i) over-conserved among vertebrates, but are under-conserved in invertebrates; (ii) enriched in extracellular and immuno-inflammatory genes; and display (iii) high interconnectivity and connectivity to other proteins. The latter may provide potential therapeutic targets. In addition, a slower or faster skin wound healing is indicative of an aging or longevity phenotype only when assessed in advanced ages, but not in the young. In the long run, we aim for TiRe to be a one-station resource that provides researchers and clinicians with the essential data needed for a better understanding of the mechanisms of wound healing, designing new experiments, and the development of new therapeutic strategies. TiRe is freely available online at http://www.tiredb.org.

  12. Language disorders in young children: when is speech therapy recommended?

    NARCIS (Netherlands)

    Goorhuis-Brouwer, SM; Knijff, WA

    Objective: Analysis of treatment recommendations given by speech therapists, evaluation of the language abilities of the examined children, and re-examination of those abilities after 12 months. Materials and methods: Thirty-four children, aged between 2.0 and 5.3 years, referred to speech therapists

  13. Next Generation Tissue Engineering of Orthopedic Soft Tissue-to-Bone Interfaces

    Science.gov (United States)

    Boys, Alexander J.; McCorry, Mary Clare; Rodeo, Scott; Bonassar, Lawrence J.; Estroff, Lara A.

    2017-01-01

    Soft tissue-to-bone interfaces are complex structures that consist of gradients of extracellular matrix materials, cell phenotypes, and biochemical signals. These interfaces, called entheses for ligaments, tendons, and the meniscus, are crucial to joint function, transferring mechanical loads and stabilizing orthopedic joints. When injuries occur to connected soft tissue, the enthesis must be re-established to restore function, but due to structural complexity, repair has proven challenging. Tissue engineering offers a promising solution for regenerating these tissues. This prospective review discusses methodologies for tissue engineering the enthesis, outlined in three key design inputs: materials processing methods, cellular contributions, and biochemical factors. PMID:29333332

  14. Production of ¹⁸⁶Re and ¹⁸⁸Re, and synthesis of ¹⁸⁸Re-DTPA

    Energy Technology Data Exchange (ETDEWEB)

    Hashimoto, Kazuyuki; Motoishi, Shoji; Kobayashi, Katsutoshi; Izumo, Mishiroku [Department of Radioisotopes, Japan Atomic Energy Research Institute, Tokai, Ibaraki (Japan); Musdja, Muhammad Yanis

    1999-08-01

    Production of the radioactive rhenium isotopes ¹⁸⁶Re and ¹⁸⁸Re, and synthesis of ¹⁸⁸Re-DTPA, have been studied. For ¹⁸⁶Re, a production method by the ¹⁸⁵Re(n, γ)¹⁸⁶Re reaction in a reactor has been established. For ¹⁸⁸Re, a production method by the double neutron capture reaction of ¹⁸⁶W, which produces a ¹⁸⁸W/¹⁸⁸Re generator, has been established. For the synthesis of ¹⁸⁸Re-DTPA, the optimum conditions, including pH, the amounts of reagents and so on, have been determined. (author)

  15. Sparsity in Linear Predictive Coding of Speech

    DEFF Research Database (Denmark)

    Giacobello, Daniele

    of the effectiveness of their application in audio processing. The second part of the thesis deals with introducing sparsity directly in the linear prediction analysis-by-synthesis (LPAS) speech coding paradigm. We first propose a novel near-optimal method to look for a sparse approximate excitation using a compressed...... one with direct applications to coding but also consistent with the speech production model of voiced speech, where the excitation of the all-pole filter can be modeled as an impulse train, i.e., a sparse sequence. Introducing sparsity in the LP framework will also bring to develop the concept...... sensing formulation. Furthermore, we define a novel re-estimation procedure to adapt the predictor coefficients to the given sparse excitation, balancing the two representations in the context of speech coding. Finally, the advantages of the compact parametric representation of a segment of speech, given...
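
    The record is heavily abridged. As background, the 1-norm formulation commonly associated with sparse linear prediction (a generic form, not necessarily the exact criterion used in the thesis) replaces the classical least-squares criterion with

        \hat{\mathbf{a}} = \arg\min_{\mathbf{a}} \; \|\mathbf{x} - \mathbf{X}\mathbf{a}\|_1 + \gamma \, \|\mathbf{a}\|_1,

    where x is the speech segment, X is the matrix of its past samples, a holds the predictor coefficients, and the 1-norms promote sparsity in both the prediction residual (the excitation) and the predictor itself.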

  16. Evaluating the utility of hydrocarbons for Re-Os geochronology: establishing the timing of processes in petroleum ore systems

    Energy Technology Data Exchange (ETDEWEB)

    Selby, D.; Creaser, R.A. [Alberta Univ., Edmonton, AB (Canada). Dept. of Earth and Atmospheric Sciences

    2005-07-01

    Oil from 6 Alberta oil sands deposits was analyzed with a rhenium-osmium (Re-Os) isotope chronometer, an emerging tool for determining valuable age information on the timing of petroleum generation and migration. The tool uses molybdenite and other sulphide minerals to establish the timing and duration of mineralization. However, establishing the timing of events in petroleum systems can be problematic because viable sulphides for the Re-Os chronometer are often not available. Therefore, the known presence of Re and Os associated with organic matter in black shale, a common source of hydrocarbons, suggests that the bitumen and petroleum common to petroleum systems may be utilised for Re-Os geochronology. This study evaluated the potential of the Re-Os isotopic system for geochronology and as an isotopic tracer for hydrocarbon systems. The evaluation was based on Re-Os isotopic analyses of bitumen and oil sands. Hydrocarbons formed from migrated oil in both Alberta oil sand deposits and a Paleozoic Mississippi Valley-type lead-zinc deposit contain significant Re and Os contents with high ¹⁸⁷Re/¹⁸⁸Os and radiogenic ¹⁸⁷Os/¹⁸⁸Os ratios suitable for geochronology. The oil from the 6 Alberta oil sand deposits yields Re-Os analyses with very high Re/¹⁸⁸Os ratios and radiogenic Os isotopic compositions. Regression of the Re-Os data yields a date of 116 ± 27 Ma. This date plausibly represents the period of in situ radiogenic growth of ¹⁸⁷Os following hydrocarbon migration and reservoir filling, thereby directly dating these processes; this formation age corresponds with recent burial history models for parts of the Western Canada Sedimentary Basin. The very high initial ¹⁸⁷Os/¹⁸⁸Os for this regression requires rocks much older than Cretaceous for the hydrocarbon source.
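
    For context, Re-Os dates such as the 116 ± 27 Ma figure above come from the standard isochron relation, quoted here for reference (the regression details are not given in the record):

        \left(\frac{{}^{187}\mathrm{Os}}{{}^{188}\mathrm{Os}}\right)
        = \left(\frac{{}^{187}\mathrm{Os}}{{}^{188}\mathrm{Os}}\right)_{\!i}
        + \left(\frac{{}^{187}\mathrm{Re}}{{}^{188}\mathrm{Os}}\right)\left(e^{\lambda t} - 1\right),
        \qquad \lambda({}^{187}\mathrm{Re}) \approx 1.666 \times 10^{-11}\ \mathrm{yr}^{-1},

    so the slope of the best-fit line through co-genetic samples gives e^{\lambda t} - 1, and hence the age t, while the intercept gives the initial ¹⁸⁷Os/¹⁸⁸Os.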

  17. Tissue repair genes: the TiRe database and its implication for skin wound healing

    OpenAIRE

    Yanai, Hagai; Budovsky, Arie; Tacutu, Robi; Barzilay, Thomer; Abramovich, Amir; Ziesche, Rolf; Fraifeld, Vadim E.

    2016-01-01

    Wound healing is an inherent feature of any multicellular organism and recent years have brought about a huge amount of data regarding regular and abnormal tissue repair. Despite the accumulated knowledge, modulation of wound healing is still a major biomedical challenge, especially in advanced ages. In order to collect and systematically organize what we know about the key players in wound healing, we created the TiRe (Tissue Repair) database, an online collection of genes and proteins that ...

  18. The Effectiveness of Clear Speech as a Masker

    Science.gov (United States)

    Calandruccio, Lauren; Van Engen, Kristin; Dhar, Sumitrajit; Bradlow, Ann R.

    2010-01-01

    Purpose: It is established that speaking clearly is an effective means of enhancing intelligibility. Because any signal-processing scheme modeled after known acoustic-phonetic features of clear speech will likely affect both target and competing speech, it is important to understand how speech recognition is affected when a competing speech signal…

  19. Orthographically sensitive treatment for dysprosody in children with childhood apraxia of speech using ReST intervention.

    Science.gov (United States)

    McCabe, Patricia; Macdonald-D'Silva, Anita G; van Rees, Lauren J; Ballard, Kirrie J; Arciuli, Joanne

    2014-04-01

    Impaired prosody is a core diagnostic feature of Childhood Apraxia of Speech (CAS) but there is limited evidence of effective prosodic intervention. This study reports the efficacy of the ReST intervention used in conjunction with bisyllabic pseudo word stimuli containing orthographic cues that are strongly associated with either strong-weak or weak-strong patterns of lexical stress. Using a single case AB design with one follow-up and replication, four children with CAS received treatment of four one-hour sessions per week for three weeks. Sessions contained 100 randomized trials of pseudo word treatment stimuli. Baseline measures were taken of treated and untreated behaviors; retention was measured at one day and four weeks post-treatment. Children's production of lexical stress improved from pre to post-treatment. Treatment effects and maintenance varied among participants. This study provides support for the treatment of prosodic deficits in CAS.

  20. Intra-urinoma Rendezvous Using a Transconduit Approach to Re-establish Ureteric Integrity

    International Nuclear Information System (INIS)

    Anderson, Hugh; Alyas, Faisal; Edwin, Patrick Joseph

    2005-01-01

    Ureteric discontinuity following injury has traditionally been treated surgically. With the advent of improved interventional instrumentation, it is possible to stent these lesions percutaneously, retrogradely, or, failing that, using a combined (rendezvous) technique. We describe an intra-urinoma rendezvous procedure combining a percutaneous antegrade and transconduit retrograde technique of stent insertion to successfully re-establish ureteric integrity, which was used following the failure of a percutaneous retrograde approach. We illustrate its usefulness as an alternative to surgery

  1. Processing changes when listening to foreign-accented speech

    Directory of Open Access Journals (Sweden)

    Carlos Romero-Rivas

    2015-03-01

    Full Text Available This study investigates the mechanisms responsible for fast changes in the processing of foreign-accented speech. Event-related brain potentials (ERPs) were obtained while native speakers of Spanish listened to native and foreign-accented speakers of Spanish. We observed a less positive P200 component for foreign-accented speech relative to native speech comprehension. This suggests that the extraction of spectral information and other important acoustic features was hampered during foreign-accented speech comprehension. However, the amplitude of the N400 component for foreign-accented speech comprehension decreased across the experiment, suggesting the use of a higher-level, lexical mechanism. Furthermore, during native speech comprehension, semantic violations in the critical words elicited an N400 effect followed by a late positivity, whereas during foreign-accented speech comprehension, semantic violations elicited only an N400 effect. Overall, our results suggest that, despite a lack of improvement in phonetic discrimination, native listeners experience changes at lexical-semantic levels of processing after brief exposure to foreign-accented speech. Moreover, these results suggest that lexical access, semantic integration and linguistic re-analysis processes are permeable to external factors, such as the accent of the speaker.

  2. Determination of priority areas for the re-establishment of forest cover, based on the use of geotechnologies

    Directory of Open Access Journals (Sweden)

    Nelson Wellausen Dias

    2012-12-01

    Full Text Available The determination of priority areas for the re-establishment of forest cover in watersheds is directly associated with the probability of effective success of restoration processes. However, considering the complexity of the analysis and the large amount of spatial data necessary to accomplish that purpose, state-of-the-art technological tools capable of performing multi-criteria analysis to support decision making are necessary. Thus, the current work, developed for an area of 476 km² corresponding to the Una river watershed in the municipal district of Taubaté, SP, used a multi-criteria analysis based on continuous classification and on Analytical Hierarchy Process (AHP) paired-comparison techniques, available in the GIS package SPRING (Georeferenced Information Processing System), to generate a map of priority areas for the re-establishment of forest cover in that watershed. Results revealed a large area (26.6% of the entire watershed) falling in the "Extreme Priority" class for forest cover re-establishment, which indicates the urgent need of environmental recovery of this basin, considering that it is used for the water supply of the city of Taubaté. Results from this research support decision making for resource optimization applied to priority areas in an operational way.
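
    The AHP step referred to above reduces to deriving priority weights from a pairwise-comparison matrix and checking its consistency. The sketch below shows the standard principal-eigenvector method with a purely hypothetical 3x3 matrix; the actual criteria and judgements used in SPRING for the Una watershed are not given in the record.

        # Minimal AHP sketch: priority weights and consistency ratio (illustrative only).
        import numpy as np

        def ahp_weights(pairwise):
            """Return (priority weights, consistency ratio) for a reciprocal comparison matrix."""
            A = np.asarray(pairwise, dtype=float)
            n = A.shape[0]
            eigvals, eigvecs = np.linalg.eig(A)
            k = np.argmax(eigvals.real)                   # principal eigenvalue
            w = np.abs(eigvecs[:, k].real)
            w /= w.sum()                                  # normalised priority weights
            ci = (eigvals[k].real - n) / (n - 1)          # consistency index
            ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.49) # Saaty's random index (fallback for large n)
            return w, ci / ri

        # Hypothetical criteria: slope, distance to drainage, land use.
        weights, cr = ahp_weights([[1,   3,   5],
                                   [1/3, 1,   2],
                                   [1/5, 1/2, 1]])
        print(weights.round(3), round(cr, 3))             # CR < 0.10 indicates acceptable consistency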

  3. The Re-Establishment of Desiccation Tolerance in Germinated Arabidopsis thaliana Seeds and Its Associated Transcriptome

    NARCIS (Netherlands)

    Maia de Oliveira, J.; Dekkers, S.J.W.; Provart, N.J.; Ligterink, W.; Hilhorst, H.W.M.

    2011-01-01

    The combination of robust physiological models with “omics” studies holds promise for the discovery of genes and pathways linked to how organisms deal with drying. Here we used a transcriptomics approach in combination with an in vivo physiological model of re-establishment of desiccation tolerance

  4. Fight for your breeding right: hierarchy re-establishment predicts aggression in a social queue.

    Science.gov (United States)

    Wong, Marian; Balshine, Sigal

    2011-04-23

    Social aggression is one of the most conspicuous features of animal societies, yet little is known about the causes of individual variation in aggression within social hierarchies. Recent theory suggests that when individuals form queues for breeding, variation in social aggression by non-breeding group members is related to their probability of inheriting breeding status. However, levels of aggression could also vary as a temporary response to changes in the hierarchy, with individuals becoming more aggressive as they ascend in rank, in order to re-establish dominance relationships. Using the group-living fish, Neolamprologus pulcher, we show that subordinates became more aggressive after they ascended in rank. Female ascenders exhibited more rapid increases in aggression than males, and the increased aggression was primarily directed towards group members of adjacent rather than non-adjacent rank, suggesting that social aggression was related to conflict over rank. Elevated aggression by ascenders was not sustained over time, there was no relationship between rank and aggression in stable groups, and aggression given by ascenders was not sex-biased. Together, these results suggest that the need to re-establish dominance relationships following rank ascension is an important determinant of variation in aggression in animal societies.

  5. Child speech, language and communication need re-examined in a public health context: a new direction for the speech and language therapy profession.

    Science.gov (United States)

    Law, James; Reilly, Sheena; Snow, Pamela C

    2013-01-01

    Historically, speech and language therapy services for children have been framed within a rehabilitative framework, with explicit assumptions made about providing therapy to individuals. While this is clearly important in many cases, we argue that this model needs revisiting for a number of reasons. First, our understanding of the nature of disability, and therefore communication disabilities, has changed over the past century. Second, there is an increasing understanding of the impact that the social gradient has on early communication difficulties. Finally, how these factors interact with one another and have an impact across the life course remains poorly understood. To describe the public health paradigm and explore its implications for speech and language therapy with children, we test the application of public health methodologies to speech and language therapy services by looking at four dimensions of service delivery: (1) the uptake of services and whether those children who need services receive them; (2) the development of universal prevention services in relation to social disadvantage; (3) the risk of over-interpreting co-morbidity from clinical samples; and (4) the overlap between communicative competence and mental health. It is concluded that there is a strong case for speech and language therapy services to be reconceptualized to respond to the needs of the whole population and according to socially determined needs, focusing on primary prevention. This is not to disregard individual need, but to highlight the needs of the population as a whole. Although the socio-political context differs between countries, we maintain that this is relevant wherever speech and language therapists have a responsibility for covering whole populations. Finally, we recommend that speech and language therapy services be conceptualized within the framework laid down in The Ottawa Charter for Health Promotion. © 2013 Royal College of Speech and Language

  6. Speech masking and cancelling and voice obscuration

    Science.gov (United States)

    Holzrichter, John F.

    2013-09-10

    A non-acoustic sensor is used to measure a user's speech and then broadcasts an obscuring acoustic signal diminishing the user's vocal acoustic output intensity and/or distorting the voice sounds making them unintelligible to persons nearby. The non-acoustic sensor is positioned proximate or contacting a user's neck or head skin tissue for sensing speech production information.

  7. Speech Intelligibility Advantages using an Acoustic Beamformer Display

    Science.gov (United States)

    Begault, Durand R.; Sunder, Kaushik; Godfroy, Martine; Otto, Peter

    2015-01-01

    A speech intelligibility test conforming to the Modified Rhyme Test of ANSI S3.2 "Method for Measuring the Intelligibility of Speech Over Communication Systems" was conducted using a prototype 12-channel acoustic beamformer system. The target speech material (signal) was identified against speech babble (noise), with calculated signal-noise ratios of 0, 5 and 10 dB. The signal was delivered at a fixed beam orientation of 135 deg (re 90 deg as the frontal direction of the array) and the noise at 135 deg (co-located) and 0 deg (separated). A significant improvement in intelligibility from 57% to 73% was found for spatial separation for the same signal-noise ratio (0 dB). Significant effects for improved intelligibility due to spatial separation were also found for higher signal-noise ratios (5 and 10 dB).

  8. Heterotrophy promotes the re-establishment of photosynthate translocation in a symbiotic coral after heat stress

    Science.gov (United States)

    Tremblay, Pascale; Gori, Andrea; Maguer, Jean François; Hoogenboom, Mia; Ferrier-Pagès, Christine

    2016-12-01

    Symbiotic scleractinian corals are particularly affected by climate change stress and respond by bleaching (losing their symbiotic dinoflagellate partners). Recently, the energetic status of corals has emerged as a particularly important factor that determines the corals’ vulnerability to heat stress. However, detailed studies of coral energetics that trace the flow of carbon from symbionts to host are still sparse. The present study thus investigates the impact of heat stress on the nutritional interactions between dinoflagellates and the coral Stylophora pistillata maintained under auto- and heterotrophy. First, we demonstrated that the percentage of autotrophic carbon retained in the symbionts was significantly higher during heat stress than under non-stressful conditions, in both fed and unfed colonies. This higher photosynthate retention in symbionts translated into lower rates of carbon translocation, which required the coral host to use tissue energy reserves to sustain its respiratory needs. As calcification rates were positively correlated with carbon translocation, a significant decrease in skeletal growth was observed during heat stress. This study also provides evidence that heterotrophic nutrient supply enhances the re-establishment of normal nutritional exchanges between the two symbiotic partners in the coral S. pistillata, but it did not mitigate the effects of temperature stress on coral calcification.

  9. 78 FR 49693 - Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services...

    Science.gov (United States)

    2013-08-15

    ...-Speech Services for Individuals with Hearing and Speech Disabilities, Report and Order (Order), document...] Speech-to-Speech and Internet Protocol (IP) Speech-to-Speech Telecommunications Relay Services; Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities...

  10. Establishment of the optimum two-dimensional electrophoresis system of ovine ovarian tissue.

    Science.gov (United States)

    Jia, J L; Zhang, L P; Wu, J P; Wang, J; Ding, Q

    2014-08-26

    Lambing performance is the most important economic trait in sheep and is regarded as a critical factor affecting productivity in the sheep industry, and the ovary plays a central role in this trait. To establish the optimum two-dimensional electrophoresis (2-DE) system for ovine ovarian tissue, the common protein extraction methods for animal tissue (trichloroacetic acid/acetone precipitation and direct schizolysis) were used to extract ovine ovarian protein, and 17-cm nonlinear immobilized pH 3-10 gradient strips were used for 2-DE. The sample handling, protein loading quantity, and isoelectric focusing (IEF) steps were manipulated and optimized in this study. The results indicate that the direct schizolysis III method, a 200-μg protein load, and IEF steps II (active hydration at 20°C for 14 h → 500 V for 1 h → 1000 V for 1 h → 1000-9000 V for 6 h → 80,000 V·h → 500 V for 24 h) are optimal for 2-DE analysis of ovine ovarian tissue. A 2-DE system for ovine ovarian tissue proteomics was therefore preliminarily established under the optimized conditions; meanwhile, the conditions identified herein could provide a reference for ovarian sample preparation and 2-DE using tissues from other animals.
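
    The optimized IEF program quoted above is easier to audit when written out as data. The sketch below is one possible encoding, under the assumption that the arrow-separated entries are (voltage, duration) steps and that "80,000 VH" denotes a volt-hour focusing target; the field names are invented for illustration.

```python
# One possible way to encode the reported IEF program "steps II" as data, so it
# can be logged or re-used; the step interpretation is an assumption on our part.
ief_steps_ii = [
    {"step": "active hydration", "temperature_c": 20, "duration_h": 14},
    {"step": "hold",             "voltage_v": 500,          "duration_h": 1},
    {"step": "hold",             "voltage_v": 1000,         "duration_h": 1},
    {"step": "gradient",         "voltage_v": (1000, 9000), "duration_h": 6},
    {"step": "focusing",         "volt_hours": 80000},
    {"step": "hold",             "voltage_v": 500,          "duration_h": 24},
]

total_programmed_hours = sum(s.get("duration_h", 0) for s in ief_steps_ii)
print(total_programmed_hours)   # 46 programmed hours, excluding the volt-hour step
```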

  11. English speech acquisition in 3- to 5-year-old children learning Russian and English.

    Science.gov (United States)

    Gildersleeve-Neumann, Christina E; Wright, Kira L

    2010-10-01

    English speech acquisition in Russian-English (RE) bilingual children was investigated, exploring the effects of Russian phonetic and phonological properties on English single-word productions. Russian has more complex consonants and clusters and a smaller vowel inventory than English. One hundred thirty-seven single-word samples were phonetically transcribed from 14 RE and 28 English-only (E) children, ages 3;3 (years;months) to 5;7. Language and age differences were compared descriptively for phonetic inventories. Multivariate analyses compared phoneme accuracy and error rates between the two language groups. RE children produced Russian-influenced phones in English, including palatalized consonants and trills, and demonstrated significantly higher rates of trill substitution, final devoicing, and vowel errors than E children, suggesting Russian language effects on English. RE and E children did not differ in their overall production complexity, with similar final consonant deletion and cluster reduction error rates, similar phonetic inventories by age, and similar levels of phonetic complexity. Both older language groups were more accurate than the younger language groups. We observed effects of Russian on English speech acquisition; however, there were similarities between the RE and E children that have not been reported in previous studies of speech acquisition in bilingual children. These findings underscore the importance of knowing the phonological properties of both languages of a bilingual child in assessment.

  12. The Role of Broca's Area in Speech Perception: Evidence from Aphasia Revisited

    Science.gov (United States)

    Hickok, Gregory; Costanzo, Maddalena; Capasso, Rita; Miceli, Gabriele

    2011-01-01

    Motor theories of speech perception have been re-vitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that…

  13. 76 FR 22924 - Re-Establishment of the National Space-Based Positioning, Navigation, and Timing (PNT) Advisory...

    Science.gov (United States)

    2011-04-25

    ... Government is necessary and in the public interest. Accordingly, NASA is re-establishing the National Space... advice on U.S. space-based PNT policy, planning, program management, and funding profiles in relation to... Advisory Board will function solely as an advisory body and will comply fully with the provisions of the...

  14. L. VAN BEETHOVEN IN SPACE OF CINEMATOGRAPH: THE EXPERIENCE OF RE-INTERPRETATION

    Directory of Open Access Journals (Sweden)

    Volkova Polina S.

    2015-01-01

    Full Text Available The sense of a musical work, as actualized by a film director, may either coincide with its established meaning (interpretation) or not (re-interpretation). The paper presents the experience of re-interpretation of Beethoven’s Ninth Symphony as it sounds in the films directed by Kubrick (A Clockwork Orange) and Tarkovsky (Nostalghia). This refers to a total rethinking of the classical sample, carried out within a cultural context shaped by the "past life" of the musical work. The film music is considered in reliance on the rhetorical canon as the trinity of Ethos, Logos, and Pathos. In Bakhtin's terminology, Logos and Ethos are identified at the level of the cognitive and ethical aspects of content; their co-existence creates Pathos as a unity that is "produced and perceived via art". Aware that in "the perception of a musical work, the intense deepening of the ethical moment is permissible" [Bakhtin], the author connects the specific feature of film music with the experience of re-expressing musical language as pictorial speech.

  15. Speech-to-Speech Relay Service

    Science.gov (United States)

    Consumer Guide: Speech-to-Speech Relay Service. Speech-to-Speech (STS) is one form of Telecommunications Relay Service (TRS). TRS is a service that allows persons with hearing and speech disabilities ...

  16. Block of glucocorticoid synthesis during re-activation inhibits extinction of an established fear memory.

    Science.gov (United States)

    Blundell, Jacqueline; Blaiss, Cory A; Lagace, Diane C; Eisch, Amelia J; Powell, Craig M

    2011-05-01

    The pharmacology of traumatic memory extinction has not been fully characterized despite its potential as a therapeutic target for established, acquired anxiety disorders, including post-traumatic stress disorder (PTSD). Here we examine the role of endogenous glucocorticoids in traumatic memory extinction. Male C57BL/6J mice were injected with corticosterone (10 mg/kg, i.p.) or metyrapone (50 mg/kg, s.c.) during re-activation of a contextual fear memory, and compared to vehicle groups (N=10-12 per group). To ensure that metyrapone was blocking corticosterone synthesis, we measured corticosterone levels following re-activation of a fear memory in metyrapone- and vehicle-treated animals. Corticosterone administration following extinction trials caused a long-lasting inhibition of the original fear memory trace. In contrast, blockade of corticosteroid synthesis with metyrapone prior to extinction trials enhanced retrieval and prevented extinction of context-dependent fear responses in mice. Further behavioral analysis suggested that the metyrapone enhancement of retrieval and prevention of extinction were not due to non-specific alterations in locomotor or anxiety-like behavior. In addition, the inhibition of extinction by metyrapone was rescued by exogenous administration of corticosterone following extinction trials. Finally, we confirmed that the rise in corticosterone during re-activation of a contextual fear memory was blocked by metyrapone. We demonstrate that extinction of a classical contextual fear memory is dependent on endogenous glucocorticoid synthesis during re-activation of a fear memory. Our data suggest that decreased glucocorticoids during fear memory re-activation may contribute to the inability to extinguish a fear memory, thus contributing to one of the core symptoms of PTSD. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. Establishment and function of tissue-resident innate lymphoid cells in the skin

    Directory of Open Access Journals (Sweden)

    Jie Yang

    2017-03-01

    Full Text Available Innate lymphoid cells (ILCs) are a newly classified family of immune cells of the lymphoid lineage. While they can be found in both lymphoid organs and non-lymphoid tissues, ILCs are preferentially enriched in barrier tissues such as the skin, intestine, and lung, where they could play important roles in the maintenance of tissue integrity and function and in protection against assaults by foreign agents. On the other hand, dysregulated activation of ILCs could contribute to tissue inflammatory diseases. In spite of recent progress towards understanding the roles of ILCs in health and disease, the mechanisms regulating the specific establishment, activation, and function of ILCs in barrier tissues are still poorly understood. We herein review the up-to-date understanding of the tissue-specific relevance of ILCs. In particular, we will focus on resident ILCs of the skin, the outermost barrier tissue, which is critical for protection against various foreign hazardous agents and for the maintenance of thermal and water balance. In addition, we will discuss remaining outstanding questions yet to be addressed.

  18. Establishment and function of tissue-resident innate lymphoid cells in the skin.

    Science.gov (United States)

    Yang, Jie; Zhao, Luming; Xu, Ming; Xiong, Na

    2017-07-01

    Innate lymphoid cells (ILCs) are a newly classified family of immune cells of the lymphoid lineage. While they can be found in both lymphoid organs and non-lymphoid tissues, ILCs are preferentially enriched in barrier tissues such as the skin, intestine, and lung, where they could play important roles in the maintenance of tissue integrity and function and in protection against assaults by foreign agents. On the other hand, dysregulated activation of ILCs could contribute to tissue inflammatory diseases. In spite of recent progress towards understanding the roles of ILCs in health and disease, the mechanisms regulating the specific establishment, activation, and function of ILCs in barrier tissues are still poorly understood. We herein review the up-to-date understanding of the tissue-specific relevance of ILCs. In particular, we will focus on resident ILCs of the skin, the outermost barrier tissue, which is critical for protection against various foreign hazardous agents and for the maintenance of thermal and water balance. In addition, we will discuss remaining outstanding questions yet to be addressed.

  19. Pulp and periodontal tissue repair - regeneration or tissue metaplasia after dental trauma. A review

    DEFF Research Database (Denmark)

    Andreasen, Jens O

    2012-01-01

    Healing subsequent to dental trauma is known to be very complex, a result explained by the variability of the types of dental trauma (six luxations, nine fracture types, and their combinations). On top of that, at least 16 different cellular systems get involved in the more severe trauma types, each of them with a different potential for healing with repair (i.e. re-establishment of tissue continuity without functional restitution), regeneration (where the injured or lost tissue is replaced with new tissue with identical tissue anatomy and function) and, finally, metaplasia (where a new type of tissue replaces the injured one). In this study, a review is given of the impact of trauma to various dental tissues such as alveolar bone, periodontal ligament, cementum, Hertwig's epithelial root sheath, and the pulp.

  20. Causality re-established.

    Science.gov (United States)

    D'Ariano, Giacomo Mauro

    2018-07-13

    Causality has never gained the status of a 'law' or 'principle' in physics. Some recent literature has even popularized the false idea that causality is a notion that should be banned from theory. Such misconception relies on an alleged universality of the reversibility of the laws of physics, based either on the determinism of classical theory, or on the multiverse interpretation of quantum theory, in both cases motivated by mere interpretational requirements for realism of the theory. Here, I will show that a properly defined unambiguous notion of causality is a theorem of quantum theory, which is also a falsifiable proposition of the theory. Such a notion of causality appeared in the literature within the framework of operational probabilistic theories. It is a genuinely theoretical notion, corresponding to establishing a definite partial order among events, in the same way as we do by using the future causal cone on Minkowski space. The notion of causality is logically completely independent of the misidentified concept of 'determinism', and, being a consequence of quantum theory, is ubiquitous in physics. In addition, as classical theory can be regarded as a restriction of quantum theory, causality holds also in the classical case, although the determinism of the theory trivializes it. I then conclude by arguing that causality naturally establishes an arrow of time. This implies that the scenario of the 'block Universe' and the connected 'past hypothesis' are incompatible with causality, and thus with quantum theory: they are both doomed to remain mere interpretations and, as such, are not falsifiable, similar to the hypothesis of 'super-determinism'.This article is part of a discussion meeting issue 'Foundations of quantum mechanics and their impact on contemporary society'. © 2018 The Author(s).

  1. Speech intelligibility of laryngectomized patients who use different types of vocal communication

    OpenAIRE

    Šehović Ivana; Petrović-Lazić Mirjana

    2016-01-01

    Modern methods of speech rehabilitation after a total laryngectomy have come to a great success by giving the patients a possibility to establish an intelligible and functional speech after an adequate rehabilitation treatment. The aim of this paper was to examine speech intelligibility of laryngectomized patients who use different types of vocal communication: esophageal speech, speech with tracheoesophageal prosthesis and speech with electronic laringeal prosthesis. The research was conduct...

  2. Establishing a basic speech repertoire without using NSOME: means, motive, and opportunity.

    Science.gov (United States)

    Davis, Barbara; Velleman, Shelley

    2008-11-01

    Children who are performing at a prelinguistic level of vocal communication present unique issues related to successful intervention relative to the general population of children with speech disorders. These children do not consistently use meaning-based vocalizations to communicate with those around them. General goals for this group of children include stimulating more mature vocalization types and connecting these vocalizations to meanings that can be used to communicate consistently with persons in their environment. We propose a means, motive, and opportunity conceptual framework for assessing and intervening with these children. This framework is centered on stimulation of meaningful vocalizations for functional communication. It is based on a broad body of literature describing the nature of early language development. In contrast, nonspeech oral motor exercise (NSOME) protocols require decontextualized practice of repetitive nonspeech movements that are not related to functional communication with respect to means, motive, or opportunity for communicating. Successful intervention with NSOME activities requires adoption of the concept that the child, operating at a prelinguistic communication level, will generalize from repetitive nonspeech movements that are not intended to communicate with anyone to speech-based movements that will be intelligible enough to allow responsiveness to the child's wants and needs from people in the environment. No evidence from the research literature on the course of speech and language acquisition suggests that this conceptualization is valid.

  3. Establishing Early Functional Perfusion and Structure in Tissue Engineered Cardiac Constructs.

    Science.gov (United States)

    Wang, Bo; Patnaik, Sourav S; Brazile, Bryn; Butler, J Ryan; Claude, Andrew; Zhang, Ge; Guan, Jianjun; Hong, Yi; Liao, Jun

    2015-01-01

    Myocardial infarction (MI) causes massive heart muscle death and remains a leading cause of death in the world. Cardiac tissue engineering aims to replace the infarcted tissues with functional engineered heart muscles or revitalize the infarcted heart by delivering cells, bioactive factors, and/or biomaterials. One major challenge of cardiac tissue engineering and regeneration is the establishment of functional perfusion and structure to achieve timely angiogenesis and effective vascularization, which are essential to the survival of thick implants and the integration of repaired tissue with host heart. In this paper, we review four major approaches to promoting angiogenesis and vascularization in cardiac tissue engineering and regeneration: delivery of pro-angiogenic factors/molecules, direct cell implantation/cell sheet grafting, fabrication of prevascularized cardiac constructs, and the use of bioreactors to promote angiogenesis and vascularization. We further provide a detailed review and discussion on the early perfusion design in nature-derived biomaterials, synthetic biodegradable polymers, tissue-derived acellular scaffolds/whole hearts, and hydrogel derived from extracellular matrix. A better understanding of the current approaches and their advantages, limitations, and hurdles could be useful for developing better materials for future clinical applications.

  4. The establishment of a network of European human research tissue banks.

    Science.gov (United States)

    Orr, Samantha; Alexandre, Eliane; Clark, Brain; Combes, Robert; Fels, Lueder M; Gray, Neil; Jönsson-Rylander, Ann-Cathrine; Helin, Heikki; Koistinen, Jukka; Oinonen, Teija; Richert, Lysiane; Ravid, Rivka; Salonen, Jarmo; Teesalu, Tambet; Thasler, Wolfgang; Trafford, Jacki; Van Der Valk, Jan; Von Versen, Rudiger; Weiss, Thomas; Womack, Chris; Ylikomi, Timo

    2002-01-01

    This is a report of a workshop on the establishment of human research tissue banking, held in Levi, Finland, on 21-24 March 2002. There were 21 participants from seven European countries, with representatives from academia, research tissue banks, and the biotech and pharmaceutical industries. The principal aim of the workshop was to find a way to progress the recommendations from ECVAM workshop 44 (ATLA 29, 125-134, 2001) and ECVAM workshop 32 (ATLA 26, 763-777, 1998). The workshop represented the first unofficial meeting of the European Network of Research Tissue Banks (ENRTB) steering group. It is expected that in the period preceding the next workshop the ENRTB steering group will co-ordinate the ethical, legislative and organisational aspects of research tissue banking. Key issues dealt with by the Levi workshop included the practical aspects of sharing expertise and experience across the different European members. Such collaboration between research tissue banks and the end users of such material ultimately seeks to enable shared access to human tissue for medical and pharmaco-toxicological research, while maintaining strict adherence to the differing legal and ethical requirements related to the use of human tissue in individual countries.

  5. Requirements for the evaluation of computational speech segregation systems

    DEFF Research Database (Denmark)

    May, Tobias; Dau, Torsten

    2014-01-01

    Recent studies on computational speech segregation reported improved speech intelligibility in noise when estimating and applying an ideal binary mask with supervised learning algorithms. However, an important requirement for such systems in technical applications is their robustness to acoustic...... associated with perceptual attributes in speech segregation. The results could help establish a framework for a systematic evaluation of future segregation systems....

  6. Acquirement and enhancement of remote speech signals

    Science.gov (United States)

    Lü, Tao; Guo, Jin; Zhang, He-yong; Yan, Chun-hui; Wang, Can-jin

    2017-07-01

    To address the challenges of non-cooperative and remote acoustic detection, an all-fiber laser Doppler vibrometer (LDV) is established. The all-fiber LDV system offers the advantages of smaller size, lightweight design and robust structure, so it is a better fit for remote speech detection. In order to improve the performance and efficiency of the LDV for long-range hearing, speech enhancement based on the optimally modified log-spectral amplitude (OM-LSA) algorithm is used. The experimental results show that comprehensible speech signals within a range of 150 m can be obtained by the proposed LDV. The signal-to-noise ratio (SNR) and mean opinion score (MOS) of the LDV speech signal can be increased by 100% and 27%, respectively, by using the speech enhancement technology. This all-fiber LDV, combined with the speech enhancement technology, can meet practical demands in engineering.
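
    OM-LSA is a spectral-gain method, and the following is a much-simplified, single-channel sketch in that spirit (a decision-directed a priori SNR estimate feeding a Wiener-like gain); it is not the full OM-LSA algorithm used in the study, and all parameter values are illustrative assumptions.

```python
# Simplified spectral-gain speech enhancement in the spirit of OM-LSA
# (NOT the full algorithm: no speech-presence probability, no MCRA noise tracking).
import numpy as np
from scipy.signal import stft, istft

def enhance(noisy: np.ndarray, fs: int, noise_frames: int = 10,
            alpha: float = 0.98, gain_floor: float = 0.1) -> np.ndarray:
    f, t, Y = stft(noisy, fs=fs, nperseg=512)                         # analysis STFT
    noise_psd = np.mean(np.abs(Y[:, :noise_frames]) ** 2, axis=1) + 1e-12  # crude initial noise estimate
    G_prev, Y_prev = np.ones_like(noise_psd), Y[:, 0]
    X = np.zeros_like(Y)
    for k in range(Y.shape[1]):
        gamma = np.abs(Y[:, k]) ** 2 / noise_psd                      # a posteriori SNR
        xi = alpha * (np.abs(G_prev * Y_prev) ** 2 / noise_psd) \
             + (1 - alpha) * np.maximum(gamma - 1.0, 0.0)             # decision-directed a priori SNR
        G = np.maximum(xi / (1.0 + xi), gain_floor)                   # Wiener-like gain with a floor
        X[:, k] = G * Y[:, k]
        G_prev, Y_prev = G, Y[:, k]
    _, enhanced = istft(X, fs=fs, nperseg=512)                        # synthesis
    return enhanced[: len(noisy)]

# Usage with a synthetic stand-in for a noisy LDV recording:
fs = 16000
noisy = np.random.default_rng(0).standard_normal(fs)
clean_estimate = enhance(noisy, fs)
```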

  7. Re-visiting the electrophysiology of language.

    Science.gov (United States)

    Obleser, Jonas

    2015-09-01

    This editorial accompanies a special issue of Brain and Language re-visiting old themes and new leads in the electrophysiology of language. The event-related potential (ERP) as a series of characteristic deflections ("components") over time and their distribution on the scalp has been exploited by speech and language researchers over decades to find support for diverse psycholinguistic models. Fortunately, methodological and statistical advances have allowed human neuroscience to move beyond some of the limitations imposed when looking at the ERP only. Most importantly, we currently witness a refined and refreshed look at "event-related" (in the literal sense) brain activity that relates itself more closely to the actual neurobiology of speech and language processes. It is this imminent change in handling and interpreting electrophysiological data of speech and language experiments that this special issue intends to capture. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Voice and Speech Quality Perception Assessment and Evaluation

    CERN Document Server

    Jekosch, Ute

    2005-01-01

    Foundations of Voice and Speech Quality Perception starts out with the fundamental question of: "How do listeners perceive voice and speech quality and how can these processes be modeled?" Any quantitative answers require measurements. This is natural for physical quantities but harder to imagine for perceptual measurands. This book approaches the problem by actually identifying major perceptual dimensions of voice and speech quality perception, defining units wherever possible and offering paradigms to position these dimensions into a structural skeleton of perceptual speech and voice quality. The emphasis is placed on voice and speech quality assessment of systems in artificial scenarios. Many scientific fields are involved. This book bridges the gap between two quite diverse fields, engineering and humanities, and establishes the new research area of Voice and Speech Quality Perception.

  9. Neurophysiological influence of musical training on speech perception.

    Science.gov (United States)

    Shahin, Antoine J

    2011-01-01

    Does musical training affect our perception of speech? For example, does learning to play a musical instrument modify the neural circuitry for auditory processing in a way that improves one's ability to perceive speech more clearly in noisy environments? If so, can speech perception in individuals with hearing loss (HL), who struggle in noisy situations, benefit from musical training? While music and speech exhibit some specialization in neural processing, there is evidence suggesting that skills acquired through musical training for specific acoustical processes may transfer to, and thereby improve, speech perception. The neurophysiological mechanisms underlying the influence of musical training on speech processing and the extent of this influence remains a rich area to be explored. A prerequisite for such transfer is the facilitation of greater neurophysiological overlap between speech and music processing following musical training. This review first establishes a neurophysiological link between musical training and speech perception, and subsequently provides further hypotheses on the neurophysiological implications of musical training on speech perception in adverse acoustical environments and in individuals with HL.

  10. SPEECH TACTICS IN MASS MEDIA DISCOURSE

    Directory of Open Access Journals (Sweden)

    Olena Kaptiurova

    2014-06-01

    Full Text Available The article deals with the basic speech tactics used in mass media discourse. It is argued that tactics such as establishing contact, terminating speech interaction, and yielding or retaining the initiative are obligatory in the communicative situation of a talk show. The language personalities of television talk-show anchors and the linguistic organisation of interviews are highlighted. The material is amply illustrated with relevant examples.

  11. An analysis of machine translation and speech synthesis in speech-to-speech translation system

    OpenAIRE

    Hashimoto, K.; Yamagishi, J.; Byrne, W.; King, S.; Tokuda, K.

    2011-01-01

    This paper provides an analysis of the impacts of machine translation and speech synthesis on speech-to-speech translation systems. The speech-to-speech translation system consists of three components: speech recognition, machine translation and speech synthesis. Many techniques for integration of speech recognition and machine translation have been proposed. However, speech synthesis has not yet been considered. Therefore, in this paper, we focus on machine translation and speech synthesis, ...

  12. Establishment of an animal model of mice with radiation- injured soft tissue blood vessels

    International Nuclear Information System (INIS)

    Wang Daiyou; Yu Dahai; Wu Jiaxiao; Wei Shanliang; Wen Yuming

    2004-01-01

    Objective: The aim of this study was to establish an animal model of mice with radiation-injured soft-tissue blood vessels. Methods: Forty male mice were irradiated with 30 Gy on the right leg. After the irradiation was finished, each of the 40 mice was examined with angiography, and the muscle tissues of both legs were examined with a vessel-staining assay and electron microscopy. Results: The number of vessels in the right leg was lower than in the left leg; the microvessel density, average diameter and average sectional area in the right leg were all lower than those in the left; and the configuration and ultrastructure of the vessels also differed between the two legs. Conclusion: In this study, the authors successfully established an animal model of mice with radiation-injured soft-tissue blood vessels.

  13. SPEECH ACT ANALYSIS OF IGBO UTTERANCES IN FUNERAL ...

    African Journals Online (AJOL)

    Dean SPGS NAU

    In other words, a speech act is a ... relationship with that one single person and to share those memories ... identifies four conditions or rules for the effective performance of a ... In other words, the rules establish a system for the ... shaped by the interplay of particular speech acts and non-verbal cues. ...

  14. A Framework for Speech Enhancement with Ad Hoc Microphone Arrays

    DEFF Research Database (Denmark)

    Tavakoli, Vincent Mohammad; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2016-01-01

    Speech enhancement is vital for improved listening practices. Ad hoc microphone arrays are promising assets for this purpose. Most well-established enhancement techniques with conventional arrays can be adapted to ad hoc scenarios. Despite recent efforts to introduce various ad hoc speech enhancement apparatus, a common framework for integrating conventional methods into this new scheme is still missing. This paper establishes such an abstraction based on inter- and intra-sub-array speech coherencies. Along with measures of signal quality at the input of the sub-arrays, a measure of coherency is proposed both for sub-array selection in local enhancement approaches and for selecting a proper global reference when more than one sub-array is used. Proposed methods within this framework are evaluated with regard to quantitative and qualitative measures, including array gains, the speech
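
    As a hedged illustration of the kind of coherency measure such a framework could use for sub-array selection, the sketch below averages the magnitude-squared coherence between each sub-array signal and a chosen reference and picks the most coherent sub-array; the signals, sampling rate and window length are assumptions, not the paper's setup.

```python
# Hypothetical coherence-based sub-array selection: choose the sub-array whose
# signal is most coherent with a reference channel.
import numpy as np
from scipy.signal import coherence

def select_subarray(reference: np.ndarray, subarray_signals: list, fs: int) -> int:
    """Return the index of the sub-array with the highest mean magnitude-squared
    coherence with the reference signal."""
    scores = []
    for sig in subarray_signals:
        f, Cxy = coherence(reference, sig, fs=fs, nperseg=1024)
        scores.append(np.mean(Cxy))            # average coherence across frequency
    return int(np.argmax(scores))

# Example with synthetic data: sub-array 0 shares content with the reference.
rng = np.random.default_rng(1)
ref = rng.standard_normal(16000)
subs = [ref + 0.3 * rng.standard_normal(16000), rng.standard_normal(16000)]
print(select_subarray(ref, subs, fs=16000))     # expected: 0
```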

  15. Hateful Help--A Practical Look at the Issue of Hate Speech.

    Science.gov (United States)

    Shelton, Michael W.

    Many college and university administrators have responded to the recent increase in hateful incidents on campus by putting hate speech codes into place. The establishment of speech codes has sparked a heated debate over the impact that such codes have upon free speech and First Amendment values. Some commentators have suggested that viewing hate…

  16. A homology sound-based algorithm for speech signal interference

    Science.gov (United States)

    Jiang, Yi-jiao; Chen, Hou-jin; Li, Ju-peng; Zhang, Zhan-song

    2015-12-01

    Aiming at secure analog speech communication, a homology sound-based algorithm for speech signal interference is proposed in this paper. We first split the speech signal into phonetic fragments by a short-term energy method and establish an interference-noise cache library from the phonetic fragments. We then implement the homology sound interference by mixing randomly selected interferential fragments with the original speech in real time. The computer simulation results indicate that the interference produced by this algorithm has the advantages of real-time operation, randomness, and high correlation with the original signal, compared with traditional noise interference methods such as white-noise interference. After further study, the proposed algorithm may be readily used in secure speech communication.
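
    A minimal sketch of the described scheme follows, assuming illustrative frame sizes and thresholds: speech is split into fragments wherever the short-term energy exceeds a threshold, the fragments are cached, and randomly chosen cached fragments are mixed back over the signal as homologous interference. It is a structural illustration, not the authors' implementation.

```python
# Hypothetical sketch: short-term-energy segmentation plus random fragment mixing.
import numpy as np

def short_term_energy_fragments(x: np.ndarray, frame: int = 400, thresh: float = 0.01):
    """Return a list of high-energy (phonetic) fragments of x."""
    fragments, current = [], []
    for start in range(0, len(x) - frame, frame):
        seg = x[start:start + frame]
        if np.mean(seg ** 2) > thresh:           # short-term energy above threshold
            current.append(seg)
        elif current:                            # energy dropped: close the fragment
            fragments.append(np.concatenate(current))
            current = []
    if current:
        fragments.append(np.concatenate(current))
    return fragments

def homology_interference(x: np.ndarray, fragments: list, level: float = 0.8,
                          rng=np.random.default_rng(0)) -> np.ndarray:
    """Mix randomly selected cached fragments into x, block by block."""
    out = x.copy()
    pos = 0
    while pos < len(out) and fragments:
        frag = fragments[rng.integers(len(fragments))]
        end = min(pos + len(frag), len(out))
        out[pos:end] += level * frag[: end - pos]
        pos = end
    return out

# Usage on a synthetic stand-in for a speech signal:
speech = np.random.default_rng(1).standard_normal(16000) * 0.2
jammed = homology_interference(speech, short_term_energy_fragments(speech))
```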

  17. EVOLUTION OF SPEECH: A NEW HYPOTHESIS

    Directory of Open Access Journals (Sweden)

    Shishir

    2016-03-01

    Full Text Available BACKGROUND The first and foremost characteristic of speech is that it is human. Speech is a feature that has evolved in humans and is by far the most powerful form of communication in the kingdom Animalia. Today, humans have established themselves as an alpha species, and the evolution of speech and language has made this possible. But how is speech possible? What anatomical changes have made it possible for us to speak? A sincere effort has been made in this paper to establish a possible anatomical answer to this riddle. METHODS Prototype cranial skeletons of all the major classes of the phylum Vertebrata were studied. The materials were studied in museums in Wayanad and Karwar and in the Museum of Natural History, Imphal. The mammalian skeleton was studied in the Department of Anatomy, K. S. Hegde Medical Academy, Mangalore. RESULTS The curve formed in the base of the skull by the flexion of the splanchnocranium with the neurocranium holds the key to how humans became able to speak. CONCLUSION This may not be the only factor that participated in the evolution of speech; the brain also had to evolve, and indeed the occipital lobes are more prominent in humans than in lower mammals. Although not the only criterion, it is one of the most important changes that occurred in the course of evolution and enabled us to speak. This small space at the base of the brain is the difference that made us the dominant alpha species.

  18. Neural Entrainment to Speech Modulates Speech Intelligibility

    NARCIS (Netherlands)

    Riecke, Lars; Formisano, Elia; Sorger, Bettina; Baskent, Deniz; Gaudrain, Etienne

    2018-01-01

    Speech is crucial for communication in everyday life. Speech-brain entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is a ubiquitous element of current theories of speech processing. Associations between speech-brain entrainment and

  19. Sensorimotor influences on speech perception in infancy.

    Science.gov (United States)

    Bruderer, Alison G; Danielson, D Kyle; Kandhadai, Padmapriya; Werker, Janet F

    2015-11-03

    The influence of speech production on speech perception is well established in adults. However, because adults have a long history of both perceiving and producing speech, the extent to which the perception-production linkage is due to experience is unknown. We addressed this issue by asking whether articulatory configurations can influence infants' speech perception performance. To eliminate influences from specific linguistic experience, we studied preverbal, 6-mo-old infants and tested the discrimination of a nonnative, and hence never-before-experienced, speech sound distinction. In three experimental studies, we used teething toys to control the position and movement of the tongue tip while the infants listened to the speech sounds. Using ultrasound imaging technology, we verified that the teething toys consistently and effectively constrained the movement and positioning of infants' tongues. With a looking-time procedure, we found that temporarily restraining infants' articulators impeded their discrimination of a nonnative consonant contrast but only when the relevant articulator was selectively restrained to prevent the movements associated with producing those sounds. Our results provide striking evidence that even before infants speak their first words and without specific listening experience, sensorimotor information from the articulators influences speech perception. These results transform theories of speech perception by suggesting that even at the initial stages of development, oral-motor movements influence speech sound discrimination. Moreover, an experimentally induced "impairment" in articulator movement can compromise speech perception performance, raising the question of whether long-term oral-motor impairments may impact perceptual development.

  20. Speech cues contribute to audiovisual spatial integration.

    Directory of Open Access Journals (Sweden)

    Christopher W Bishop

    Full Text Available Speech is the most important form of human communication but ambient sounds and competing talkers often degrade its acoustics. Fortunately the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech-cues interact with audiovisual spatial integration mechanisms. Here, we combine two well established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech-cues can impede integration in space. This suggests a direct but asymmetrical influence between ventral 'what' and dorsal 'where' pathways.

  1. ON INTEGRATED COURSE “SOCIAL AND SPEECH COMMUNICATIONS” FOR STUDENTS OF ART HIGHER EDUCATIONAL ESTABLISHMENT

    Directory of Open Access Journals (Sweden)

    Elena Nicolaevna Klemenova

    2013-11-01

    Full Text Available The article describes the experience of teaching the course “Social and Speech Communication”. As a result of the training, students are expected to master an arsenal of means for effective communication, grounded in linguistic communication and its bearer, the language personality; to gain knowledge of the complex processes of information exchange; to discover the psychological peculiarities of verbal and non-verbal communication; and to learn how to communicate in order to solve professional and personal problems. Fluent command of all kinds of speech activity, correct and intelligent communication in various spheres and settings, and linguistic analysis of speech events, including from the point of view of their aesthetic value, represent the unity of the systemic and individual approaches in the humanitarian training of future architects, designers and managers. DOI: http://dx.doi.org/10.12731/2218-7405-2013-7-43

  2. Speech networks at rest and in action: interactions between functional brain networks controlling speech production

    Science.gov (United States)

    Fuertinger, Stefan

    2015-01-01

    Speech production is one of the most complex human behaviors. Although brain activation during speaking has been well investigated, our understanding of interactions between the brain regions and neural networks remains scarce. We combined seed-based interregional correlation analysis with graph theoretical analysis of functional MRI data during the resting state and sentence production in healthy subjects to investigate the interface and topology of functional networks originating from the key brain regions controlling speech, i.e., the laryngeal/orofacial motor cortex, inferior frontal and superior temporal gyri, supplementary motor area, cingulate cortex, putamen, and thalamus. During both resting and speaking, the interactions between these networks were bilaterally distributed and centered on the sensorimotor brain regions. However, speech production preferentially recruited the inferior parietal lobule (IPL) and cerebellum into the large-scale network, suggesting the importance of these regions in facilitation of the transition from the resting state to speaking. Furthermore, the cerebellum (lobule VI) was the most prominent region showing functional influences on speech-network integration and segregation. Although networks were bilaterally distributed, interregional connectivity during speaking was stronger in the left vs. right hemisphere, which may have underlined a more homogeneous overlap between the examined networks in the left hemisphere. Among these, the laryngeal motor cortex (LMC) established a core network that fully overlapped with all other speech-related networks, determining the extent of network interactions. Our data demonstrate complex interactions of large-scale brain networks controlling speech production and point to the critical role of the LMC, IPL, and cerebellum in the formation of speech production network. PMID:25673742
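
    As a hedged sketch of the general analysis style described above (seed-based interregional correlation followed by graph-theoretical measures), the code below correlates synthetic region time courses, thresholds the matrix into a graph, and reads off degree and clustering for a seed node; the region count, threshold and seed index are assumptions for illustration, not the study's pipeline.

```python
# Hypothetical seed-based correlation + graph-metric sketch on synthetic data.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_regions, n_timepoints = 20, 240
ts = rng.standard_normal((n_regions, n_timepoints))    # region x time BOLD-like data

corr = np.corrcoef(ts)                                  # interregional correlation matrix
seed = 0                                                # e.g., a motor-cortex seed (assumed index)
seed_map = corr[seed]                                   # seed-based connectivity profile

adjacency = (np.abs(corr) > 0.3) & ~np.eye(n_regions, dtype=bool)  # threshold, drop self-loops
G = nx.from_numpy_array(adjacency.astype(int))

degree = dict(G.degree())                               # simple integration proxy
clustering = nx.clustering(G)                           # simple segregation proxy
print(degree[seed], clustering[seed])
```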

  3. Corollary discharge provides the sensory content of inner speech.

    Science.gov (United States)

    Scott, Mark

    2013-09-01

    Inner speech is one of the most common, but least investigated, mental activities humans perform. It is an internal copy of one's external voice and so is similar to a well-established component of motor control: corollary discharge. Corollary discharge is a prediction of the sound of one's voice generated by the motor system. This prediction is normally used to filter self-caused sounds from perception, which segregates them from externally caused sounds and prevents the sensory confusion that would otherwise result. The similarity between inner speech and corollary discharge motivates the theory, tested here, that corollary discharge provides the sensory content of inner speech. The results reported here show that inner speech attenuates the impact of external sounds. This attenuation was measured using a context effect (an influence of contextual speech sounds on the perception of subsequent speech sounds), which weakens in the presence of speech imagery that matches the context sound. Results from a control experiment demonstrated this weakening in external speech as well. Such sensory attenuation is a hallmark of corollary discharge.

  4. Free Speech Yearbook 1979.

    Science.gov (United States)

    Kane, Peter E., Ed.

    The seven articles in this collection deal with theoretical and practical freedom of speech issues. Topics covered are: the United States Supreme Court, motion picture censorship, and the color line; judicial decision making; the established scientific community's suppression of the ideas of Immanuel Velikovsky; the problems of avant-garde jazz,…

  5. The impact of language co-activation on L1 and L2 speech fluency

    NARCIS (Netherlands)

    Bergmann, Christopher; Sprenger, Simone A.; Schmid, Monika S.

    2015-01-01

    Fluent speech depends on the availability of well-established linguistic knowledge and routines for speech planning and articulation. A lack of speech fluency in late second-language (L2) learners may point to a deficiency of these representations, due to incomplete acquisition. Experiments on

  6. Inner Speech's Relationship With Overt Speech in Poststroke Aphasia.

    Science.gov (United States)

    Stark, Brielle C; Geva, Sharon; Warburton, Elizabeth A

    2017-09-18

    Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech in aphasia with selected measures of language and cognition. Thirty-eight persons with chronic aphasia (27 men, 11 women; average age 64.53 ± 13.29 years, time since stroke 8-111 months) were classified as having relatively preserved inner and overt speech (n = 21), relatively preserved inner speech with poor overt speech (n = 8), or not classified due to insufficient measurements of inner and/or overt speech (n = 9). Inner speech scores (by group) were correlated with selected measures of language and cognition from the Comprehensive Aphasia Test (Swinburn, Porter, & Al, 2004). The group with poor overt speech showed a significant relationship of inner speech with overt naming (r = .95, p speech and language and cognition factors were not significant for the group with relatively good overt speech. As in previous research, we show that relatively preserved inner speech is found alongside otherwise severe production deficits in PWA. PWA with poor overt speech may rely more on preserved inner speech for overt picture naming (perhaps due to shared resources with verbal working memory) and for written picture description (perhaps due to reliance on inner speech due to perceived task difficulty). Assessments of inner speech may be useful as a standard component of aphasia screening, and therapy focused on improving and using inner speech may prove clinically worthwhile. https://doi.org/10.23641/asha.5303542.

  7. Musician advantage for speech-on-speech perception

    NARCIS (Netherlands)

    Başkent, Deniz; Gaudrain, Etienne

    Evidence for transfer of musical training to better perception of speech in noise has been mixed. Unlike speech-in-noise, speech-on-speech perception utilizes many of the skills that musical training improves, such as better pitch perception and stream segregation, as well as use of higher-level

  8. Speech networks at rest and in action: interactions between functional brain networks controlling speech production.

    Science.gov (United States)

    Simonyan, Kristina; Fuertinger, Stefan

    2015-04-01

    Speech production is one of the most complex human behaviors. Although brain activation during speaking has been well investigated, our understanding of interactions between the brain regions and neural networks remains scarce. We combined seed-based interregional correlation analysis with graph theoretical analysis of functional MRI data during the resting state and sentence production in healthy subjects to investigate the interface and topology of functional networks originating from the key brain regions controlling speech, i.e., the laryngeal/orofacial motor cortex, inferior frontal and superior temporal gyri, supplementary motor area, cingulate cortex, putamen, and thalamus. During both resting and speaking, the interactions between these networks were bilaterally distributed and centered on the sensorimotor brain regions. However, speech production preferentially recruited the inferior parietal lobule (IPL) and cerebellum into the large-scale network, suggesting the importance of these regions in facilitation of the transition from the resting state to speaking. Furthermore, the cerebellum (lobule VI) was the most prominent region showing functional influences on speech-network integration and segregation. Although networks were bilaterally distributed, interregional connectivity during speaking was stronger in the left vs. right hemisphere, which may have underlined a more homogeneous overlap between the examined networks in the left hemisphere. Among these, the laryngeal motor cortex (LMC) established a core network that fully overlapped with all other speech-related networks, determining the extent of network interactions. Our data demonstrate complex interactions of large-scale brain networks controlling speech production and point to the critical role of the LMC, IPL, and cerebellum in the formation of speech production network. Copyright © 2015 the American Physiological Society.

  9. Re-establishment of the air kerma and ambient dose equivalent standards for the BIPM protection-level 60Co beam

    International Nuclear Information System (INIS)

    Kessler, C.; Roger, P.

    2005-07-01

    The air kerma and ambient dose equivalent standards for the protection-level 60Co beam have been re-established following the repositioning of the irradiator and modifications to the beam. Details concerning the standards and the new uncertainty budgets are described in this report with their implications for dosimetry comparisons and calibrations. (authors)

  10. Re-generation of tissue about an animal-based scaffold: AMS studies of the fate of the scaffold

    Energy Technology Data Exchange (ETDEWEB)

    Rickey, Frank A. E-mail: far@physics.purdue.edu; Elmore, David; Hillegonds, Darren; Badylak, Stephen; Record, Rae; Simmons-Byrd, Abby

    2000-10-01

    Small intestinal submucosa (SIS) is an unusual tissue, which shows great promise for the repair of damaged tissues in humans. When the SIS is used as a surgical implant, the porcine-derived material is not rejected by the host immune system, and in fact stimulates the constructive re-modeling of damaged tissue. In dogs, these SIS scaffolds have been used to grow new arteries, tendons, and urinary bladders. Moreover, the SIS scaffold tissue seems to disappear from the implant region after a few months. The fate of this SIS tissue is of considerable importance if it is to be used in human tissue repair. SIS is obtained from pigs. We have labeled the SIS in several pigs by intravenous administration of 14C-enriched proline from the age of three weeks until they reached market weight. The prepared SIS was then implanted in dogs as scaffolds for urinary bladder patches. During the remaining life of each dog, blood, urine and feces samples were collected on a regular schedule. AMS analyses of these specimens were performed to measure the elimination rate of the SIS. At different intervals, the dogs were sacrificed. Tissue samples were analyzed by AMS to determine the whole-body distribution of the labeled SIS.

  11. INTEGRATING MACHINE TRANSLATION AND SPEECH SYNTHESIS COMPONENT FOR ENGLISH TO DRAVIDIAN LANGUAGE SPEECH TO SPEECH TRANSLATION SYSTEM

    Directory of Open Access Journals (Sweden)

    J. SANGEETHA

    2015-02-01

    Full Text Available This paper provides an interface between the machine translation and speech synthesis components of an English-to-Tamil speech-to-speech translation system. The speech translation system consists of three modules: automatic speech recognition, machine translation and text-to-speech synthesis. Many procedures for the integration of speech recognition and machine translation have been proposed, but the speech synthesis component has not yet been considered. In this paper, we therefore focus on the integration of machine translation and speech synthesis, and report a subjective evaluation investigating the impact of the speech synthesis, the machine translation, and the integration of the machine translation and speech synthesis components. We implement a hybrid machine translation system (a combination of rule-based and statistical machine translation) and a concatenative syllable-based speech synthesis technique. In order to retain the naturalness and intelligibility of the synthesized speech, Auto Associative Neural Network (AANN) prosody prediction is used in this work. The results of this system investigation demonstrate that the naturalness and intelligibility of the synthesized speech are strongly influenced by the fluency and correctness of the translated text.
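
    The three-stage architecture (speech recognition, machine translation, speech synthesis) can be summarized as a simple composition of components. The sketch below is a hedged structural illustration only: the class and the placeholder lambdas are invented, and they stand in for, rather than implement, the hybrid MT and AANN-based synthesis described above.

```python
# Structural sketch of an ASR -> MT -> TTS pipeline; all components are placeholders.
from typing import Callable

class SpeechToSpeechTranslator:
    def __init__(self,
                 recognize: Callable[[bytes], str],     # audio -> source-language text
                 translate: Callable[[str], str],       # source text -> target-language text
                 synthesize: Callable[[str], bytes]):   # target text -> audio
        self.recognize = recognize
        self.translate = translate
        self.synthesize = synthesize

    def run(self, source_audio: bytes) -> bytes:
        english_text = self.recognize(source_audio)     # ASR stage
        tamil_text = self.translate(english_text)       # MT stage (hybrid rule/statistical in the paper)
        return self.synthesize(tamil_text)              # TTS stage (syllable-based concatenative in the paper)

# Example wiring with dummy components:
s2s = SpeechToSpeechTranslator(
    recognize=lambda audio: "hello",
    translate=lambda text: "வணக்கம்",
    synthesize=lambda text: text.encode("utf-8"),
)
print(s2s.run(b"\x00\x01"))
```

    The same wiring accepts any concrete recognizer, translator or synthesizer that honours the three call signatures, which is what makes the integration point between the last two stages easy to isolate and evaluate.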

  12. Delineation of a Re-establishing Drainage Network Using SPOT and Landsat Images

    Science.gov (United States)

    Bailey, J. E.; Self, S.; Mouginis-Mark, P. J.

    2008-12-01

    The 1991 eruption of Mt. Pinatubo, The Philippines, provided a unique opportunity to study the effects of a large eruption on the landscape, in part because it took place after the advent of regular satellite-based observations. The eruption formed one large (>100 km2) ignimbrite sheet, with over 70% of the total deposit emplaced in three primary drainage basins to the west of the volcano. High-resolution (20 m/pixel) satellite images showing the western drainage basins and surrounding region both before and after the eruption were used to observe the re-establishment and evolution of drainage networks on the newly emplaced ignimbrite sheet. Changes in the drainage networks were delineated from a time series of SPOT (Satellite Pour l'Observation de la Terre) and Landsat multi-spectral satellite images, and the analysis was supplemented by ground-based observations. The satellite images showed that the blueprints for the new drainage systems were established early (within days of the eruption) and at a large scale followed the pre-eruption pattern. However, the images also illustrated the ephemeral nature of many channels due to the influence of secondary pyroclastic flows, lahar-dammed lake breakouts, stream piracy and shifts due to erosion.

  13. The functional role of the tonsils in speech.

    Science.gov (United States)

    Finkelstein, Y; Nachmani, A; Ophir, D

    1994-08-01

    To present illustrative cases showing various tonsillar influences on speech and to present a clinical method for patient evaluation establishing concepts of management and a rational therapeutic approach. The cases were selected from a group of approximately 1000 patients referred to the clinic because of suspected palatal diseases. Complete velopharyngeal assessment was made, including otolaryngologic, speech, and hearing examinations, polysomnography, nasendoscopy, multiview videofluoroscopy, and cephalometry. New observations further elucidate the intimate relation between the tonsils and the velopharyngeal valve. The potential influence of the tonsils on the velopharyngeal valve mechanism, in hindering or assisting speech, is described. In selected cases, the decision to perform tonsillectomy depends on its potential effect on speech. The combination of nasendoscopic and multiview videofluoroscopic studies of the mechanical properties of the tonsils during speech is required for patients who present with velopharyngeal insufficiency in whom tonsillar hypertrophy is found. These studies are also required in patients with palatal anomalies who are candidates for tonsillectomy.

  14. The effect of audiovisual and binaural listening on the acceptable noise level (ANL): establishing an ANL conceptual model.

    Science.gov (United States)

    Wu, Yu-Hsiang; Stangl, Elizabeth; Pang, Carol; Zhang, Xuyang

    2014-02-01

    intelligibility and loudness hypotheses. The results of the AV and AO conditions appeared to support the intelligibility hypothesis due to the significant correlation between visual benefit in ANL (AV re: AO ANL) and (1) visual benefit in CST performance (AV re: AO CST) and (2) lipreading skill. The results of the N(o)S(o), NπS(o), and N(u)S(o) conditions negated the intelligibility hypothesis because binaural processing benefit (NπS(o) re: N(o)S(o), and N(u)S(o) re: N(o)S(o)) in ANL was not correlated with that in HINT performance. Instead, the results somewhat supported the loudness hypothesis, because the observed pattern of ANL results across the three conditions (N(o)S(o) ≈ NπS(o) ≈ N(u)S(o) ANL) was more consistent with what was predicted by the loudness hypothesis. The results of the binaural and monaural conditions supported neither hypothesis because (1) binaural benefit (binaural re: monaural) in ANL was not correlated with that in speech recognition performance, and (2) the pattern of ANL results across conditions was not in line with what would be predicted by binaural loudness summation research (binaural ≥ monaural ANL). The study suggests that listeners may use multiple acoustic features to make ANL judgments. The binaural/monaural results showing that neither hypothesis was supported further indicate that factors other than speech intelligibility and loudness, such as psychological factors, may affect ANL. The weightings of different acoustic features in ANL judgments may vary widely across individuals and listening conditions. American Academy of Audiology.

  15. Biodistribution and dosimetric evaluation of 186Re hydroxy ethylen diphosphate

    International Nuclear Information System (INIS)

    Noto, M.G.; Manzini, Alberto

    1987-01-01

    The pharmacokinetics and the radiation dose absorbed in different body tissues following the administration of 186Re HEDP (hydroxyethylendiphosphate) were studied. The radiation dose in the standard man was established as 5.5 and 25 rad/mCi for red marrow, and for red marrow and bone, respectively; the radiation dose in metastases would be about 125 rad/mCi. It is concluded that this radiopharmaceutical is suitable for the palliative treatment of pain in the mentioned pathology. (M.E.L.) [es]

  16. The establishment of a national tissue bank for inflammatory bowel disease research in Canada.

    Science.gov (United States)

    Collins, Stephen M; McHugh, Kevin; Croitoru, Ken; Howorth, Michael

    2003-02-01

    The Crohn's and Colitis Foundation of Canada (CCFC) has established a national bank for tissue, serum and blood from patients with inflammatory bowel disease (IBD). Investigators from across the country submit material to the bank together with clinical data. Investigators may access their own patient information from the bank for their own study purposes, but the distribution of tissue is restricted to specific CCFC-funded projects. Currently, tissues are being collected from newly diagnosed, untreated IBD patients to support a recent initiative aimed at characterizing microbes in colonic and ileal biopsies from such patients. In the future, criteria for the submission of tissue will be tailored to specific research questions. This bank is believed to be the first national bank of its kind dedicated to research in Crohn's disease and ulcerative colitis.

  17. Speech endpoint detection with non-language speech sounds for generic speech processing applications

    Science.gov (United States)

    McClain, Matthew; Romanowski, Brian

    2009-05-01

    Non-language speech sounds (NLSS) are sounds produced by humans that do not carry linguistic information. Examples of these sounds are coughs, clicks, breaths, and filled pauses such as "uh" and "um" in English. NLSS are prominent in conversational speech, but can be a significant source of errors in speech processing applications. Traditionally, these sounds are ignored by speech endpoint detection algorithms, where speech regions are identified in the audio signal prior to processing. The ability to filter NLSS as a pre-processing step can significantly enhance the performance of many speech processing applications, such as speaker identification, language identification, and automatic speech recognition. In order to be used in all such applications, NLSS detection must be performed without the use of language models that provide knowledge of the phonology and lexical structure of speech. This is especially relevant to situations where the languages used in the audio are not known a priori. We present the results of preliminary experiments using data from American and British English speakers, in which segments of audio are classified as language speech sounds (LSS) or NLSS using a set of acoustic features designed for language-agnostic NLSS detection and a hidden Markov model (HMM) to model speech generation. The results of these experiments indicate that the features and model used are capable of detecting certain types of NLSS, such as breaths and clicks, while detection of other types of NLSS, such as filled pauses, will require further research.
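
    A minimal sketch of the general approach, not the authors' actual features or model topology (the MFCC features, model sizes, and file names below are illustrative assumptions): train one Gaussian HMM on frames from language speech and one on frames from non-language speech sounds, then label a new segment by comparing log-likelihoods.

        # Sketch: likelihood-ratio classification of LSS vs. NLSS with Gaussian HMMs.
        # Feature choice, model sizes, and training files are illustrative assumptions.
        import numpy as np
        import librosa
        from hmmlearn import hmm

        def mfcc_frames(path, sr=16000, n_mfcc=13):
            """Return an (n_frames, n_mfcc) feature matrix for one audio file."""
            y, sr = librosa.load(path, sr=sr)
            return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

        def train_hmm(feature_list, n_states=3):
            """Fit one HMM on the concatenated frames of several segments."""
            X = np.vstack(feature_list)
            lengths = [len(f) for f in feature_list]
            model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            model.fit(X, lengths)
            return model

        # Placeholder lists of labelled training segments (assumed to exist on disk).
        lss_files = ["lss_001.wav", "lss_002.wav"]
        nlss_files = ["breath_001.wav", "click_001.wav"]

        lss_model = train_hmm([mfcc_frames(f) for f in lss_files])
        nlss_model = train_hmm([mfcc_frames(f) for f in nlss_files])

        def classify_segment(path):
            X = mfcc_frames(path)
            return "LSS" if lss_model.score(X) > nlss_model.score(X) else "NLSS"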

  18. The Functional Connectome of Speech Control.

    Directory of Open Access Journals (Sweden)

    Stefan Fuertinger

    2015-07-01

    forged the formation of the functional speech connectome. In addition, the observed capacity of the primary sensorimotor cortex to exhibit operational heterogeneity challenged the established concept of unimodality of this region.

  19. A Hybrid Acoustic and Pronunciation Model Adaptation Approach for Non-native Speech Recognition

    Science.gov (United States)

    Oh, Yoo Rhee; Kim, Hong Kook

    In this paper, we propose a hybrid model adaptation approach in which pronunciation and acoustic models are adapted by incorporating the pronunciation and acoustic variabilities of non-native speech in order to improve the performance of non-native automatic speech recognition (ASR). Specifically, the proposed hybrid model adaptation can be performed at either the state-tying or triphone-modeling level, depending on the level at which acoustic model adaptation is performed. In both methods, we first analyze the pronunciation variant rules of non-native speakers and then classify each rule as either a pronunciation variant or an acoustic variant. The state-tying level hybrid method then adapts pronunciation models and acoustic models by accommodating the pronunciation variants in the pronunciation dictionary and by clustering the states of triphone acoustic models using the acoustic variants, respectively. On the other hand, the triphone-modeling level hybrid method initially adapts pronunciation models in the same way as in the state-tying level hybrid method; however, for the acoustic model adaptation, the triphone acoustic models are then re-estimated based on the adapted pronunciation models, and the states of the re-estimated triphone acoustic models are clustered using the acoustic variants. Korean-spoken English speech recognition experiments show that ASR systems employing the state-tying and triphone-modeling level adaptation methods reduce the average word error rate (WER) for non-native speech by a relative 17.1% and 22.1%, respectively, when compared to a baseline ASR system.
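
    The pronunciation-dictionary side of this adaptation can be pictured with a small sketch. The lexicon entries, substitution rules, and helper name below are made up for illustration; the paper's actual variant rules are derived from non-native speech data.

        # Sketch: expanding a pronunciation dictionary with variant rules,
        # as in the pronunciation-model half of the hybrid adaptation.
        # The lexicon and rules below are invented examples.
        base_lexicon = {
            "rice":  ["r ay s"],
            "light": ["l ay t"],
        }

        # Each rule maps a canonical phone to a frequently observed non-native variant.
        variant_rules = {"r": "l", "th": "s"}

        def add_variants(lexicon, rules):
            adapted = {}
            for word, prons in lexicon.items():
                expanded = set(prons)
                for pron in prons:
                    phones = pron.split()
                    variant = [rules.get(p, p) for p in phones]
                    expanded.add(" ".join(variant))
                adapted[word] = sorted(expanded)
            return adapted

        print(add_variants(base_lexicon, variant_rules))
        # {'rice': ['l ay s', 'r ay s'], 'light': ['l ay t']}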

  20. Establishing the impact of temporary tissue expanders on electron and photon beam dose distributions.

    Science.gov (United States)

    Asena, A; Kairn, T; Crowe, S B; Trapp, J V

    2015-05-01

    This study investigates the effects of temporary tissue expanders (TTEs) on the dose distributions in breast cancer radiotherapy treatments under a variety of conditions. Using EBT2 radiochromic film, both electron and photon beam dose distribution measurements were made for different phantoms and beam geometries. This was done to establish a more comprehensive understanding of the implant's perturbation effects under a wider variety of conditions. The magnetic disk present in a tissue expander causes a dose reduction of approximately 20% in a photon tangent treatment and 56% in electron boost fields immediately downstream of the implant. The effects of the silicone elastomer are also much more apparent in an electron beam than in a photon beam. Evidently, each component of the TTE attenuates the radiation beam to a different degree. This study has demonstrated that the accuracy of photon and electron treatments of post-mastectomy patients is influenced by the presence of a tissue expander for various beam orientations. The impact of TTEs on dose distributions establishes the importance of an accurately modelled high-density implant in the treatment planning system for post-mastectomy patients. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
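
    The quoted dose reductions translate directly into approximate transmission factors immediately downstream of the magnetic disk (a simple illustrative calculation, not part of the study itself):

        # Convert the reported dose reductions behind the magnetic disk
        # into approximate transmission factors (illustrative arithmetic only).
        reductions = {"photon tangent field": 0.20, "electron boost field": 0.56}
        for beam, reduction in reductions.items():
            transmission = 1.0 - reduction
            print(f"{beam}: ~{transmission:.0%} of the dose is transmitted downstream of the disk")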

  1. Partially overlapping sensorimotor networks underlie speech praxis and verbal short-term memory: evidence from apraxia of speech following acute stroke.

    Science.gov (United States)

    Hickok, Gregory; Rogalsky, Corianne; Chen, Rong; Herskovits, Edward H; Townsley, Sarah; Hillis, Argye E

    2014-01-01

    We tested the hypothesis that motor planning and programming of speech articulation and verbal short-term memory (vSTM) depend on partially overlapping networks of neural regions. We evaluated this proposal by testing 76 individuals with acute ischemic stroke for impairment in motor planning of speech articulation (apraxia of speech, AOS) and vSTM in the first day of stroke, before the opportunity for recovery or reorganization of structure-function relationships. We also evaluated areas of both infarct and low blood flow that might have contributed to AOS or impaired vSTM in each person. We found that AOS was associated with tissue dysfunction in motor-related areas (posterior primary motor cortex, pars opercularis; premotor cortex, insula) and sensory-related areas (primary somatosensory cortex, secondary somatosensory cortex, parietal operculum/auditory cortex); while impaired vSTM was associated with primarily motor-related areas (pars opercularis and pars triangularis, premotor cortex, and primary motor cortex). These results are consistent with the hypothesis, also supported by functional imaging data, that both speech praxis and vSTM rely on partially overlapping networks of brain regions.

  2. Cognitive Spare Capacity and Speech Communication: A Narrative Overview

    Directory of Open Access Journals (Sweden)

    Mary Rudner

    2014-01-01

    Full Text Available Background noise can make speech communication tiring and cognitively taxing, especially for individuals with hearing impairment. It is now well established that better working memory capacity is associated with better ability to understand speech under adverse conditions as well as better ability to benefit from the advanced signal processing in modern hearing aids. Recent work has shown that although such processing cannot overcome hearing handicap, it can increase cognitive spare capacity, that is, the ability to engage in higher level processing of speech. This paper surveys recent work on cognitive spare capacity and suggests new avenues of investigation.

  3. The Establishment of a National Tissue Bank for Inflammatory Bowel Disease Research in Canada

    Directory of Open Access Journals (Sweden)

    Stephen M Collins

    2003-01-01

    Full Text Available The Crohn’s and Colitis Foundation of Canada (CCFC) has established a national bank for tissue, serum and blood from patients with inflammatory bowel disease (IBD). Investigators from across the country submit material to the bank together with clinical data. Investigators may access their own patient information from the bank for their own study purposes, but the distribution of tissue is restricted to specific CCFC-funded projects. Currently, tissues are being collected from newly diagnosed, untreated IBD patients to support a recent initiative aimed at characterizing microbes in colonic and ileal biopsies from such patients. In the future, criteria for the submission of tissue will be tailored to specific research questions. This bank is believed to be the first national bank of its kind dedicated to research in Crohn’s disease and ulcerative colitis.

  4. Music and Speech Perception in Children Using Sung Speech.

    Science.gov (United States)

    Nie, Yingjiu; Galvin, John J; Morikawa, Michael; André, Victoria; Wheeler, Harley; Fu, Qian-Jie

    2018-01-01

    This study examined music and speech perception in normal-hearing children with some or no musical training. Thirty children (mean age = 11.3 years), 15 with and 15 without formal music training, participated in the study. Music perception was measured using a melodic contour identification (MCI) task; stimuli were a piano sample or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note). Speech perception was measured in quiet and in steady noise using a matrix-styled sentence recognition task; stimuli were naturally intonated speech or sung speech with a fixed pitch (same note for each word) or a mixed pitch (different notes for each word). Significant musician advantages were observed for MCI and speech in noise but not for speech in quiet. MCI performance was significantly poorer with the mixed timbre stimuli. Speech performance in noise was significantly poorer with the fixed or mixed pitch stimuli than with spoken speech. Across all subjects, age at testing and MCI performance were significantly correlated with speech performance in noise. MCI and speech performance in quiet were significantly poorer for children than for adults from a related study using the same stimuli and tasks; speech performance in noise was significantly poorer for young than for older children. Long-term music training appeared to benefit melodic pitch perception and speech understanding in noise in these pediatric listeners.

  5. [Modeling developmental aspects of sensorimotor control of speech production].

    Science.gov (United States)

    Kröger, B J; Birkholz, P; Neuschaefer-Rube, C

    2007-05-01

    Detailed knowledge of the neurophysiology of speech acquisition is important for understanding the developmental aspects of speech perception and production and for understanding developmental disorders of speech perception and production. A computer-implemented neural model of sensorimotor control of speech production was developed. The model is capable of demonstrating in detail the neural functions of different cortical areas during speech production. (i) Two sensory and two motor maps or neural representations and the appertaining neural mappings or projections establish the sensorimotor feedback control system. These maps and mappings are already formed and trained during the prelinguistic phase of speech acquisition. (ii) The feedforward sensorimotor control system comprises the lexical map (representations of sounds, syllables, and words of the first language) and the mappings from the lexical to the sensory and motor maps. The training of the appertaining mappings forms the linguistic phase of speech acquisition. (iii) Three prelinguistic learning phases - i.e. silent mouthing, quasi-stationary vocalic articulation, and realisation of articulatory protogestures - can be defined on the basis of our simulation studies using the computational neural model. These learning phases can be associated with temporal phases of prelinguistic speech acquisition obtained from natural data. The neural model illuminates the detailed function of specific cortical areas during speech production. In particular, it can be shown that developmental disorders of speech production may result from a delayed or incorrect process within one of the prelinguistic learning phases defined by the neural model.
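
    As a toy illustration of the kind of forward sensorimotor mapping such a model learns from prelinguistic "babbling" (a deliberately simplified sketch: the vocal-tract function is invented and a linear least-squares fit stands in for the model's neural maps and projections):

        # Toy sketch: learning a forward mapping from motor commands to auditory
        # features using random babbling data. Not the actual neural model.
        import numpy as np

        rng = np.random.default_rng(0)

        def vocal_tract(motor):
            """Hypothetical articulatory-to-acoustic forward function."""
            return np.tanh(motor @ np.array([[0.8, -0.3], [0.2, 0.9], [-0.5, 0.4]]))

        # "Babbling": random motor commands and the auditory result they produce.
        motor_cmds = rng.uniform(-1, 1, size=(500, 3))
        auditory = vocal_tract(motor_cmds)

        # Learn the forward mapping motor -> auditory from the babbled pairs.
        W, *_ = np.linalg.lstsq(motor_cmds, np.arctanh(auditory), rcond=None)
        predicted = np.tanh(motor_cmds @ W)
        print("mean prediction error:", np.abs(predicted - auditory).mean())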

  6. A Study of the Tendencies of "Speech Errors" (IV) [「言い誤り」(speech errors)の傾向に関する考察(IV)]

    OpenAIRE

    伊藤, 克敏; Ito, Katsutoshi

    2007-01-01

    This is the fourth in a series (1988, 1992, 1999) of my research on the tendencies of speech errors committed by adults. Collected speech errors were analyzed on phonological, morphological, syntactic and semantic levels. Similarities and differences between adult and child speech errors were discussed. It was pointed out that the typology of speech errors can be established by comparative study of adult speech errors, developing child language, aphasic speech and speech of senile dementia.

  7. Apraxia of Speech

    Science.gov (United States)

    What is apraxia of speech? Apraxia of speech (AOS)—also known as acquired ...

  8. Re-evaluation of a novel approach for quantitative myocardial oedema detection by analysing tissue inhomogeneity in acute myocarditis using T2-mapping

    Energy Technology Data Exchange (ETDEWEB)

    Baessler, Bettina; Treutlein, Melanie; Maintz, David; Bunck, Alexander C. [University Hospital of Cologne, Department of Radiology, Cologne (Germany); Schaarschmidt, Frank [Leibniz Universitaet Hannover, Institute of Biostatistics, Faculty of Natural Sciences, Hannover (Germany); Stehning, Christian [Philips Research, Hamburg (Germany); Schnackenburg, Bernhard [Philips, Healthcare Germany, Hamburg (Germany); Michels, Guido [University Hospital of Cologne, Department III of Internal Medicine, Heart Centre, Cologne (Germany)

    2017-12-15

    To re-evaluate a recently suggested approach of quantifying myocardial oedema and increased tissue inhomogeneity in myocarditis by T2-mapping. Cardiac magnetic resonance data of 99 patients with myocarditis were retrospectively analysed. Thirty healthy volunteers served as controls. T2-mapping data were acquired at 1.5 T using a gradient-spin-echo T2-mapping sequence. T2-maps were segmented according to the 16-segment AHA model. Segmental T2-values, segmental pixel standard deviation (SD) and the derived parameters maxT2, maxSD and madSD were analysed and compared to the established Lake Louise criteria (LLC). A re-estimation of logistic regression models revealed that all models containing an SD parameter were superior to any model containing global myocardial T2. Using a combined cut-off of 1.8 ms for madSD + 68 ms for maxT2 resulted in a diagnostic sensitivity of 75% and specificity of 80% and showed a similar diagnostic performance compared to LLC in receiver-operating-characteristic analyses. Combining madSD, maxT2 and late gadolinium enhancement (LGE) in a model resulted in a superior diagnostic performance compared to LLC (sensitivity 93%, specificity 83%). The results show that the novel T2-mapping-derived parameters exhibit an additional diagnostic value over LGE, with the inherent potential to overcome the current limitations of T2-mapping. (orig.)
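
    The combined cut-off reported above can be written as a simple decision rule. This is an illustrative sketch using the published thresholds; the abstract does not state how the two cut-offs are combined, so requiring both to be exceeded is an assumption, and the variable names are invented.

        # Sketch of the combined T2-mapping decision rule quoted above.
        # Assumption: both thresholds must be exceeded for a positive result.
        MADSD_CUTOFF_MS = 1.8
        MAXT2_CUTOFF_MS = 68.0

        def t2_mapping_positive(madsd_ms, maxt2_ms):
            return madsd_ms > MADSD_CUTOFF_MS and maxt2_ms > MAXT2_CUTOFF_MS

        print(t2_mapping_positive(madsd_ms=2.1, maxt2_ms=71.0))  # True
        print(t2_mapping_positive(madsd_ms=1.5, maxt2_ms=72.0))  # False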

  9. Re-evaluation of a novel approach for quantitative myocardial oedema detection by analysing tissue inhomogeneity in acute myocarditis using T2-mapping

    International Nuclear Information System (INIS)

    Baessler, Bettina; Treutlein, Melanie; Maintz, David; Bunck, Alexander C.; Schaarschmidt, Frank; Stehning, Christian; Schnackenburg, Bernhard; Michels, Guido

    2017-01-01

    To re-evaluate a recently suggested approach of quantifying myocardial oedema and increased tissue inhomogeneity in myocarditis by T2-mapping. Cardiac magnetic resonance data of 99 patients with myocarditis were retrospectively analysed. Thirty healthy volunteers served as controls. T2-mapping data were acquired at 1.5 T using a gradient-spin-echo T2-mapping sequence. T2-maps were segmented according to the 16-segment AHA model. Segmental T2-values, segmental pixel standard deviation (SD) and the derived parameters maxT2, maxSD and madSD were analysed and compared to the established Lake Louise criteria (LLC). A re-estimation of logistic regression models revealed that all models containing an SD parameter were superior to any model containing global myocardial T2. Using a combined cut-off of 1.8 ms for madSD + 68 ms for maxT2 resulted in a diagnostic sensitivity of 75% and specificity of 80% and showed a similar diagnostic performance compared to LLC in receiver-operating-characteristic analyses. Combining madSD, maxT2 and late gadolinium enhancement (LGE) in a model resulted in a superior diagnostic performance compared to LLC (sensitivity 93%, specificity 83%). The results show that the novel T2-mapping-derived parameters exhibit an additional diagnostic value over LGE, with the inherent potential to overcome the current limitations of T2-mapping. (orig.)

  10. Segmentation cues in conversational speech: Robust semantics and fragile phonotactics

    Directory of Open Access Journals (Sweden)

    Laurence eWhite

    2012-10-01

    Full Text Available Multiple cues influence listeners’ segmentation of connected speech into words, but most previous studies have used stimuli elicited in careful readings rather than natural conversation. Discerning word boundaries in conversational speech may differ from the laboratory setting. In particular, a speaker’s articulatory effort – hyperarticulation vs hypoarticulation (H&H) – may vary according to communicative demands, suggesting a compensatory relationship whereby acoustic-phonetic cues are attenuated when other information sources strongly guide segmentation. We examined how listeners’ interpretation of segmentation cues is affected by speech style (spontaneous conversation vs read), using cross-modal identity priming. To elicit spontaneous stimuli, we used a map task in which speakers discussed routes around stylised landmarks. These landmarks were two-word phrases in which the strength of potential segmentation cues – semantic likelihood and cross-boundary diphone phonotactics – was systematically varied. Landmark-carrying utterances were transcribed and later re-recorded as read speech. Independent of speech style, we found an interaction between cue valence (favourable/unfavourable) and cue type (phonotactics/semantics). Thus, there was an effect of semantic plausibility, but no effect of cross-boundary phonotactics, indicating that the importance of phonotactic segmentation may have been overstated in studies where lexical information was artificially suppressed. These patterns were unaffected by whether the stimuli were elicited in a spontaneous or read context, even though the difference in speech styles was evident in a main effect. Durational analyses suggested speaker-driven cue trade-offs congruent with an H&H account, but these modulations did not impact on listener behaviour. We conclude that previous research exploiting read speech is reliable in indicating the primacy of lexically-based cues in the segmentation of natural

  11. Common neural substrates support speech and non-speech vocal tract gestures.

    Science.gov (United States)

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M J; Poletto, Christopher J; Ludlow, Christy L

    2009-08-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as non-sense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, was compared to the production of speech syllables without meaning. Brain activation related to overt production was captured with BOLD fMRI using a sparse sampling design for both conditions. Speech and non-speech were compared using voxel-wise whole brain analyses, and ROI analyses focused on frontal and temporoparietal structures previously reported to support speech production. Results showed substantial overlap in activation between the speech and non-speech conditions in these regions. Although non-speech gesture production showed greater extent and amplitude of activation in the regions examined, both speech and non-speech showed comparable left laterality in activation for both target perception and production. These findings posit a more general role of the previously proposed "auditory dorsal stream" in the left hemisphere--to support the production of vocal tract gestures that are not limited to speech processing.

  12. Establishment of a Pcr Technique for Determination of Htlv-1 Infection in Paraffin-Embedded Tissues

    Directory of Open Access Journals (Sweden)

    M Rastin

    2007-04-01

    Full Text Available Introduction: HTLV-1, the first known human retrovirus, belongs to the oncovirus subfamily of retroviruses. The major characteristic of HTLV-1 is its highly restricted geographic prevalence. The northern part of Khorasan is an endemic region of HTLV-1 infection. Epidemiological studies can help in designing preventive programs for HTLV-1 infection. The aim of this study was the establishment of a PCR technique for determination of HTLV-1 infection in paraffin-embedded tissues. Methods: In this experimental laboratory study, PCR was initially optimized using beta-actin primers on various formalin-fixed, paraffin-embedded tissues from liver, spleen, skin and lymph nodes. The optimized concentration of MgCl2 was 2 mM and of primer 8 pmol; the optimized amount of DNA differed according to the kind of tissue. HTLV-1 infection was determined by applying tax, pol, env and LTR primers to 50 paraffin-embedded lymph node tissues. The reproducibility of this technique was shown for skin and lymph node tissues infected with HTLV-1. Results: Among the 50 lymph node tissues, one case with a pathologic diagnosis of NHL was positive with all five sets of primers (two tax, pol, env and LTR), and another case was positive with only the two sets of tax primers but negative with the pol, env and LTR primers. The prevalence of infection was 2% among lymph node specimens (1 of 50 specimens); if the second case is considered positive, the prevalence would be 4%. Conclusion: The difference between the results of this study and those of another study on blood specimens (seroprevalence 2.3%) was not statistically significant (P=0.883), so the two studies confirm one another.

  13. Speech Compression

    Directory of Open Access Journals (Sweden)

    Jerry D. Gibson

    2016-06-01

    Full Text Available Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.
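
    As a pocket-sized illustration of the linear prediction model that underlies these coders (a sketch only; real codecs such as CELP add excitation modelling, quantisation and much more, and the audio file name below is a placeholder), one can fit LPC coefficients to a frame of speech and measure the prediction gain:

        # Sketch: short-term linear prediction of one speech frame.
        # "speech.wav" is a placeholder file; librosa.lpc fits the LPC coefficients.
        import numpy as np
        import librosa
        from scipy.signal import lfilter

        y, sr = librosa.load("speech.wav", sr=8000)      # placeholder file name
        frame = y[1000:1240]                             # one 30 ms frame at 8 kHz

        a = librosa.lpc(frame, order=10)                 # a[0] == 1.0 by convention

        # Filtering with A(z) = 1 + a1*z^-1 + ... yields the prediction residual.
        residual = lfilter(a, [1.0], frame)
        gain_db = 10 * np.log10(np.sum(frame**2) / np.sum(residual**2))
        print(f"prediction gain: {gain_db:.1f} dB")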

  14. Thymidine Kinase-Negative Herpes Simplex Virus 1 Can Efficiently Establish Persistent Infection in Neural Tissues of Nude Mice.

    Science.gov (United States)

    Huang, Chih-Yu; Yao, Hui-Wen; Wang, Li-Chiu; Shen, Fang-Hsiu; Hsu, Sheng-Min; Chen, Shun-Hua

    2017-02-15

    Herpes simplex virus 1 (HSV-1) establishes latency in neural tissues of immunocompetent mice but persists in both peripheral and neural tissues of lymphocyte-deficient mice. Thymidine kinase (TK) is believed to be essential for HSV-1 to persist in neural tissues of immunocompromised mice, because infectious virus of a mutant with defects in both TK and UL24 is detected only in peripheral tissues, but not in neural tissues, of severe combined immunodeficiency mice (T. Valyi-Nagy, R. M. Gesser, B. Raengsakulrach, S. L. Deshmane, B. P. Randazzo, A. J. Dillner, and N. W. Fraser, Virology 199:484-490, 1994, https://doi.org/10.1006/viro.1994.1150). Here we find infiltration of CD4 and CD8 T cells in peripheral and neural tissues of mice infected with a TK-negative mutant. We therefore investigated the significance of viral TK and host T cells for HSV-1 to persist in neural tissues using three genetically engineered mutants with defects in only TK or in both TK and UL24 and two strains of nude mice. Surprisingly, all three mutants establish persistent infection in up to 100% of brain stems and 93% of trigeminal ganglia of adult nude mice at 28 days postinfection, as measured by the recovery of infectious virus. Thus, in mouse neural tissues, host T cells block persistent HSV-1 infection, and viral TK is dispensable for the virus to establish persistent infection. Furthermore, we found 30- to 200-fold more virus in neural tissues than in the eye and detected glycoprotein C, a true late viral antigen, in brainstem neurons of nude mice persistently infected with the TK-negative mutant, suggesting that adult mouse neurons can support the replication of TK-negative HSV-1. Acyclovir is used to treat herpes simplex virus 1 (HSV-1)-infected immunocompromised patients, but treatment is hindered by the emergence of drug-resistant viruses, mostly those with mutations in viral thymidine kinase (TK), which activates acyclovir. TK mutants are detected in brains of immunocompromised

  15. Acceptable noise level with Danish, Swedish, and non-semantic speech materials

    DEFF Research Database (Denmark)

    Brännström, K Jonas; Lantz, Johannes; Nielsen, Lars Holme

    2012-01-01

    Objective: Acceptable noise level (ANL) has been established as a method to quantify the acceptance of background noise while listening to speech presented at the most comfortable level. The aim of the present study was to generate Danish, Swedish, and a non-semantic version of the ANL test and investigate normal-hearing Danish and Swedish subjects' performance on these tests. Design: ANL was measured using Danish and Swedish running speech with two different noises: speech-weighted amplitude-modulated noise and multitalker speech babble. ANL was also measured using the non-semantic international speech test signal (ISTS). ... reported results from American studies. Generally, significant differences were seen between test conditions using different types of noise within ears in each population. Significant differences were seen for ANL across populations, also when the non-semantic ISTS was used as the speech signal. Conclusions...

  16. Surgical improvement of speech disorder caused by amyotrophic lateral sclerosis.

    Science.gov (United States)

    Saigusa, Hideto; Yamaguchi, Satoshi; Nakamura, Tsuyoshi; Komachi, Taro; Kadosono, Osamu; Ito, Hiroyuki; Saigusa, Makoto; Niimi, Seiji

    2012-12-01

    Amyotrophic lateral sclerosis (ALS) is a progressive, debilitating neurological disease. ALS disturbs the quality of life by affecting speech, swallowing and free mobility of the arms, without affecting intellectual function. It is therefore of significance to improve the intelligibility and quality of speech sounds, especially for ALS patients with slowly progressive courses. Currently, however, there is no effective or established approach to improve the speech disorder caused by ALS. We investigated a surgical procedure to improve the speech disorder in some patients with neuromuscular diseases with velopharyngeal closure incompetence. In this study, we performed the surgical procedure in two patients suffering from severe speech disorder caused by slowly progressing ALS. The patients suffered from speech disorder with hypernasality and imprecise and weak articulation during a 6-year course (patient 1) and a 3-year course (patient 2) of slowly progressing ALS. We narrowed the bilateral lateral palatopharyngeal walls at the velopharyngeal port, and performed this surgery under general anesthesia without muscle relaxant in the two patients. Postoperatively, the intelligibility and quality of their speech sounds were greatly improved within one month without any speech therapy. The patients were also able to generate longer speech phrases after the surgery. Importantly, there was no serious complication during or after the surgery. In summary, we performed bilateral narrowing of the lateral palatopharyngeal walls as speech surgery for two patients suffering from severe speech disorder associated with ALS. With this technique, improved intelligibility and quality of speech can be maintained for a longer duration in patients with slowly progressing ALS.

  17. Partially Overlapping Sensorimotor Networks Underlie Speech Praxis and Verbal Short-Term Memory: Evidence from Apraxia of Speech Following Acute Stroke

    Directory of Open Access Journals (Sweden)

    Gregory eHickok

    2014-08-01

    Full Text Available We tested the hypothesis that motor planning and programming of speech articulation and verbal short-term memory (vSTM) depend on partially overlapping networks of neural regions. We evaluated this proposal by testing 76 individuals with acute ischemic stroke for impairment in motor planning of speech articulation (apraxia of speech; AOS) and vSTM in the first day of stroke, before the opportunity for recovery or reorganization of structure-function relationships. We also evaluated areas of both infarct and low blood flow that might have contributed to AOS or impaired vSTM in each person. We found that AOS was associated with tissue dysfunction in motor-related areas (posterior primary motor cortex, pars opercularis; premotor cortex, insula) and sensory-related areas (primary somatosensory cortex, secondary somatosensory cortex, parietal operculum/auditory cortex); while impaired vSTM was associated with primarily motor-related areas (pars opercularis and pars triangularis, premotor cortex, and primary motor cortex). These results are consistent with the hypothesis, also supported by functional imaging data, that both speech praxis and vSTM rely on partially overlapping networks of brain regions.

  18. Speech Production and Speech Discrimination by Hearing-Impaired Children.

    Science.gov (United States)

    Novelli-Olmstead, Tina; Ling, Daniel

    1984-01-01

    Seven hearing impaired children (five to seven years old) assigned to the Speakers group made highly significant gains in speech production and auditory discrimination of speech, while Listeners made only slight speech production gains and no gains in auditory discrimination. Combined speech and auditory training was more effective than auditory…

  19. Improving the speech intelligibility in classrooms

    Science.gov (United States)

    Lam, Choi Ling Coriolanus

    One of the major acoustical concerns in classrooms is the establishment of effective verbal communication between teachers and students. Non-optimal acoustical conditions, resulting in reduced verbal communication, can cause two main problems. First, they can reduce learning efficiency. Second, they can cause fatigue, stress, vocal strain and health problems, such as headaches and sore throats, among teachers who are forced to compensate for poor acoustical conditions by raising their voices. In addition, inadequate acoustical conditions can encourage the use of public address systems; improper use of such amplifiers or loudspeakers can impair students' hearing. The social costs of poor classroom acoustics are therefore large, as they impair children's learning. This invisible problem has far-reaching implications for learning, but is easily solved. Much research has been carried out that accurately and concisely summarizes the findings on classroom acoustics, yet a number of challenging questions remain unanswered. Most objective indices for speech intelligibility are essentially based on studies of Western languages; although several studies of tonal languages such as Mandarin have been conducted, there is much less work on Cantonese. In this research, measurements were made in unoccupied rooms to investigate the acoustical parameters and characteristics of the classrooms. Speech intelligibility tests based on English, Mandarin and Cantonese, together with a survey, were carried out on students aged from 5 to 22 years. The aim is to investigate the differences in intelligibility between English, Mandarin and Cantonese in Hong Kong classrooms. The relationship between the speech transmission index (STI) and phonetically balanced (PB) word scores is further developed, together with an empirical relationship between the speech intelligibility in classrooms and the variations

  20. Application of artifical intelligence principles to the analysis of "crazy" speech.

    Science.gov (United States)

    Garfield, D A; Rapp, C

    1994-04-01

    Artificial intelligence computer simulation methods can be used to investigate psychotic or "crazy" speech. Here, symbolic reasoning algorithms establish semantic networks that schematize speech. These semantic networks consist of two main structures: case frames and object taxonomies. Node-based reasoning rules apply to object taxonomies and pathway-based reasoning rules apply to case frames. Normal listeners may recognize speech as "crazy talk" based on violations of node- and pathway-based reasoning rules. In this article, three separate segments of schizophrenic speech illustrate violations of these rules. This artificial intelligence approach is compared and contrasted with other neurolinguistic approaches and is discussed as a conceptual link between neurobiological and psychodynamic understandings of psychopathology.
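
    A minimal sketch of the kind of structures described above (the taxonomy, case frame, and rule below are invented examples, not the authors' implementation): an object taxonomy as a parent map, a case frame as a dictionary, and a node-based rule that flags a semantic violation of the sort a listener might hear as "crazy talk".

        # Sketch: object taxonomy + case frame with a node-based reasoning check.
        # The taxonomy, frame, and rule are invented for illustration.
        taxonomy = {
            "dog": "animal", "cat": "animal", "animal": "entity",
            "rock": "object", "object": "entity",
        }

        def is_a(node, ancestor):
            """Walk up the taxonomy to test class membership."""
            while node is not None:
                if node == ancestor:
                    return True
                node = taxonomy.get(node)
            return False

        # Case frame for the verb "eat": the AGENT slot should be filled by an animal.
        case_frame = {"verb": "eat", "agent": "rock", "object": "dog"}

        def node_rule_violation(frame):
            return not is_a(frame["agent"], "animal")

        print(node_rule_violation(case_frame))   # True: "the rock eats the dog" violates the rule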

  1. Effect of dietary restriction and subsequent re-alimentation on the transcriptional profile of hepatic tissue in cattle.

    Science.gov (United States)

    Keogh, Kate; Kenny, David A; Cormican, Paul; Kelly, Alan K; Waters, Sinead M

    2016-03-17

    Compensatory growth (CG) is an accelerated growth phenomenon observed in animals upon re-alimentation following a period of dietary restriction. It is typically utilised in livestock systems to reduce feed costs during periods of reduced feed availability. The biochemical mechanisms controlling this phenomenon, however, are yet to be elucidated. This study aimed to uncover the molecular mechanisms regulating the hepatic expression of CG in cattle, utilising RNAseq. RNAseq was performed on hepatic tissue of bulls following 125 days of dietary restriction (RES) and again following 55 days of subsequent re-alimentation during which the animals exhibited significant CG. The data were compared with those of control animals offered the same diet on an ad libitum basis throughout (ADLIB). Elucidation of the molecular control of CG may yield critical information on genes and pathways which could be targeted as putative molecular biomarkers for the selection of animals with improved CG potential. Following a period of differential feeding, body-weight and liver weight were 161 and 4 kg higher, respectively, for ADLIB compared with RES animals. At this time RNAseq analysis of liver tissue revealed 1352 significantly differentially expressed genes (DEG) between the two treatments. DEGs indicated down-regulation of processes including nutrient transport, cell division and proliferation in RES. In addition, protein synthesis genes were up-regulated in RES following a period of restricted feeding. The subsequent 55 days of ad libitum feeding for both groups resulted in the body-weight difference being reduced to 84 kg, with no difference in liver weight between treatment groups. At the end of 55 days of unrestricted feeding, 49 genes were differentially expressed between animals undergoing CG and their continuously fed counterparts. In particular, hepatic expression of cell proliferation and growth genes was greater in animals undergoing CG. Greater expression of cell cycle and cell

  2. Zinc allocation and re-allocation in rice

    Science.gov (United States)

    Stomph, Tjeerd Jan; Jiang, Wen; Van Der Putten, Peter E. L.; Struik, Paul C.

    2014-01-01

    Aims: Agronomy and breeding actively search for options to enhance cereal grain Zn density. Quantifying internal (re-)allocation of Zn as affected by soil and crop management or genotype is crucial. We present experiments supporting the development of a conceptual model of whole plant Zn allocation and re-allocation in rice. Methods: Two solution culture experiments using 70Zn applications at different times during crop development and an experiment on within-grain distribution of Zn are reported. In addition, results from two earlier published experiments are re-analyzed and re-interpreted. Results: A budget analysis showed that plant zinc accumulation during grain filling was larger than zinc allocation to the grains. Isotope data showed that zinc taken up during grain filling was only partly transported directly to the grains and partly allocated to the leaves. Zinc taken up during grain filling and allocated to the leaves replaced zinc re-allocated from leaves to grains. Within the grains, no major transport barrier was observed between vascular tissue and endosperm. At low tissue Zn concentrations, rice plants maintained concentrations of about 20 mg Zn kg−1 dry matter in leaf blades and reproductive tissues, but let Zn concentrations in stems, sheath, and roots drop below this level. When plant zinc concentrations increased, Zn levels in leaf blades and reproductive tissues only showed a moderate increase while Zn levels in stems, roots, and sheaths increased much more and in that order. Conclusions: In rice, the major barrier to enhanced zinc allocation towards grains is between stem and reproductive tissues. Enhancing root to shoot transfer will not contribute proportionally to grain zinc enhancement. PMID:24478788

  3. Stuttering Frequency, Speech Rate, Speech Naturalness, and Speech Effort During the Production of Voluntary Stuttering.

    Science.gov (United States)

    Davidow, Jason H; Grossman, Heather L; Edge, Robin L

    2018-05-01

    Voluntary stuttering techniques involve persons who stutter purposefully interjecting disfluencies into their speech. Little research has been conducted on the impact of these techniques on the speech pattern of persons who stutter. The present study examined whether changes in the frequency of voluntary stuttering accompanied changes in stuttering frequency, articulation rate, speech naturalness, and speech effort. In total, 12 persons who stutter aged 16-34 years participated. Participants read four 300-syllable passages during a control condition, and three voluntary stuttering conditions that involved attempting to produce purposeful, tension-free repetitions of initial sounds or syllables of a word for two or more repetitions (i.e., bouncing). The three voluntary stuttering conditions included bouncing on 5%, 10%, and 15% of syllables read. Friedman tests and follow-up Wilcoxon signed ranks tests were conducted for the statistical analyses. Stuttering frequency, articulation rate, and speech naturalness were significantly different between the voluntary stuttering conditions. Speech effort did not differ between the voluntary stuttering conditions. Stuttering frequency was significantly lower during the three voluntary stuttering conditions compared to the control condition, and speech effort was significantly lower during two of the three voluntary stuttering conditions compared to the control condition. Due to changes in articulation rate across the voluntary stuttering conditions, it is difficult to conclude, as has been suggested previously, that voluntary stuttering is the reason for stuttering reductions found when using voluntary stuttering techniques. Additionally, future investigations should examine different types of voluntary stuttering over an extended period of time to determine their impact on stuttering frequency, speech rate, speech naturalness, and speech effort.
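
    The statistical workflow named above (a Friedman test with Wilcoxon signed-ranks follow-ups) can be sketched in a few lines; the per-participant stuttering frequencies below are invented, and only the test calls mirror the analysis described.

        # Sketch of the Friedman test with Wilcoxon follow-ups described above.
        # The per-participant stuttering frequencies are invented.
        import numpy as np
        from scipy.stats import friedmanchisquare, wilcoxon

        # Percent syllables stuttered for 12 participants in four conditions.
        rng = np.random.default_rng(1)
        control = rng.uniform(8, 18, 12)
        bounce_5 = control - rng.uniform(1, 4, 12)
        bounce_10 = control - rng.uniform(1, 5, 12)
        bounce_15 = control - rng.uniform(1, 6, 12)

        stat, p = friedmanchisquare(control, bounce_5, bounce_10, bounce_15)
        print(f"Friedman test: chi2 = {stat:.2f}, p = {p:.4f}")

        # Follow-up pairwise comparison, e.g. control vs. 5% voluntary stuttering.
        w, p_w = wilcoxon(control, bounce_5)
        print(f"Wilcoxon control vs 5%: W = {w:.1f}, p = {p_w:.4f}")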

  4. Comprehension of synthetic speech and digitized natural speech by adults with aphasia.

    Science.gov (United States)

    Hux, Karen; Knollman-Porter, Kelly; Brown, Jessica; Wallace, Sarah E

    2017-09-01

    Using text-to-speech technology to provide simultaneous written and auditory content presentation may help compensate for chronic reading challenges if people with aphasia can understand synthetic speech output; however, inherent auditory comprehension challenges experienced by people with aphasia may make understanding synthetic speech difficult. This study's purpose was to compare the preferences and auditory comprehension accuracy of people with aphasia when listening to sentences generated with digitized natural speech, Alex synthetic speech (i.e., Macintosh platform), or David synthetic speech (i.e., Windows platform). The methodology required each of 20 participants with aphasia to select one of four images corresponding in meaning to each of 60 sentences comprising three stimulus sets. Results revealed significantly better accuracy given digitized natural speech than either synthetic speech option; however, individual participant performance analyses revealed three patterns: (a) comparable accuracy regardless of speech condition for 30% of participants, (b) comparable accuracy between digitized natural speech and one, but not both, synthetic speech option for 45% of participants, and (c) greater accuracy with digitized natural speech than with either synthetic speech option for remaining participants. Ranking and Likert-scale rating data revealed a preference for digitized natural speech and David synthetic speech over Alex synthetic speech. Results suggest many individuals with aphasia can comprehend synthetic speech options available on popular operating systems. Further examination of synthetic speech use to support reading comprehension through text-to-speech technology is thus warranted. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Common neural substrates support speech and non-speech vocal tract gestures

    OpenAIRE

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M.J.; Poletto, Christopher J.; Ludlow, Christy L.

    2009-01-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal-tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as non-sense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, were compared to the production of speech sylla...

  6. Thermolysis of salts of [ReCl6]2- and [ReBr6]2- anions

    International Nuclear Information System (INIS)

    Gubanov, A.I.; Korenev, S.V.; Gromilov, S.A.; Shubin, Yu.V.

    2003-01-01

    Thermal decomposition of the [Pd(NH3)4][ReG6], [Pt(NH3)4][ReG6], and (NH4)2[ReG6] complexes, where G = Cl, Br, was studied in an inert atmosphere. Certain regularities of the thermolysis were established. The final products of the thermolysis of the binary complexes in the inert atmosphere were shown to be two-phase systems containing two solid solutions: one based on the platinum (palladium) fcc lattice, the other based on the rhenium hcp lattice. Single-phase palladium-rhenium solid solutions were shown to be obtained upon reduction of the studied complexes in a hydrogen atmosphere [ru

  7. Introductory speeches

    International Nuclear Information System (INIS)

    2001-01-01

    This CD is a multimedia presentation of the programme for safety upgrading of the Bohunice V1 NPP. This chapter consists of an introductory commentary and four introductory speeches (video records): (1) Introductory speech of Vincent Pillar, Board chairman and director general of Slovak Electric, Plc. (SE); (2) Introductory speech of Stefan Schmidt, director of SE - Bohunice Nuclear Power Plants; (3) Introductory speech of Jan Korec, Board chairman and director general of VUJE Trnava, Inc. - Engineering, Design and Research Organisation, Trnava; (4) Introductory speech of Dietrich Kuschel, Senior vice-president of FRAMATOME ANP Project and Engineering

  8. Predicting speech intelligibility in conditions with nonlinearly processed noisy speech

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2013-01-01

    The speech-based envelope power spectrum model (sEPSM; [1]) was proposed in order to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII). The sEPSM applies the signal-to-noise ratio in the envelope domain (SNRenv), which was demonstrated to successfully predict speech intelligibility in conditions with nonlinearly processed noisy speech, such as processing with spectral subtraction. Moreover, a multiresolution version (mr-sEPSM) was demonstrated to account for speech intelligibility in various conditions with stationary and fluctuating
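
    A highly simplified illustration of the envelope-domain SNR idea (not the actual sEPSM, which analyses modulation bands within auditory filters; the stand-in signals and single-band computation here are assumptions):

        # Toy sketch of an envelope-domain SNR: extract Hilbert envelopes of the
        # clean speech and the noise, then compare their fluctuation power.
        # This is a single-band simplification, not the full (mr-)sEPSM.
        import numpy as np
        from scipy.signal import hilbert

        fs = 16000
        t = np.arange(0, 1.0, 1 / fs)

        # Stand-ins for speech and noise: a 4 Hz modulated tone and white noise.
        speech = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 500 * t)
        noise = 0.5 * np.random.default_rng(0).standard_normal(len(t))

        def envelope_ac_power(x):
            env = np.abs(hilbert(x))
            return np.var(env)          # power of the envelope fluctuations

        snr_env_db = 10 * np.log10(envelope_ac_power(speech) / envelope_ac_power(noise))
        print(f"envelope-domain SNR: {snr_env_db:.1f} dB")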

  9. Exploring Australian speech-language pathologists' use and perceptions of non-speech oral motor exercises.

    Science.gov (United States)

    Rumbach, Anna F; Rose, Tanya A; Cheah, Mynn

    2018-01-29

    To explore Australian speech-language pathologists' use of non-speech oral motor exercises, and rationales for using/not using non-speech oral motor exercises in clinical practice. A total of 124 speech-language pathologists practising in Australia, working with paediatric and/or adult clients with speech sound difficulties, completed an online survey. The majority of speech-language pathologists reported that they did not use non-speech oral motor exercises when working with paediatric or adult clients with speech sound difficulties. However, more than half of the speech-language pathologists working with adult clients who have dysarthria reported using non-speech oral motor exercises with this population. The most frequently reported rationale for using non-speech oral motor exercises in speech sound difficulty management was to improve awareness/placement of articulators. The majority of speech-language pathologists agreed there is no clear clinical or research evidence base to support non-speech oral motor exercise use with clients who have speech sound difficulties. This study provides an overview of Australian speech-language pathologists' reported use and perceptions of non-speech oral motor exercises' applicability and efficacy in treating paediatric and adult clients who have speech sound difficulties. The research findings provide speech-language pathologists with insight into how and why non-speech oral motor exercises are currently used, and adds to the knowledge base regarding Australian speech-language pathology practice of non-speech oral motor exercises in the treatment of speech sound difficulties. Implications for Rehabilitation Non-speech oral motor exercises refer to oral motor activities which do not involve speech, but involve the manipulation or stimulation of oral structures including the lips, tongue, jaw, and soft palate. Non-speech oral motor exercises are intended to improve the function (e.g., movement, strength) of oral structures. The

  10. Re-Presenting Subversive Songs: Applying Strategies for Invention and Arrangement to Nontraditional Speech Texts

    Science.gov (United States)

    Charlesworth, Dacia

    2010-01-01

    Invention deals with the content of a speech, arrangement involves placing the content in an order that is most strategic, style focuses on selecting linguistic devices, such as metaphor, to make the message more appealing, memory assists the speaker in delivering the message correctly, and delivery ideally enables great reception of the message.…

  11. Re-interventions on the thoracic and thoracoabdominal aorta in patients with Marfan syndrome

    OpenAIRE

    Schoenhoff, Florian S.; Carrel, Thierry P.

    2017-01-01

    The advent of multi-gene panel genetic testing and the discovery of new syndromic and non-syndromic forms of connective tissue disorders have established thoracic aortic aneurysms as a genetically mediated disease. Surgical results in patients with Marfan syndrome (MFS) provide an important benchmark for this patient population. Prophylactic aortic root surgery prevents acute dissection and has contributed to the improved survival of MFS patients. In the majority of patients, re-interventions...

  12. [Improving speech comprehension using a new cochlear implant speech processor].

    Science.gov (United States)

    Müller-Deile, J; Kortmann, T; Hoppe, U; Hessel, H; Morsnowski, A

    2009-06-01

    The aim of this multicenter clinical field study was to assess the benefits of the new Freedom 24 sound processor for cochlear implant (CI) users implanted with the Nucleus 24 cochlear implant system. The study included 48 postlingually profoundly deaf experienced CI users who demonstrated speech comprehension performance with their current speech processor on the Oldenburg sentence test (OLSA) in quiet conditions of at least 80% correct scores and who were able to perform adaptive speech threshold testing using the OLSA in noisy conditions. Following baseline measures of speech comprehension performance with their current speech processor, subjects were upgraded to the Freedom 24 speech processor. After a take-home trial period of at least 2 weeks, subject performance was evaluated by measuring the speech reception threshold with the Freiburg multisyllabic word test and speech intelligibility with the Freiburg monosyllabic word test at 50 dB and 70 dB in the sound field. The results demonstrated highly significant benefits for speech comprehension with the new speech processor. Significant benefits for speech comprehension were also demonstrated with the new speech processor when tested in competing background noise.In contrast, use of the Abbreviated Profile of Hearing Aid Benefit (APHAB) did not prove to be a suitably sensitive assessment tool for comparative subjective self-assessment of hearing benefits with each processor. Use of the preprocessing algorithm known as adaptive dynamic range optimization (ADRO) in the Freedom 24 led to additional improvements over the standard upgrade map for speech comprehension in quiet and showed equivalent performance in noise. Through use of the preprocessing beam-forming algorithm BEAM, subjects demonstrated a highly significant improved signal-to-noise ratio for speech comprehension thresholds (i.e., signal-to-noise ratio for 50% speech comprehension scores) when tested with an adaptive procedure using the Oldenburg

  13. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings and, since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely on the basis of a binary decision. Hence the end-to-end performance of the digital link becomes essentially independent of the length and operating frequency bands of the link. From a transmission point of view, digital transmission has therefore been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the

  14. The analysis of speech acts patterns in two Egyptian inaugural speeches

    Directory of Open Access Journals (Sweden)

    Imad Hayif Sameer

    2017-09-01

    Full Text Available The theory of speech acts, which clarifies what people do when they speak, is not about individual words or sentences that form the basic elements of human communication, but rather about particular speech acts that are performed when uttering words. A speech act is the attempt at doing something purely by speaking. Many things can be done by speaking.  Speech acts are studied under what is called speech act theory, and belong to the domain of pragmatics. In this paper, two Egyptian inaugural speeches from El-Sadat and El-Sisi, belonging to different periods were analyzed to find out whether there were differences within this genre in the same culture or not. The study showed that there was a very small difference between these two speeches which were analyzed according to Searle’s theory of speech acts. In El Sadat’s speech, commissives came to occupy the first place. Meanwhile, in El–Sisi’s speech, assertives occupied the first place. Within the speeches of one culture, we can find that the differences depended on the circumstances that surrounded the elections of the Presidents at the time. Speech acts were tools they used to convey what they wanted and to obtain support from their audiences.

  15. Speech Problems

    Science.gov (United States)

    ... Staying Safe Videos for Educators Search English Español Speech Problems KidsHealth / For Teens / Speech Problems What's in ... a person's ability to speak clearly. Some Common Speech and Language Disorders Stuttering is a problem that ...

  16. REPORTED SPEECH IN FICTIONAL NARRATIVE TEXTS IN TERMS OF SPEECH ACTS THEORY SÖZ EDİMLERİ KURAMI AÇISINDAN KURGUSAL ANLATI METİNLERİNDE SÖZ AKTARIMI

    Directory of Open Access Journals (Sweden)

    Soner AKŞEHİRLİ

    2011-06-01

    Full Text Available Speech or discourse reporting (speech representation) is a linguistic phenomenon seen both in ordinary communication and in fictional narrative texts. In linguistics, speech reporting is differentiated into direct, indirect and free indirect speech. On the other hand, speech act theory, proposed by J. L. Austin, can provide a new perspective on speech reporting. According to the theory, to say something or to produce an utterance (a locutionary act) is to perform an act (an illocutionary act). Moreover, an act can also be performed through the effect of what is said (a perlocutionary act). In ordinary communication the reporter, and in fictional texts the narrator, may report one, two or all three of the locutionary, illocutionary and perlocutionary components of the reported utterance. These processes must also be taken into account in determining the point of view that governs narrative texts. On this basis, a new typology of speech reporting for fictional texts, grounded in speech act theory, can be developed.

  17. Neural overlap in processing music and speech

    Science.gov (United States)

    Peretz, Isabelle; Vuvan, Dominique; Lagrois, Marie-Élaine; Armony, Jorge L.

    2015-01-01

    Neural overlap in processing music and speech, as measured by the co-activation of brain regions in neuroimaging studies, may suggest that parts of the neural circuitries established for language may have been recycled during evolution for musicality, or vice versa that musicality served as a springboard for language emergence. Such a perspective has important implications for several topics of general interest besides evolutionary origins. For instance, neural overlap is an important premise for the possibility of music training to influence language acquisition and literacy. However, neural overlap in processing music and speech does not entail sharing neural circuitries. Neural separability between music and speech may occur in overlapping brain regions. In this paper, we review the evidence and outline the issues faced in interpreting such neural data, and argue that converging evidence from several methodologies is needed before neural overlap is taken as evidence of sharing. PMID:25646513

  18. Alternative Speech Communication System for Persons with Severe Speech Disorders

    Science.gov (United States)

    Selouani, Sid-Ahmed; Sidi Yakoub, Mohammed; O'Shaughnessy, Douglas

    2009-12-01

    Assistive speech-enabled systems are proposed to help both French and English speaking persons with various speech disorders. The proposed assistive systems use automatic speech recognition (ASR) and speech synthesis in order to enhance the quality of communication. These systems aim at improving the intelligibility of pathologic speech, making it as natural as possible and as close as possible to the original voice of the speaker. The resynthesized utterances use new basic units, a new concatenating algorithm and a grafting technique to correct the poorly pronounced phonemes. The ASR responses are uttered by the new speech synthesis system in order to convey an intelligible message to listeners. Experiments involving four American speakers with severe dysarthria and two Acadian French speakers with sound substitution disorders (SSDs) are carried out to demonstrate the efficiency of the proposed methods. An improvement in the Perceptual Evaluation of Speech Quality (PESQ) score of 5% and of more than 20% is achieved by the speech synthesis systems that deal with SSDs and dysarthria, respectively.

  19. English Speech Acquisition in 3- to 5-Year-Old Children Learning Russian and English

    Science.gov (United States)

    Gildersleeve-Neumann, Christina E.; Wright, Kira L.

    2010-01-01

    Purpose: English speech acquisition in Russian-English (RE) bilingual children was investigated, exploring the effects of Russian phonetic and phonological properties on English single-word productions. Russian has more complex consonants and clusters and a smaller vowel inventory than English. Method: One hundred thirty-seven single-word samples…

  20. A Danish open-set speech corpus for competing-speech studies

    DEFF Research Database (Denmark)

    Nielsen, Jens Bo; Dau, Torsten; Neher, Tobias

    2014-01-01

    Studies investigating speech-on-speech masking effects commonly use closed-set speech materials such as the coordinate response measure [Bolia et al. (2000). J. Acoust. Soc. Am. 107, 1065-1066]. However, these studies typically result in very low (i.e., negative) speech recognition thresholds (SRTs) when the competing speech signals are spatially separated. To achieve higher SRTs that correspond more closely to natural communication situations, an open-set, low-context, multi-talker speech corpus was developed. Three sets of 268 unique Danish sentences were created, and each set was recorded with one of three professional female talkers. The intelligibility of each sentence in the presence of speech-shaped noise was measured. For each talker, 200 approximately equally intelligible sentences were then selected and systematically distributed into 10 test lists. Test list homogeneity was assessed...
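
    As an illustration of the list-balancing step described above, the sketch below distributes sentences with measured intelligibility scores into ten equally sized lists using a simple serpentine assignment. The scores are simulated and the procedure is only a plausible stand-in, since the abstract does not specify how the systematic distribution was actually done.

      import numpy as np

      # Simulated per-sentence intelligibility scores (% correct) for 200 sentences.
      rng = np.random.default_rng(0)
      intelligibility = rng.normal(loc=80.0, scale=5.0, size=200)

      n_lists = 10
      order = np.argsort(intelligibility)            # easiest to hardest
      lists = [[] for _ in range(n_lists)]
      for block_start in range(0, len(order), n_lists):
          block = order[block_start:block_start + n_lists]
          if (block_start // n_lists) % 2 == 1:      # serpentine dealing balances difficulty
              block = block[::-1]
          for list_idx, sentence_idx in enumerate(block):
              lists[list_idx].append(int(sentence_idx))

      # Mean intelligibility per list should be nearly identical across the 10 lists.
      print([round(float(np.mean(intelligibility[l])), 2) for l in lists])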

  1. Speech entrainment enables patients with Broca’s aphasia to produce fluent speech

    Science.gov (United States)

    Hubbard, H. Isabel; Hudspeth, Sarah Grace; Holland, Audrey L.; Bonilha, Leonardo; Fromm, Davida; Rorden, Chris

    2012-01-01

    A distinguishing feature of Broca’s aphasia is non-fluent halting speech typically involving one to three words per utterance. Yet, despite such profound impairments, some patients can mimic audio-visual speech stimuli enabling them to produce fluent speech in real time. We call this effect ‘speech entrainment’ and reveal its neural mechanism as well as explore its usefulness as a treatment for speech production in Broca’s aphasia. In Experiment 1, 13 patients with Broca’s aphasia were tested in three conditions: (i) speech entrainment with audio-visual feedback where they attempted to mimic a speaker whose mouth was seen on an iPod screen; (ii) speech entrainment with audio-only feedback where patients mimicked heard speech; and (iii) spontaneous speech where patients spoke freely about assigned topics. The patients produced a greater variety of words using audio-visual feedback compared with audio-only feedback and spontaneous speech. No difference was found between audio-only feedback and spontaneous speech. In Experiment 2, 10 of the 13 patients included in Experiment 1 and 20 control subjects underwent functional magnetic resonance imaging to determine the neural mechanism that supports speech entrainment. Group results with patients and controls revealed greater bilateral cortical activation for speech produced during speech entrainment compared with spontaneous speech at the junction of the anterior insula and Brodmann area 47, in Brodmann area 37, and unilaterally in the left middle temporal gyrus and the dorsal portion of Broca’s area. Probabilistic white matter tracts constructed for these regions in the normal subjects revealed a structural network connected via the corpus callosum and ventral fibres through the extreme capsule. Unilateral areas were connected via the arcuate fasciculus. In Experiment 3, all patients included in Experiment 1 participated in a 6-week treatment phase using speech entrainment to improve speech production

  2. Child Speech, Language and Communication Need Re-Examined in a Public Health Context: A New Direction for the Speech and Language Therapy Profession

    Science.gov (United States)

    Law, James; Reilly, Sheena; Snow, Pamela C.

    2013-01-01

    Background: Historically speech and language therapy services for children have been framed within a rehabilitative framework with explicit assumptions made about providing therapy to individuals. While this is clearly important in many cases, we argue that this model needs revisiting for a number of reasons. First, our understanding of the nature…

  3. Multimodal Speech Capture System for Speech Rehabilitation and Learning.

    Science.gov (United States)

    Sebkhi, Nordine; Desai, Dhyey; Islam, Mohammad; Lu, Jun; Wilson, Kimberly; Ghovanloo, Maysam

    2017-11-01

    Speech-language pathologists (SLPs) are trained to correct articulation of people diagnosed with motor speech disorders by analyzing articulators' motion and assessing speech outcome while patients speak. To assist SLPs in this task, we are presenting the multimodal speech capture system (MSCS) that records and displays kinematics of key speech articulators, the tongue and lips, along with voice, using unobtrusive methods. Collected speech modalities, tongue motion, lips gestures, and voice are visualized not only in real-time to provide patients with instant feedback but also offline to allow SLPs to perform post-analysis of articulators' motion, particularly the tongue, with its prominent but hardly visible role in articulation. We describe the MSCS hardware and software components, and demonstrate its basic visualization capabilities by a healthy individual repeating the words "Hello World." A proof-of-concept prototype has been successfully developed for this purpose, and will be used in future clinical studies to evaluate its potential impact on accelerating speech rehabilitation by enabling patients to speak naturally. Pattern matching algorithms to be applied to the collected data can provide patients with quantitative and objective feedback on their speech performance, unlike current methods that are mostly subjective, and may vary from one SLP to another.

  4. Speech Motor Control in Fluent and Dysfluent Speech Production of an Individual with Apraxia of Speech and Broca's Aphasia

    Science.gov (United States)

    van Lieshout, Pascal H. H. M.; Bose, Arpita; Square, Paula A.; Steele, Catriona M.

    2007-01-01

    Apraxia of speech (AOS) is typically described as a motor-speech disorder with clinically well-defined symptoms, but without a clear understanding of the underlying problems in motor control. A number of studies have compared the speech of subjects with AOS to the fluent speech of controls, but only a few have included speech movement data and if…

  5. Resourcing speech-language pathologists to work with multilingual children.

    Science.gov (United States)

    McLeod, Sharynne

    2014-06-01

    Speech-language pathologists play important roles in supporting people to be competent communicators in the languages of their communities. However, with over 7000 languages spoken throughout the world and the majority of the global population being multilingual, there is often a mismatch between the languages spoken by children and families and their speech-language pathologists. This paper provides insights into service provision for multilingual children within an English-dominant country by viewing Australia's multilingual population as a microcosm of ethnolinguistic minorities. Recent population studies of Australian pre-school children show that their most common languages other than English are: Arabic, Cantonese, Vietnamese, Italian, Mandarin, Spanish, and Greek. Although 20.2% of services by Speech Pathology Australia members are offered in languages other than English, there is a mismatch between the language of the services and the languages of children within similar geographical communities. Australian speech-language pathologists typically use informal or English-based assessments and intervention tools with multilingual children. Thus, there is a need for accessible culturally and linguistically appropriate resources for working with multilingual children. Recent international collaborations have resulted in practical strategies to support speech-language pathologists during assessment, intervention, and collaboration with families, communities, and other professionals. The International Expert Panel on Multilingual Children's Speech was assembled to prepare a position paper to address issues faced by speech-language pathologists when working with multilingual populations. The Multilingual Children's Speech website ( http://www.csu.edu.au/research/multilingual-speech ) addresses one of the aims of the position paper by providing free resources and information for speech-language pathologists about more than 45 languages. These international

  6. The Reliability of Methodological Ratings for speechBITE Using the PEDro-P Scale

    Science.gov (United States)

    Murray, Elizabeth; Power, Emma; Togher, Leanne; McCabe, Patricia; Munro, Natalie; Smith, Katherine

    2013-01-01

    Background: speechBITE (http://www.speechbite.com) is an online database established in order to help speech and language therapists gain faster access to relevant research that can be used in clinical decision-making. In addition to containing more than 3000 journal references, the database also provides methodological ratings on the PEDro-P (an…
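
    Where inter-rater agreement on such methodological ratings needs to be quantified, a chance-corrected statistic such as Cohen's kappa is a common choice. The sketch below is a minimal, self-contained implementation with hypothetical item-level ratings; it is not taken from the speechBITE study itself.

      from collections import Counter

      def cohens_kappa(rater_a, rater_b):
          """Chance-corrected agreement between two equal-length lists of ratings."""
          n = len(rater_a)
          observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
          counts_a, counts_b = Counter(rater_a), Counter(rater_b)
          categories = set(rater_a) | set(rater_b)
          expected = sum(counts_a[c] * counts_b[c] for c in categories) / n ** 2
          return (observed - expected) / (1 - expected)

      # Hypothetical yes/no scores from two raters on eleven PEDro-P items.
      rater_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0]
      rater_2 = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0]
      print(round(cohens_kappa(rater_1, rater_2), 2))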

  7. Engineering Complex Tissues

    Science.gov (United States)

    MIKOS, ANTONIOS G.; HERRING, SUSAN W.; OCHAREON, PANNEE; ELISSEEFF, JENNIFER; LU, HELEN H.; KANDEL, RITA; SCHOEN, FREDERICK J.; TONER, MEHMET; MOONEY, DAVID; ATALA, ANTHONY; VAN DYKE, MARK E.; KAPLAN, DAVID; VUNJAK-NOVAKOVIC, GORDANA

    2010-01-01

    This article summarizes the views expressed at the third session of the workshop “Tissue Engineering—The Next Generation,” which was devoted to the engineering of complex tissue structures. Antonios Mikos described the engineering of complex oral and craniofacial tissues as a “guided interplay” between biomaterial scaffolds, growth factors, and local cell populations toward the restoration of the original architecture and function of complex tissues. Susan Herring, reviewing osteogenesis and vasculogenesis, explained that the vascular arrangement precedes and dictates the architecture of the new bone, and proposed that engineering of osseous tissues might benefit from preconstruction of an appropriate vasculature. Jennifer Elisseeff explored the formation of complex tissue structures based on the example of stratified cartilage engineered using stem cells and hydrogels. Helen Lu discussed engineering of tissue interfaces, a problem critical for biological fixation of tendons and ligaments, and the development of a new generation of fixation devices. Rita Kandel discussed the challenges related to the re-creation of the cartilage-bone interface, in the context of tissue engineered joint repair. Frederick Schoen emphasized, in the context of heart valve engineering, the need for including the requirements derived from “adult biology” of tissue remodeling and establishing reliable early predictors of success or failure of tissue engineered implants. Mehmet Toner presented a review of biopreservation techniques and stressed that a new breakthrough in this field may be necessary to meet all the needs of tissue engineering. David Mooney described systems providing temporal and spatial regulation of growth factor availability, which may find utility in virtually all tissue engineering and regeneration applications, including directed in vitro and in vivo vascularization of tissues. Anthony Atala offered a clinician’s perspective for functional tissue

  8. Effects of age on electrophysiological correlates of speech processing in a dynamic cocktail-party situation

    Directory of Open Access Journals (Sweden)

    Stephan eGetzmann

    2015-09-01

    Full Text Available Successful speech perception in multi-speaker environments depends on auditory scene analysis, comprising auditory object segregation and grouping, and on focusing attention toward the speaker of interest. Changes in speaker settings (e.g., in speaker position) require object re-selection and attention re-focusing. Here, we tested the processing of changes in a realistic multi-speaker scenario in younger and older adults, employing a speech-perception task and event-related potential (ERP) measures. Sequences of short words (combinations of company names and values) were simultaneously presented via four loudspeakers at different locations, and the participants responded to the value of a target company. Voice and position of the speaker of the target information were kept constant for a variable number of trials and then changed. Relative to the pre-change level, changes caused higher error rates, and more so in older than younger adults. The ERP analysis revealed stronger fronto-central N2 and N400 components in younger adults, suggesting a more effective inhibition of concurrent speech stimuli and enhanced language processing. The difference ERPs (post-change minus pre-change) indicated a change-related N400 and late positive complex (LPC) over parietal areas in both groups. Only the older adults showed an additional frontal LPC, suggesting increased allocation of attentional resources after changes in speaker settings. In sum, changes in speaker settings are critical events for speech perception in multi-speaker environments. Especially older persons show deficits that could be based on less flexible inhibitory control and increased distraction.

  9. Participation of the classical speech areas in auditory long-term memory.

    Directory of Open Access Journals (Sweden)

    Anke Ninija Karabanov

    Full Text Available Accumulating evidence suggests that storing speech sounds requires transposing rapidly fluctuating sound waves into more easily encoded oromotor sequences. If so, then the classical speech areas in the caudalmost portion of the temporal gyrus (pSTG) and in the inferior frontal gyrus (IFG) may be critical for performing this acoustic-oromotor transposition. We tested this proposal by applying repetitive transcranial magnetic stimulation (rTMS) to each of these left-hemisphere loci, as well as to a nonspeech locus, while participants listened to pseudowords. After 5 minutes these stimuli were re-presented together with new ones in a recognition test. Compared to control-site stimulation, pSTG stimulation produced a highly significant increase in recognition error rate, without affecting reaction time. By contrast, IFG stimulation led only to a weak, non-significant, trend toward recognition memory impairment. Importantly, the impairment after pSTG stimulation was not due to interference with perception, since the same stimulation failed to affect pseudoword discrimination examined with short interstimulus intervals. Our findings suggest that pSTG is essential for transforming speech sounds into stored motor plans for reproducing the sound. Whether or not the IFG also plays a role in speech-sound recognition could not be determined from the present results.

  10. Participation of the classical speech areas in auditory long-term memory.

    Science.gov (United States)

    Karabanov, Anke Ninija; Paine, Rainer; Chao, Chi Chao; Schulze, Katrin; Scott, Brian; Hallett, Mark; Mishkin, Mortimer

    2015-01-01

    Accumulating evidence suggests that storing speech sounds requires transposing rapidly fluctuating sound waves into more easily encoded oromotor sequences. If so, then the classical speech areas in the caudalmost portion of the temporal gyrus (pSTG) and in the inferior frontal gyrus (IFG) may be critical for performing this acoustic-oromotor transposition. We tested this proposal by applying repetitive transcranial magnetic stimulation (rTMS) to each of these left-hemisphere loci, as well as to a nonspeech locus, while participants listened to pseudowords. After 5 minutes these stimuli were re-presented together with new ones in a recognition test. Compared to control-site stimulation, pSTG stimulation produced a highly significant increase in recognition error rate, without affecting reaction time. By contrast, IFG stimulation led only to a weak, non-significant, trend toward recognition memory impairment. Importantly, the impairment after pSTG stimulation was not due to interference with perception, since the same stimulation failed to affect pseudoword discrimination examined with short interstimulus intervals. Our findings suggest that pSTG is essential for transforming speech sounds into stored motor plans for reproducing the sound. Whether or not the IFG also plays a role in speech-sound recognition could not be determined from the present results.

  11. Adaptation to delayed auditory feedback induces the temporal recalibration effect in both speech perception and production.

    Science.gov (United States)

    Yamamoto, Kosuke; Kawabata, Hideaki

    2014-12-01

    We ordinarily speak fluently, even though our perceptions of our own voices are disrupted by various environmental acoustic properties. The underlying mechanism of speech is supposed to monitor the temporal relationship between speech production and the perception of auditory feedback, as suggested by a reduction in speech fluency when the speaker is exposed to delayed auditory feedback (DAF). While many studies have reported that DAF influences speech motor processing, its relationship to the temporal tuning effect on multimodal integration, or temporal recalibration, remains unclear. We investigated whether the temporal aspects of both speech perception and production change due to adaptation to the delay between the motor sensation and the auditory feedback. This is a well-used method of inducing temporal recalibration. Participants continually read texts with specific DAF times in order to adapt to the delay. Then, they judged the simultaneity between the motor sensation and the vocal feedback. We measured the rates of speech with which participants read the texts in both the exposure and re-exposure phases. We found that exposure to DAF changed both the rate of speech and the simultaneity judgment, that is, participants' speech gained fluency. Although we also found that a delay of 200 ms appeared to be most effective in decreasing the rates of speech and shifting the distribution on the simultaneity judgment, there was no correlation between these measurements. These findings suggest that both speech motor production and multimodal perception are adaptive to temporal lag but are processed in distinct ways.

  12. Neural overlap in processing music and speech.

    Science.gov (United States)

    Peretz, Isabelle; Vuvan, Dominique; Lagrois, Marie-Élaine; Armony, Jorge L

    2015-03-19

    Neural overlap in processing music and speech, as measured by the co-activation of brain regions in neuroimaging studies, may suggest that parts of the neural circuitries established for language may have been recycled during evolution for musicality, or vice versa that musicality served as a springboard for language emergence. Such a perspective has important implications for several topics of general interest besides evolutionary origins. For instance, neural overlap is an important premise for the possibility of music training to influence language acquisition and literacy. However, neural overlap in processing music and speech does not entail sharing neural circuitries. Neural separability between music and speech may occur in overlapping brain regions. In this paper, we review the evidence and outline the issues faced in interpreting such neural data, and argue that converging evidence from several methodologies is needed before neural overlap is taken as evidence of sharing. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  13. How musical expertise shapes speech perception: evidence from auditory classification images.

    Science.gov (United States)

    Varnet, Léo; Wang, Tianyun; Peter, Chloe; Meunier, Fanny; Hoen, Michel

    2015-09-24

    It is now well established that extensive musical training percolates to higher levels of cognition, such as speech processing. However, the lack of a precise technique to investigate the specific listening strategy involved in speech comprehension has made it difficult to determine how musicians' higher performance in non-speech tasks contributes to their enhanced speech comprehension. The recently developed Auditory Classification Image approach reveals the precise time-frequency regions used by participants when performing phonemic categorizations in noise. Here we used this technique on 19 non-musicians and 19 professional musicians. We found that both groups used very similar listening strategies, but the musicians relied more heavily on the two main acoustic cues, at the first formant onset and at the onsets of the second and third formants. Additionally, they responded more consistently to stimuli. These observations provide a direct visualization of auditory plasticity resulting from extensive musical training and shed light on the level of functional transfer between auditory processing and speech perception.
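
    The general logic of a classification-image analysis can be illustrated by regressing trial-by-trial responses on the noise added in each time-frequency bin. The sketch below uses simulated data and penalized logistic regression as a simplified stand-in for the Auditory Classification Image method; it is not the authors' implementation.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n_trials, n_time, n_freq = 2000, 20, 16
      noise = rng.normal(size=(n_trials, n_time, n_freq))       # per-trial noise fields

      # Simulated listener whose responses depend on a single time-frequency cue.
      true_weights = np.zeros((n_time, n_freq))
      true_weights[5, 3] = 2.0                                   # e.g., an F1-onset region
      p_response = 1 / (1 + np.exp(-(noise * true_weights).sum(axis=(1, 2))))
      responses = rng.binomial(1, p_response)

      # The fitted weight map is the (penalized) classification image.
      model = LogisticRegression(C=0.1, max_iter=1000)
      model.fit(noise.reshape(n_trials, -1), responses)
      classification_image = model.coef_.reshape(n_time, n_freq)
      print(np.unravel_index(np.abs(classification_image).argmax(), classification_image.shape))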

  14. Enhancement of speech signals - with a focus on voiced speech models

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie

    This thesis deals with speech enhancement, i.e., noise reduction in speech signals. This has applications in, e.g., hearing aids and teleconference systems. We consider a signal-driven approach to speech enhancement where a model of the speech is assumed and filters are generated based on this model. The basic model used in this thesis is the harmonic model which is a commonly used model for describing the voiced part of the speech signal. We show that it can be beneficial to extend the model to take inharmonicities or the non-stationarity of speech into account. Extending the model...
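
    The harmonic model referred to above represents a voiced frame as a sum of sinusoids at integer multiples of the fundamental frequency. The sketch below fits such a model by least squares and resynthesizes the harmonic part of a noisy frame; it is a minimal illustration with an assumed, known pitch, not the thesis' filtering framework.

      import numpy as np

      def harmonic_fit(frame, fs, f0, n_harmonics):
          """Least-squares fit of cosine/sine terms at multiples of f0; returns the fitted (voiced) part."""
          n = np.arange(len(frame))
          columns = []
          for l in range(1, n_harmonics + 1):
              columns.append(np.cos(2 * np.pi * l * f0 * n / fs))
              columns.append(np.sin(2 * np.pi * l * f0 * n / fs))
          basis = np.column_stack(columns)
          coeffs, *_ = np.linalg.lstsq(basis, frame, rcond=None)
          return basis @ coeffs

      # Toy example: a noisy 150 Hz "voiced" frame sampled at 8 kHz.
      fs, f0 = 8000, 150.0
      n = np.arange(256)
      clean = np.cos(2 * np.pi * f0 * n / fs) + 0.4 * np.cos(2 * np.pi * 2 * f0 * n / fs)
      noisy = clean + 0.5 * np.random.default_rng(1).normal(size=n.size)
      enhanced = harmonic_fit(noisy, fs, f0, n_harmonics=5)
      print(np.mean((enhanced - clean) ** 2) < np.mean((noisy - clean) ** 2))   # expect True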

  15. Intelligibility for Binaural Speech with Discarded Low-SNR Speech Components.

    Science.gov (United States)

    Schoenmaker, Esther; van de Par, Steven

    2016-01-01

    Speech intelligibility in multitalker settings improves when the target speaker is spatially separated from the interfering speakers. A factor that may contribute to this improvement is the improved detectability of target-speech components due to binaural interaction in analogy to the Binaural Masking Level Difference (BMLD). This would allow listeners to hear target speech components within specific time-frequency intervals that have a negative SNR, similar to the improvement in the detectability of a tone in noise when these contain disparate interaural difference cues. To investigate whether these negative-SNR target-speech components indeed contribute to speech intelligibility, a stimulus manipulation was performed where all target components were removed when local SNRs were smaller than a certain criterion value. It can be expected that for sufficiently high criterion values target speech components will be removed that do contribute to speech intelligibility. For spatially separated speakers, assuming that a BMLD-like detection advantage contributes to intelligibility, degradation in intelligibility is expected already at criterion values below 0 dB SNR. However, for collocated speakers it is expected that higher criterion values can be applied without impairing speech intelligibility. Results show that degradation of intelligibility for separated speakers is only seen for criterion values of 0 dB and above, indicating a negligible contribution of a BMLD-like detection advantage in multitalker settings. These results show that the spatial benefit is related to a spatial separation of speech components at positive local SNRs rather than to a BMLD-like detection improvement for speech components at negative local SNRs.
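
    The stimulus manipulation described above can be approximated by computing local SNRs in the short-time Fourier domain and zeroing target bins that fall below the criterion. The sketch below does this with assumed analysis parameters and random placeholder signals; it is not the authors' exact processing chain.

      import numpy as np
      from scipy.signal import stft, istft

      def discard_low_snr_components(target, interferer, fs, criterion_db=0.0, nperseg=512):
          """Zero target STFT bins whose local SNR (target vs. interferer) is below the criterion."""
          _, _, T = stft(target, fs=fs, nperseg=nperseg)
          _, _, I = stft(interferer, fs=fs, nperseg=nperseg)
          local_snr_db = 10 * np.log10((np.abs(T) ** 2 + 1e-12) / (np.abs(I) ** 2 + 1e-12))
          T_masked = np.where(local_snr_db >= criterion_db, T, 0.0)
          _, processed = istft(T_masked, fs=fs, nperseg=nperseg)
          return processed

      # Toy usage with random signals standing in for target and interfering speech.
      fs = 16000
      rng = np.random.default_rng(0)
      target, interferer = rng.normal(size=fs), rng.normal(size=fs)
      processed_target = discard_low_snr_components(target, interferer, fs, criterion_db=0.0)
      print(processed_target.shape)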

  16. An experimental Dutch keyboard-to-speech system for the speech impaired

    NARCIS (Netherlands)

    Deliege, R.J.H.

    1989-01-01

    An experimental Dutch keyboard-to-speech system has been developed to explore the possibilities and limitations of Dutch speech synthesis in a communication aid for the speech impaired. The system uses diphones and a formant synthesizer chip for speech synthesis. Input to the system is in

  17. Speech Function and Speech Role in Carl Fredricksen's Dialogue on Up Movie

    OpenAIRE

    Rehana, Ridha; Silitonga, Sortha

    2013-01-01

    One aim of this article is to show, through a concrete example, how speech function and speech role are used in a movie. The illustrative example is taken from the dialogue of the movie Up. Central to the analysis is the dialogue of Up that contains speech functions and speech roles, i.e., statement, offer, question, command, giving, and demanding. A total of 269 dialogues performed by the actors were interpreted, and the use of speech function and speech role was identified.

  18. Experimental comparison between speech transmission index, rapid speech transmission index, and speech intelligibility index.

    Science.gov (United States)

    Larm, Petra; Hongisto, Valtteri

    2006-02-01

    During the acoustical design of, e.g., auditoria or open-plan offices, it is important to know how speech can be perceived in various parts of the room. Different objective methods have been developed to measure and predict speech intelligibility, and these have been extensively used in various spaces. In this study, two such methods were compared, the speech transmission index (STI) and the speech intelligibility index (SII). Also the simplification of the STI, the room acoustics speech transmission index (RASTI), was considered. These quantities are all based on determining an apparent speech-to-noise ratio on selected frequency bands and summing them using a specific weighting. For comparison, some data were needed on the possible differences of these methods resulting from the calculation scheme and also measuring equipment. Their prediction accuracy was also of interest. Measurements were made in a laboratory having adjustable noise level and absorption, and in a real auditorium. It was found that the measurement equipment, especially the selection of the loudspeaker, can greatly affect the accuracy of the results. The prediction accuracy of the RASTI was found acceptable, if the input values for the prediction are accurately known, even though the studied space was not ideally diffuse.
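
    The shared scheme described above (a per-band apparent speech-to-noise ratio, clipped, mapped to a 0-1 index and combined with band weights) can be sketched as follows. The band levels and weights are illustrative placeholders, not the values prescribed by the STI, RASTI or SII standards.

      import numpy as np

      def band_weighted_index(speech_levels_db, noise_levels_db, band_weights):
          """Clip per-band SNR to +/-15 dB, map to a 0-1 transmission index, and combine with weights."""
          snr = np.asarray(speech_levels_db, float) - np.asarray(noise_levels_db, float)
          transmission = (np.clip(snr, -15.0, 15.0) + 15.0) / 30.0
          weights = np.asarray(band_weights, float) / np.sum(band_weights)
          return float(np.sum(weights * transmission))

      # Hypothetical octave-band levels (125 Hz ... 8 kHz) measured in a room.
      speech_db = [58, 60, 62, 60, 55, 50, 45]
      noise_db = [52, 50, 48, 45, 44, 43, 42]
      weights = [0.06, 0.12, 0.18, 0.25, 0.20, 0.12, 0.07]    # illustrative, not standardized
      print(round(band_weighted_index(speech_db, noise_db, weights), 2))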

  19. Population Health in Pediatric Speech and Language Disorders: Available Data Sources and a Research Agenda for the Field.

    Science.gov (United States)

    Raghavan, Ramesh; Camarata, Stephen; White, Karl; Barbaresi, William; Parish, Susan; Krahn, Gloria

    2018-05-17

    The aim of the study was to provide an overview of population science as applied to speech and language disorders, illustrate data sources, and advance a research agenda on the epidemiology of these conditions. Computer-aided database searches were performed to identify key national surveys and other sources of data necessary to establish the incidence, prevalence, and course and outcome of speech and language disorders. This article also summarizes a research agenda that could enhance our understanding of the epidemiology of these disorders. Although the data yielded estimates of prevalence and incidence for speech and language disorders, existing sources of data are inadequate to establish reliable rates of incidence, prevalence, and outcomes for speech and language disorders at the population level. Greater support for inclusion of speech and language disorder-relevant questions is necessary in national health surveys to build the population science in the field.

  20. The Application of Tissue Engineering Procedures to Repair the Larynx

    Science.gov (United States)

    Ringel, Robert L.; Kahane, Joel C.; Hillsamer, Peter J.; Lee, Annie S.; Badylak, Stephen F.

    2006-01-01

    The field of tissue engineering/regenerative medicine combines the quantitative principles of engineering with the principles of the life sciences toward the goal of reconstituting structurally and functionally normal tissues and organs. There has been relatively little application of tissue engineering efforts toward the organs of speech, voice,…

  1. Robust Speech/Non-Speech Classification in Heterogeneous Multimedia Content

    NARCIS (Netherlands)

    Huijbregts, M.A.H.; de Jong, Franciska M.G.

    In this paper we present a speech/non-speech classification method that allows high quality classification without the need to know in advance what kinds of audible non-speech events are present in an audio recording and that does not require a single parameter to be tuned on in-domain data. Because

  2. Speech and language development in 2-year-old children with cerebral palsy.

    Science.gov (United States)

    Hustad, Katherine C; Allison, Kristen; McFadd, Emily; Riehle, Katherine

    2014-06-01

    We examined early speech and language development in children who had cerebral palsy. Questions addressed whether children could be classified into early profile groups on the basis of speech and language skills and whether there were differences on selected speech and language measures among groups. Speech and language assessments were completed on 27 children with CP who were between the ages of 24 and 30 months (mean age 27.1 months; SD 1.8). We examined several measures of expressive and receptive language, along with speech intelligibility. Two-step cluster analysis was used to identify homogeneous groups of children based on their performance on the seven dependent variables characterizing speech and language performance. Three groups of children identified were those not yet talking (44% of the sample); those whose talking abilities appeared to be emerging (41% of the sample); and those who were established talkers (15% of the sample). Group differences were evident on all variables except receptive language skills. 85% of 2-year-old children with CP in this study had clinical speech and/or language delays relative to age expectations. Findings suggest that children with CP should receive speech and language assessment and treatment at or before 2 years of age.
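
    The profile-grouping step described above can be illustrated with a generic clustering sketch, here using k-means on standardized measures as a stand-in for the two-step cluster analysis reported in the study; the data are fabricated placeholders, not the study's measurements.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(3)
      # Seven hypothetical speech/language measures (columns) for 27 children (rows).
      measures = np.vstack([
          rng.normal(loc=-1.0, scale=0.3, size=(12, 7)),   # e.g., not yet talking
          rng.normal(loc=0.0, scale=0.3, size=(11, 7)),    # e.g., emerging talkers
          rng.normal(loc=1.2, scale=0.3, size=(4, 7)),     # e.g., established talkers
      ])

      z_scores = StandardScaler().fit_transform(measures)
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(z_scores)
      print(np.bincount(labels))                           # number of children per profile group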

  3. Evaluation of speech errors in Putonghua speakers with cleft palate: a critical review of methodology issues.

    Science.gov (United States)

    Jiang, Chenghui; Whitehill, Tara L

    2014-04-01

    Speech errors associated with cleft palate are well established for English and several other Indo-European languages. Few articles describing the speech of Putonghua (standard Mandarin Chinese) speakers with cleft palate have been published in English language journals. Although methodological guidelines have been published for the perceptual speech evaluation of individuals with cleft palate, there has been no critical review of methodological issues in studies of Putonghua speakers with cleft palate. A literature search was conducted to identify relevant studies published over the past 30 years in Chinese language journals. Only studies incorporating perceptual analysis of speech were included. Thirty-seven articles which met inclusion criteria were analyzed and coded on a number of methodological variables. Reliability was established by having all variables recoded for all studies. This critical review identified many methodological issues. These design flaws make it difficult to draw reliable conclusions about characteristic speech errors in this group of speakers. Specific recommendations are made to improve the reliability and validity of future studies, as well to facilitate cross-center comparisons.

  4. Intelligibility of speech of children with speech and sound disorders

    OpenAIRE

    Ivetac, Tina

    2014-01-01

    The purpose of this study is to examine speech intelligibility of children with primary speech and sound disorders aged 3 to 6 years in everyday life. The research problem is based on the degree to which parents or guardians, immediate family members (sister, brother, grandparents), extended family members (aunt, uncle, cousin), child's friends, other acquaintances, child's teachers and strangers understand the speech of children with speech sound disorders. We examined whether the level ...

  5. Constraints on the Transfer of Perceptual Learning in Accented Speech

    Science.gov (United States)

    Eisner, Frank; Melinger, Alissa; Weber, Andrea

    2013-01-01

    The perception of speech sounds can be re-tuned through a mechanism of lexically driven perceptual learning after exposure to instances of atypical speech production. This study asked whether this re-tuning is sensitive to the position of the atypical sound within the word. We investigated perceptual learning using English voiced stop consonants, which are commonly devoiced in word-final position by Dutch learners of English. After exposure to a Dutch learner’s productions of devoiced stops in word-final position (but not in any other positions), British English (BE) listeners showed evidence of perceptual learning in a subsequent cross-modal priming task, where auditory primes with devoiced final stops (e.g., “seed”, pronounced [si:th]), facilitated recognition of visual targets with voiced final stops (e.g., SEED). In Experiment 1, this learning effect generalized to test pairs where the critical contrast was in word-initial position, e.g., auditory primes such as “town” facilitated recognition of visual targets like DOWN. Control listeners, who had not heard any stops by the speaker during exposure, showed no learning effects. The generalization to word-initial position did not occur when participants had also heard correctly voiced, word-initial stops during exposure (Experiment 2), and when the speaker was a native BE speaker who mimicked the word-final devoicing (Experiment 3). The readiness of the perceptual system to generalize a previously learned adjustment to other positions within the word thus appears to be modulated by distributional properties of the speech input, as well as by the perceived sociophonetic characteristics of the speaker. The results suggest that the transfer of pre-lexical perceptual adjustments that occur through lexically driven learning can be affected by a combination of acoustic, phonological, and sociophonetic factors. PMID:23554598

  6. Speech disorders - children

    Science.gov (United States)

    ... disorder; Voice disorders; Vocal disorders; Disfluency; Communication disorder - speech disorder; Speech disorder - stuttering ... Evaluation tools that can help identify and diagnose speech disorders include the Denver Articulation Screening Examination and the Goldman-Fristoe Test of ...

  7. Neurophysiology of speech differences in childhood apraxia of speech.

    Science.gov (United States)

    Preston, Jonathan L; Molfese, Peter J; Gumkowski, Nina; Sorcinelli, Andrea; Harwood, Vanessa; Irwin, Julia R; Landi, Nicole

    2014-01-01

    Event-related potentials (ERPs) were recorded during a picture naming task of simple and complex words in children with typical speech and with childhood apraxia of speech (CAS). Results reveal reduced amplitude prior to speaking complex (multisyllabic) words relative to simple (monosyllabic) words for the CAS group over the right hemisphere during a time window thought to reflect phonological encoding of word forms. Group differences were also observed prior to production of spoken tokens regardless of word complexity during a time window just prior to speech onset (thought to reflect motor planning/programming). Results suggest differences in pre-speech neurolinguistic processes.

  8. Live Speech Driven Head-and-Eye Motion Generators.

    Science.gov (United States)

    Le, Binh H; Ma, Xiaohan; Deng, Zhigang

    2012-11-01

    This paper describes a fully automated framework to generate realistic head motion, eye gaze, and eyelid motion simultaneously based on live (or recorded) speech input. Its central idea is to learn separate yet interrelated statistical models for each component (head motion, gaze, or eyelid motion) from a prerecorded facial motion data set: 1) Gaussian Mixture Models and a gradient descent optimization algorithm are employed to generate head motion from speech features; 2) a Nonlinear Dynamic Canonical Correlation Analysis model is used to synthesize eye gaze from head motion and speech features; and 3) nonnegative linear regression is used to model voluntary eyelid motion, and a log-normal distribution is used to describe involuntary eye blinks. Several user studies are conducted to evaluate the effectiveness of the proposed speech-driven head and eye motion generator using the well-established paired comparison methodology. Our evaluation results clearly show that this approach can significantly outperform the state-of-the-art head and eye motion generation algorithms. In addition, a novel mocap+video hybrid data acquisition technique is introduced to record high-fidelity head movement, eye gaze, and eyelid motion simultaneously.
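
    The first component of the framework, generating head motion from speech features with a Gaussian mixture model, can be sketched as a GMM regression in which the head pose is predicted as the conditional mean given the speech features. The code below uses synthetic data and omits the gradient-descent refinement described in the paper; it is an assumed simplification, not the authors' implementation.

      import numpy as np
      from scipy.stats import multivariate_normal
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      n_frames, dim_speech, dim_head = 3000, 4, 3
      speech = rng.normal(size=(n_frames, dim_speech))                  # placeholder speech features
      head = np.tanh(speech[:, :dim_head]) + 0.1 * rng.normal(size=(n_frames, dim_head))

      gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
      gmm.fit(np.hstack([speech, head]))

      def predict_head_pose(x):
          """Conditional mean E[head | speech = x] under the joint GMM."""
          dx = dim_speech
          resp, cond_means = [], []
          for k in range(gmm.n_components):
              mu_x, mu_y = gmm.means_[k, :dx], gmm.means_[k, dx:]
              Sxx = gmm.covariances_[k, :dx, :dx]
              Syx = gmm.covariances_[k, dx:, :dx]
              resp.append(gmm.weights_[k] * multivariate_normal.pdf(x, mu_x, Sxx))
              cond_means.append(mu_y + Syx @ np.linalg.solve(Sxx, x - mu_x))
          resp = np.asarray(resp) / np.sum(resp)
          return np.sum(resp[:, None] * np.asarray(cond_means), axis=0)

      print(predict_head_pose(speech[0]), head[0])        # predicted vs. observed head pose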

  9. Visual speech alters the discrimination and identification of non-intact auditory speech in children with hearing loss.

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F; McAlpine, Rachel P; Abdi, Hervé

    2017-03-01

    Performance in the audiovisual mode showed more same responses for the intact vs. non-intact different pairs (e.g., Baa:/-B/aa) and more intact onset responses for nonword repetition (Baz for /-B/az). Thus visual speech altered both discrimination and identification in the CHL, to a large extent for the /B/ onsets but only minimally for the /G/ onsets. The CHL identified the stimuli similarly to the CNH but did not discriminate the stimuli similarly. A bias-free measure of the children's discrimination skills (i.e., d' analysis) revealed that the CHL had greater difficulty discriminating intact from non-intact speech in both modes. As the degree of HL worsened, the ability to discriminate the intact vs. non-intact onsets in the auditory mode worsened. Discrimination ability in CHL significantly predicted their identification of the onsets, even after variation due to the other variables was controlled. These results clearly established that visual speech can fill in non-intact auditory speech, and this effect, in turn, made the non-intact onsets more difficult to discriminate from intact speech and more likely to be perceived as intact. Such results 1) demonstrate the value of visual speech at multiple levels of linguistic processing and 2) support intervention programs that view visual speech as a powerful asset for developing spoken language in CHL. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
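
    The bias-free discrimination measure mentioned above (d') is conventionally computed from hit and false-alarm rates as the difference of their z-transforms. The sketch below uses hypothetical counts and a simple correction for extreme proportions; it is a generic illustration, not the study's analysis script.

      from scipy.stats import norm

      def d_prime(hits, misses, false_alarms, correct_rejections):
          """d' from hit and false-alarm rates, with a small correction to avoid rates of 0 or 1."""
          hit_rate = (hits + 0.5) / (hits + misses + 1.0)
          fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
          return norm.ppf(hit_rate) - norm.ppf(fa_rate)

      # Hypothetical counts: "different" pairs detected vs. "same" pairs falsely called different.
      print(round(d_prime(hits=32, misses=8, false_alarms=10, correct_rejections=30), 2))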

  10. Visual Speech Alters the Discrimination and Identification of Non-Intact Auditory Speech in Children with Hearing Loss

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Hervé

    2017-01-01

    Performance in the audiovisual mode showed more same responses for the intact vs. non-intact different pairs (e.g., Baa:/–B/aa) and more intact onset responses for nonword repetition (Baz for/–B/az). Thus visual speech altered both discrimination and identification in the CHL—to a large extent for the /B/ onsets but only minimally for the /G/ onsets. The CHL identified the stimuli similarly to the CNH but did not discriminate the stimuli similarly. A bias-free measure of the children’s discrimination skills (i.e., d’ analysis) revealed that the CHL had greater difficulty discriminating intact from non-intact speech in both modes. As the degree of HL worsened, the ability to discriminate the intact vs. non-intact onsets in the auditory mode worsened. Discrimination ability in CHL significantly predicted their identification of the onsets—even after variation due to the other variables was controlled. Conclusions These results clearly established that visual speech can fill in non-intact auditory speech, and this effect, in turn, made the non-intact onsets more difficult to discriminate from intact speech and more likely to be perceived as intact. Such results 1) demonstrate the value of visual speech at multiple levels of linguistic processing and 2) support intervention programs that view visual speech as a powerful asset for developing spoken language in CHL. PMID:28167003

  11. Engineering complex orthopaedic tissues via strategic biomimicry.

    Science.gov (United States)

    Qu, Dovina; Mosher, Christopher Z; Boushell, Margaret K; Lu, Helen H

    2015-03-01

    The primary current challenge in regenerative engineering resides in the simultaneous formation of more than one type of tissue, as well as their functional assembly into complex tissues or organ systems. Tissue-tissue synchrony is especially important in the musculoskeletal system, wherein overall organ function is enabled by the seamless integration of bone with soft tissues such as ligament, tendon, or cartilage, as well as the integration of muscle with tendon. Therefore, in lieu of a traditional single-tissue system (e.g., bone, ligament), composite tissue scaffold designs for the regeneration of functional connective tissue units (e.g., bone-ligament-bone) are being actively investigated. Closely related is the effort to re-establish tissue-tissue interfaces, which is essential for joining these tissue building blocks and facilitating host integration. Much of the research at the forefront of the field has centered on bioinspired stratified or gradient scaffold designs which aim to recapitulate the structural and compositional inhomogeneity inherent across distinct tissue regions. As such, given the complexity of these musculoskeletal tissue units, the key question is how to identify the most relevant parameters for recapitulating the native structure-function relationships in the scaffold design. Therefore, the focus of this review, in addition to presenting the state-of-the-art in complex scaffold design, is to explore how strategic biomimicry can be applied in engineering tissue connectivity. The objective of strategic biomimicry is to avoid over-engineering by establishing what needs to be learned from nature and defining the essential matrix characteristics that must be reproduced in scaffold design. Application of this engineering strategy for the regeneration of the most common musculoskeletal tissue units (e.g., bone-ligament-bone, muscle-tendon-bone, cartilage-bone) will be discussed in this review. It is anticipated that these exciting efforts will

  12. Engineering Complex Orthopaedic Tissues via Strategic Biomimicry

    Science.gov (United States)

    Qu, Dovina; Mosher, Christopher Z.; Boushell, Margaret K.; Lu, Helen H.

    2014-01-01

    The primary current challenge in regenerative engineering resides in the simultaneous formation of more than one type of tissue, as well as their functional assembly into complex tissues or organ systems. Tissue-tissue synchrony is especially important in the musculoskeletal system, whereby overall organ function is enabled by the seamless integration of bone with soft tissues such as ligament, tendon, or cartilage, as well as the integration of muscle with tendon. Therefore, in lieu of a traditional single-tissue system (e.g. bone, ligament), composite tissue scaffold designs for the regeneration of functional connective tissue units (e.g. bone-ligament-bone) are being actively investigated. Closely related is the effort to re-establish tissue-tissue interfaces, which is essential for joining these tissue building blocks and facilitating host integration. Much of the research at the forefront of the field has centered on bioinspired stratified or gradient scaffold designs which aim to recapitulate the structural and compositional inhomogeneity inherent across distinct tissue regions. As such, given the complexity of these musculoskeletal tissue units, the key question is how to identify the most relevant parameters for recapitulating the native structure-function relationships in the scaffold design. Therefore, the focus of this review, in addition to presenting the state-of-the-art in complex scaffold design, is to explore how strategic biomimicry can be applied in engineering tissue connectivity. The objective of strategic biomimicry is to avoid over-engineering by establishing what needs to be learned from nature and defining the essential matrix characteristics that must be reproduced in scaffold design. Application of this engineering strategy for the regeneration of the most common musculoskeletal tissue units (e.g. bone-ligament-bone, muscle-tendon-bone, cartilage-bone) will be discussed in this review. It is anticipated that these exciting efforts will

  13. Re-evaluation of a novel approach for quantitative myocardial oedema detection by analysing tissue inhomogeneity in acute myocarditis using T2-mapping.

    Science.gov (United States)

    Baeßler, Bettina; Schaarschmidt, Frank; Treutlein, Melanie; Stehning, Christian; Schnackenburg, Bernhard; Michels, Guido; Maintz, David; Bunck, Alexander C

    2017-12-01

    To re-evaluate a recently suggested approach to quantifying myocardial oedema and increased tissue inhomogeneity in myocarditis by T2-mapping. Cardiac magnetic resonance data of 99 patients with myocarditis were retrospectively analysed. Thirty healthy volunteers served as controls. T2-mapping data were acquired at 1.5 T using a gradient-spin-echo T2-mapping sequence. T2-maps were segmented according to the 16-segment AHA model. Segmental T2-values, segmental pixel-standard deviation (SD) and the derived parameters maxT2, maxSD and madSD were analysed and compared to the established Lake Louise criteria (LLC). A re-estimation of logistic regression models revealed that all models containing an SD-parameter were superior to any model containing global myocardial T2. Using a combined cut-off of 1.8 ms for madSD + 68 ms for maxT2 resulted in a diagnostic sensitivity of 75% and specificity of 80% and showed a similar diagnostic performance compared to LLC in receiver-operating-curve analyses. Combining madSD, maxT2 and late gadolinium enhancement (LGE) in a model resulted in a superior diagnostic performance compared to LLC (sensitivity 93%, specificity 83%). The results show that the novel T2-mapping-derived parameters exhibit an additional diagnostic value over LGE with the inherent potential to overcome the current limitations of T2-mapping. • A novel quantitative approach to myocardial oedema imaging in myocarditis was re-evaluated. • The T2-mapping-derived parameters maxT2 and madSD were compared to traditional Lake-Louise criteria. • Using maxT2 and madSD with dedicated cut-offs performs similarly to Lake-Louise criteria. • Adding maxT2 and madSD to LGE results in further increased diagnostic performance. • This novel approach has the potential to overcome the limitations of T2-mapping.
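
    The segment-based parameters and the combined cut-off described above can be sketched as follows. The segmental values are fabricated, and madSD is computed here as the median absolute deviation of the segmental pixel-SD values, which is an assumption; the original paper should be consulted for the exact definition.

      import numpy as np

      def t2_mapping_parameters(segment_t2_ms, segment_pixel_sd_ms):
          """maxT2, maxSD and madSD from the 16 segmental T2 values and pixel SDs."""
          seg_t2 = np.asarray(segment_t2_ms, float)
          seg_sd = np.asarray(segment_pixel_sd_ms, float)
          max_t2 = float(seg_t2.max())
          max_sd = float(seg_sd.max())
          mad_sd = float(np.median(np.abs(seg_sd - np.median(seg_sd))))   # assumed definition
          return max_t2, max_sd, mad_sd

      def myocarditis_suspected(max_t2, mad_sd, t2_cutoff_ms=68.0, madsd_cutoff_ms=1.8):
          """Combined cut-off from the abstract: madSD above 1.8 ms together with maxT2 above 68 ms."""
          return (mad_sd > madsd_cutoff_ms) and (max_t2 > t2_cutoff_ms)

      # Fabricated segmental values for one patient (16 AHA segments).
      seg_t2 = [61, 63, 62, 70, 69, 64, 62, 63, 65, 71, 66, 62, 63, 64, 67, 65]
      seg_sd = [4.1, 4.3, 4.0, 7.8, 7.5, 4.4, 4.2, 4.1, 4.6, 8.0, 5.0, 4.2, 4.3, 4.4, 5.2, 4.8]
      max_t2, max_sd, mad_sd = t2_mapping_parameters(seg_t2, seg_sd)
      print(max_t2, round(mad_sd, 2), myocarditis_suspected(max_t2, mad_sd))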

  14. Targeted photodynamic therapy of established soft-tissue infections in mice

    Science.gov (United States)

    Gad, Faten; Zahra, Touqir; Hasan, Tayyaba; Hamblin, Michael R.

    2004-06-01

    The worldwide rise in antibiotic resistance necessitates the development of novel antimicrobial strategies. Although many workers have used photodynamic therapy (PDT) to kill bacteria in vitro, the use of this approach has seldom been reported in vivo in animal models of infection. We have previously described the first use of PDT to treat excisional wound infections by Gram-negative bacteria in living mice. However, these infected wound models used a short time after infection (30 min) before PDT. We now report on the use of PDT to treat an established soft-tissue infection in mice. We used Staphylococcus aureus stably transformed with a Photorhabdus luminescens lux operon (luxABCDE) that was genetically modified to be functional in Gram-positive bacteria. These engineered bacteria emitted bioluminescence, allowing the progress of the infection to be monitored in both space and time with a low-light imaging charge-coupled device (CCD) camera. One million cells were injected into one or both thigh muscles of mice that had previously been rendered neutropenic by cyclophosphamide administration. Twenty-four hours later the bacteria had multiplied more than one hundred-fold, and poly-L-lysine chlorin(e6) conjugate or free chlorin(e6) was injected into one area of infected muscle and imaged with the CCD camera. Thirty minutes later red light from a diode laser was delivered as a surface spot or by interstitial fiber into the infection. There was a light-dose dependent loss of bioluminescence ... to resistant soft-tissue infections.

  15. Subsidence Reversal in a Re-established Wetland in the Sacramento-San Joaquin Delta, California, USA

    Directory of Open Access Journals (Sweden)

    Robin L. Miller

    2008-10-01

    Full Text Available The stability of levees in the Sacramento-San Joaquin Delta is threatened by continued subsidence of Delta peat islands. Up to 6 meters of land-surface elevation has been lost in the 150 years since Delta marshes were leveed and drained, primarily from oxidation of peat soils. Flooding subsided peat islands halts peat oxidation by creating anoxic soils, but net accumulation of new material in restored wetlands is required to recover land-surface elevations. We investigated the subsidence reversal potential of two 3 hectare, permanently flooded, impounded wetlands re-established on a deeply subsided field on Twitchell Island. The shallower wetland (design water depth 25 cm) was almost completely colonized by dense emergent marsh vegetation within two years, whereas the deeper wetland (design water depth 55 cm), which developed spatially variable depths as a result of heterogeneous colonization by emergent vegetation, still had some areas remaining as open water after nine years. Changes in land-surface elevation were quantified using repeated sedimentation-erosion table measurements. New material accumulating in the wetlands was sampled by coring. Land-surface elevations increased by an average of 4 cm/yr in both wetlands from 1997 to 2006; however, the rates at different sites in the wetlands ranged from -0.5 to +9.2 cm/yr. Open water areas of the deeper wetland without emergent vegetation had the lowest rates of land-surface elevation gain. The greatest rates occurred in areas of the deeper wetland most isolated from the river water inlets, with dense stands of emergent marsh vegetation (tules and cattails). Vegetated areas of the deeper wetland in the transition zones between open water and mature emergent stands had intermediate rates of land-surface gain, as did the entire shallower wetland. These results suggest that the dominant component contributing to land-surface elevation gain in these wetlands was accumulation of organic matter, rather ...

  16. Listeners Experience Linguistic Masking Release in Noise-Vocoded Speech-in-Speech Recognition

    Science.gov (United States)

    Viswanathan, Navin; Kokkinakis, Kostas; Williams, Brittany T.

    2018-01-01

    Purpose: The purpose of this study was to evaluate whether listeners with normal hearing perceiving noise-vocoded speech-in-speech demonstrate better intelligibility of target speech when the background speech was mismatched in language (linguistic release from masking [LRM]) and/or location (spatial release from masking [SRM]) relative to the…

  17. Perspectives of Speech-Language Pathologists on the Use of Telepractice in Schools: Quantitative Survey Results

    Directory of Open Access Journals (Sweden)

    Janice K. Tucker

    2012-12-01

    Full Text Available This research surveyed 170 school-based speech-language pathologists (SLPs) in one northeastern state, with only 1.8% reporting telepractice use in school settings. These results were consistent with two ASHA surveys (2002; 2011) that reported limited use of telepractice for school-based speech-language pathology. In the present study, willingness to use telepractice was inversely related to age, perhaps because younger members of the profession are more accustomed to using technology. Overall, respondents were concerned about the validity of assessments administered via telepractice; whether clinicians can adequately establish rapport with clients via telepractice; and whether therapy conducted via telepractice can be as effective as in-person speech-language therapy. Most respondents indicated the need to establish procedures and guidelines for school-based telepractice programs.

  18. Speech Perception and Short-Term Memory Deficits in Persistent Developmental Speech Disorder

    Science.gov (United States)

    Kenney, Mary Kay; Barac-Cikoja, Dragana; Finnegan, Kimberly; Jeffries, Neal; Ludlow, Christy L.

    2006-01-01

    Children with developmental speech disorders may have additional deficits in speech perception and/or short-term memory. To determine whether these are only transient developmental delays that can accompany the disorder in childhood or persist as part of the speech disorder, adults with a persistent familial speech disorder were tested on speech…

  19. Automatic speech recognition (ASR) based approach for speech therapy of aphasic patients: A review

    Science.gov (United States)

    Jamal, Norezmi; Shanta, Shahnoor; Mahmud, Farhanahani; Sha'abani, MNAH

    2017-09-01

    This paper reviews the state of the art in automatic speech recognition (ASR) based approaches for speech therapy of aphasic patients. Aphasia is a condition in which the affected person suffers from a speech and language disorder resulting from a stroke or brain injury. Since there is a growing body of evidence indicating the possibility of improving the symptoms at an early stage, ASR-based solutions are increasingly being researched for speech and language therapy. ASR is a technology that converts human speech into transcript text by matching the input against the system's library of models. This is particularly useful in speech rehabilitation therapies because it provides accurate, real-time evaluation of speech input from an individual with a speech disorder. ASR-based approaches to speech therapy recognize the speech input from the aphasic patient and provide real-time feedback on their mistakes. However, the accuracy of ASR depends on many factors, such as phoneme recognition, speech continuity, and speaker and environmental differences, as well as the depth of our knowledge of human language understanding. Hence, the review examines recent developments in ASR technologies and their performance for individuals with speech and language disorders.
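
    The feedback step of such an ASR-based therapy loop can be illustrated by aligning the recognized transcript with the target prompt and reporting a word error rate. The sketch below assumes the transcript has already been produced by some ASR engine and uses a hypothetical prompt; it is a generic illustration, not a specific system from the review.

      def word_error_rate(target, recognised):
          """Word-level Levenshtein distance between prompt and transcript, normalised by prompt length."""
          ref, hyp = target.lower().split(), recognised.lower().split()
          d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
          for i in range(len(ref) + 1):
              d[i][0] = i
          for j in range(len(hyp) + 1):
              d[0][j] = j
          for i in range(1, len(ref) + 1):
              for j in range(1, len(hyp) + 1):
                  cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                  d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
          return d[len(ref)][len(hyp)] / max(len(ref), 1)

      target_prompt = "the cat sat on the mat"
      asr_output = "the cat sat on that"                 # hypothetical recogniser output
      wer = word_error_rate(target_prompt, asr_output)
      print(f"word error rate: {wer:.2f} -> {'try again' if wer > 0.2 else 'well done'}")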

  20. The impact of orthographic knowledge on speech processing

    Directory of Open Access Journals (Sweden)

    Régine Kolinsky

    2012-12-01

    Full Text Available http://dx.doi.org/10.5007/2175-8026.2012n63p161 The levels-of-processing approach to speech processing (cf. Kolinsky, 1998) distinguishes three levels, from bottom to top: perception, recognition (which involves activation of stored knowledge) and formal explicit analysis or comparison (which belongs to metalinguistic ability), and assumes that only the former is immune to literacy-dependent knowledge. In this contribution, we first briefly review the main ideas and evidence supporting the role of learning to read in the alphabetic system in the development of conscious representations of phonemes, and we contrast conscious and unconscious representations of phonemes. Then, we examine in detail recent compelling behavioral and neuroscientific evidence for the involvement of orthographic representations in the recognition of spoken words. We conclude by arguing that there is a strong need for theoretical re-elaboration of models of speech recognition, which typically have ignored the influence of reading acquisition.

  1. Plant community, primary productivity, and environmental conditions following wetland re-establishment in the Sacramento-San Joaquin Delta, California

    Science.gov (United States)

    Miller, R.L.; Fujii, R.

    2010-01-01

    Wetland restoration can mitigate aerobic decomposition of subsided organic soils, as well as re-establish conditions favorable for carbon storage. Rates of carbon storage result from the balance of inputs and losses, both of which are affected by wetland hydrology. We followed the effect of water depth (25 and 55 cm) on the plant community, primary production, and environmental conditions in two re-established wetlands in the Sacramento-San Joaquin River Delta, California, for 9 years after flooding to determine how relatively small differences in water depth affect carbon storage rates over time. To estimate annual carbon inputs, plant species cover, standing above- and below-ground plant biomass, and annual biomass turnover rates were measured, and allometric biomass models for Schoenoplectus (Scirpus) acutus and Typha spp., the emergent marsh dominants, were developed. As the wetlands developed, environmental factors, including water temperature, depth, and pH, were measured. Emergent marsh vegetation colonized the shallow wetland more rapidly than the deeper wetland. This is important to potential carbon storage because emergent marsh vegetation is more productive, and less labile, than submerged and floating vegetation. Primary production of emergent marsh vegetation ranged from 1.3 to 3.2 kg of carbon per square meter annually, and mid-season standing live biomass represented about half of the annual primary production. Changes in species composition occurred in both submerged and emergent plant communities as the wetlands matured. Water depth, temperature, and pH were lower in areas with emergent marsh vegetation compared to submerged vegetation, all of which, in turn, can affect carbon cycling and storage rates. © Springer Science+Business Media B.V. 2009.
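
    The study develops allometric biomass models for the dominant emergent species, but the abstract does not give their form. Purely as a hedged sketch, the snippet below assumes a power-law relation between a size measure (stem height) and dry biomass, fitted in log-log space, to show how such a model is commonly built and then used to scale non-destructive field measurements; all numbers are invented.

    ```python
    import numpy as np

    # Hypothetical calibration harvest: stem height (cm) and dry biomass (g).
    height = np.array([60, 90, 120, 150, 180, 210], dtype=float)
    biomass = np.array([4.1, 11.0, 22.5, 40.2, 63.0, 92.4], dtype=float)

    # Assumed model form: biomass = a * height**b, fitted by least squares on logs.
    b, log_a = np.polyfit(np.log(height), np.log(biomass), 1)
    a = np.exp(log_a)

    def predict_biomass(h_cm):
        """Predicted dry biomass (g) for stems of height h_cm under the fitted model."""
        return a * np.asarray(h_cm, dtype=float) ** b

    # Scale plot-level standing biomass from non-destructive height measurements.
    plot_heights = [75, 130, 160, 190]
    print(round(float(predict_biomass(plot_heights).sum()), 1), "g per plot (illustrative only)")
    ```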

  2. Duration and speed of speech events: A selection of methods

    Directory of Open Access Journals (Sweden)

    Gibbon Dafydd

    2015-07-01

    Full Text Available The study of speech timing, i.e. the duration and speed or tempo of speech events, has increased in importance over the past twenty years, in particular in connection with increased demands for accuracy, intelligibility and naturalness in speech technology, with applications in language teaching and testing, and with the study of speech timing patterns in language typology. However, the methods used in such studies are very diverse, and so far there is no accessible overview of these methods. Since the field is too broad for us to provide an exhaustive account, we have made two choices: first, to provide a framework of paradigmatic (classificatory), syntagmatic (compositional) and functional (discourse-oriented) dimensions for duration analysis; and second, to provide worked examples of a selection of methods associated primarily with these three dimensions. Some of the methods covered are established state-of-the-art approaches (e.g. the paradigmatic Classification and Regression Tree (CART) analysis), while others are discussed in a critical light (e.g. so-called 'rhythm metrics'). A set of syntagmatic approaches applies to the tokenisation and tree parsing of duration hierarchies, based on speech annotations, and a functional approach describes duration distributions with sociolinguistic variables. Several of the methods are supported by a new web-based software tool for analysing annotated speech data, the Time Group Analyser.
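
    Among the approaches the paper treats critically are the so-called 'rhythm metrics'. As a small worked example of one widely used metric (not the paper's Time Group Analyser tool), the sketch below computes the normalised Pairwise Variability Index (nPVI) over a sequence of interval durations taken from a hypothetical annotation.

    ```python
    def npvi(durations):
        """Normalised Pairwise Variability Index over successive interval durations (ms)."""
        pairs = zip(durations[:-1], durations[1:])
        terms = [abs(d1 - d2) / ((d1 + d2) / 2.0) for d1, d2 in pairs]
        return 100.0 * sum(terms) / len(terms)

    # Hypothetical vocalic interval durations (ms) from an annotated utterance.
    vowel_durations = [85, 120, 60, 150, 70, 95]
    print(f"nPVI = {npvi(vowel_durations):.1f}")  # larger values indicate more duration alternation
    ```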

  3. Speech and Language Delay

    Science.gov (United States)

    ... Speech and Language Delay ... What is a speech and language delay? A speech and language delay ...

  4. Plasticity in the Human Speech Motor System Drives Changes in Speech Perception

    Science.gov (United States)

    Lametti, Daniel R.; Rochet-Capellan, Amélie; Neufeld, Emily; Shiller, Douglas M.

    2014-01-01

    Recent studies of human speech motor learning suggest that learning is accompanied by changes in auditory perception. But what drives the perceptual change? Is it a consequence of changes in the motor system? Or is it a result of sensory inflow during learning? Here, subjects participated in a speech motor-learning task involving adaptation to altered auditory feedback and they were subsequently tested for perceptual change. In two separate experiments, involving two different auditory perceptual continua, we show that changes in the speech motor system that accompany learning drive changes in auditory speech perception. Specifically, we obtained changes in speech perception when adaptation to altered auditory feedback led to speech production that fell into the phonetic range of the speech perceptual tests. However, a similar change in perception was not observed when the auditory feedback that subjects received during learning fell into the phonetic range of the perceptual tests. This indicates that the central motor outflow associated with vocal sensorimotor adaptation drives changes to the perceptual classification of speech sounds. PMID:25080594

  5. Childhood apraxia of speech: A survey of praxis and typical speech characteristics.

    Science.gov (United States)

    Malmenholt, Ann; Lohmander, Anette; McAllister, Anita

    2017-07-01

    The purpose of this study was to investigate current knowledge of the diagnosis childhood apraxia of speech (CAS) in Sweden and compare speech characteristics and symptoms to those of earlier survey findings from mainly English speakers. In a web-based questionnaire, 178 Swedish speech-language pathologists (SLPs) anonymously answered questions about their perception of typical speech characteristics for CAS. They graded their own assessment skills and estimated clinical occurrence. The seven top speech characteristics reported as typical for children with CAS were: inconsistent speech production (85%), sequencing difficulties (71%), oro-motor deficits (63%), vowel errors (62%), voicing errors (61%), consonant cluster deletions (54%), and prosodic disturbance (53%). Motor-programming deficits, described as a lack of automatization of speech movements, were perceived by 82%. All listed characteristics were consistent with the American Speech-Language-Hearing Association (ASHA) consensus-based features, Strand's 10-point checklist, and the diagnostic model proposed by Ozanne. The mode for estimated clinical occurrence was 5%. The number of suspected cases of CAS in the clinical caseload was approximately one new patient per SLP per year. The results support and add to findings from studies of CAS in English-speaking children with similar speech characteristics regarded as typical. Possibly, these findings could contribute to cross-linguistic consensus on CAS characteristics.

  6. Establishing the soft and hard tissue area centers (centroids) for the skull and introducing a new non-anatomical cephalometric line

    International Nuclear Information System (INIS)

    AlBalkhi, Khalid M; AlShahrani, Ibrahim; AlMadi, Abdulaziz

    2008-01-01

    The purpose of this study was to demonstrate how to establish the area center (centroid) of both the soft and hard tissue outlines of the lateral cephalometric skull image, and to introduce the concept of a new non-anatomical centroid line. Lateral cephalometric radiographs, size 12 x 14 inch, of fifty-seven adult subjects were selected based on their pleasant, balanced profile, Class I skeletal and dental relationship, and absence of major dental malocclusion or malrelationship. The area centers (centroids) of both the soft and hard tissue skull outlines were established using a customized software program called the 'm-file'. Connecting the two centers introduced the concept of a new non-anatomical soft and hard tissue centroid line. (author)
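
    The paper's own computations were done in a custom program, so the following is only a generic sketch of the underlying geometry: the area centroid of a closed outline digitized as a polygon, computed with the standard shoelace formulas. The outline coordinates are invented for illustration.

    ```python
    import numpy as np

    def polygon_centroid(points):
        """Area centroid of a closed 2-D outline given as an (N, 2) sequence of vertices."""
        x, y = np.asarray(points, dtype=float).T
        x1, y1 = np.roll(x, -1), np.roll(y, -1)  # next vertex, wrapping around
        cross = x * y1 - x1 * y                  # shoelace terms
        area = cross.sum() / 2.0
        cx = ((x + x1) * cross).sum() / (6.0 * area)
        cy = ((y + y1) * cross).sum() / (6.0 * area)
        return cx, cy

    # Hypothetical digitized outlines (mm) of soft and hard tissue tracings.
    soft = [(0, 0), (120, 5), (130, 90), (60, 140), (-5, 95)]
    hard = [(10, 10), (105, 15), (112, 80), (55, 120), (8, 85)]
    soft_c, hard_c = polygon_centroid(soft), polygon_centroid(hard)
    print("soft centroid:", soft_c, "hard centroid:", hard_c)
    # The study's centroid line is the segment joining these two points.
    ```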

  7. Speech-specific audiovisual perception affects identification but not detection of speech

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual as evidenced by the McGurk effect in which watching incongruent articulatory mouth movements can change the phonetic auditory speech percept. This type of audiovisual integration may be specific to speech or be applied to all stimuli in general. To investigate...... of audiovisual integration specific to speech perception. However, the results of Tuomainen et al. might have been influenced by another effect. When observers were naïve, they had little motivation to look at the face. When informed, they knew that the face was relevant for the task and this could increase...... visual detection task. In our first experiment, observers presented with congruent and incongruent audiovisual sine-wave speech stimuli did only show a McGurk effect when informed of the speech nature of the stimulus. Performance on the secondary visual task was very good, thus supporting the finding...

  8. Speech-Language Therapy (For Parents)

    Science.gov (United States)

    ... Speech-Language Therapy (KidsHealth / For Parents) ... most kids with speech and/or language disorders. Speech Disorders, Language Disorders, and Feeding Disorders ... A speech ...

  9. Digital speech processing using Matlab

    CERN Document Server

    Gopi, E S

    2014-01-01

    Digital Speech Processing Using Matlab deals with digital speech pattern recognition, speech production model, speech feature extraction, and speech compression. The book is written in a manner that is suitable for beginners pursuing basic research in digital speech processing. Matlab illustrations are provided for most topics to enable better understanding of concepts. This book also deals with the basic pattern recognition techniques (illustrated with speech signals using Matlab) such as PCA, LDA, ICA, SVM, HMM, GMM, BPN, and KSOM.

  10. Developmental apraxia of speech in children. Quantitive assessment of speech characteristics

    NARCIS (Netherlands)

    Thoonen, G.H.J.

    1998-01-01

    Developmental apraxia of speech (DAS) in children is a speech disorder, supposed to have a neurological origin, which is commonly considered to result from particular deficits in speech processing (i.e., phonological planning, motor programming). However, the label DAS has often been used as

  11. Characterizing Intonation Deficit in Motor Speech Disorders: An Autosegmental-Metrical Analysis of Spontaneous Speech in Hypokinetic Dysarthria, Ataxic Dysarthria, and Foreign Accent Syndrome

    Science.gov (United States)

    Lowit, Anja; Kuschmann, Anja

    2012-01-01

    Purpose: The autosegmental-metrical (AM) framework represents an established methodology for intonational analysis in unimpaired speaker populations but has found little application in describing intonation in motor speech disorders (MSDs). This study compared the intonation patterns of unimpaired participants (CON) and those with Parkinson's…

  12. Perception of synthetic speech produced automatically by rule: Intelligibility of eight text-to-speech systems.

    Science.gov (United States)

    Greene, Beth G; Logan, John S; Pisoni, David B

    1986-03-01

    We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered.
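
    Modified Rhyme Test results are reported as percent-correct scores, broken down here by the position of the consonant contrast. The sketch below is a minimal, hypothetical scoring routine for that kind of tally; the trial data are invented.

    ```python
    from collections import defaultdict

    # Each trial: (position of the contrast, target word, word the listener chose).
    trials = [
        ("initial", "bat", "bat"),
        ("initial", "pin", "bin"),
        ("final", "seed", "seed"),
        ("final", "pat", "pad"),
    ]

    def mrt_scores(trials):
        """Percent correct overall and per contrast position (initial/final consonant)."""
        totals, correct = defaultdict(int), defaultdict(int)
        for position, target, response in trials:
            totals[position] += 1
            correct[position] += int(target == response)
        by_position = {p: 100.0 * correct[p] / totals[p] for p in totals}
        overall = 100.0 * sum(correct.values()) / sum(totals.values())
        return overall, by_position

    print(mrt_scores(trials))  # -> (50.0, {'initial': 50.0, 'final': 50.0})
    ```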

  13. Perception of synthetic speech produced automatically by rule: Intelligibility of eight text-to-speech systems

    Science.gov (United States)

    GREENE, BETH G.; LOGAN, JOHN S.; PISONI, DAVID B.

    2012-01-01

    We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered. PMID:23225916

  14. Hemispheric asymmetries in speech perception: sense, nonsense and modulations.

    Directory of Open Access Journals (Sweden)

    Stuart Rosen

    Full Text Available The well-established left hemisphere specialisation for language processing has long been claimed to be based on a low-level auditory specialization for specific acoustic features in speech, particularly regarding 'rapid temporal processing'. A novel analysis/synthesis technique was used to construct a variety of sounds based on simple sentences which could be manipulated in spectro-temporal complexity, and in whether they were intelligible or not. All sounds consisted of two noise-excited spectral prominences (based on the lower two formants in the original speech) which could be static or varying in frequency and/or amplitude independently. Dynamically varying both acoustic features based on the same sentence led to intelligible speech, but when either or both acoustic features were static, the stimuli were not intelligible. Using the frequency dynamics from one sentence with the amplitude dynamics of another led to unintelligible sounds of comparable spectro-temporal complexity to the intelligible ones. Positron emission tomography (PET) was used to compare which brain regions were active when participants listened to the different sounds. Neural activity in response to spectral and amplitude modulations sufficient to support speech intelligibility (without actually being intelligible) was seen bilaterally, with a right temporal lobe dominance. A left-dominant response was seen only to intelligible sounds. It thus appears that the left hemisphere specialisation for speech is based on the linguistic properties of utterances, not on particular acoustic features.

  15. The speech perception skills of children with and without speech sound disorder.

    Science.gov (United States)

    Hearnshaw, Stephanie; Baker, Elise; Munro, Natalie

    To investigate whether Australian-English speaking children with and without speech sound disorder (SSD) differ in their overall speech perception accuracy. Additionally, to investigate differences in the perception of specific phonemes and the association between speech perception and speech production skills. Twenty-five Australian-English speaking children aged 48-60 months participated in this study. The SSD group included 12 children and the typically developing (TD) group included 13 children. Children completed routine speech and language assessments in addition to an experimental Australian-English lexical and phonetic judgement task based on Rvachew's Speech Assessment and Interactive Learning System (SAILS) program (Rvachew, 2009). This task included eight words across four word-initial phonemes-/k, ɹ, ʃ, s/. Children with SSD showed significantly poorer perceptual accuracy on the lexical and phonetic judgement task compared with TD peers. The phonemes /ɹ/ and /s/ were most frequently perceived in error across both groups. Additionally, the phoneme /ɹ/ was most commonly produced in error. There was also a positive correlation between overall speech perception and speech production scores. Children with SSD perceived speech less accurately than their typically developing peers. The findings suggest that an Australian-English variation of a lexical and phonetic judgement task similar to the SAILS program is promising and worthy of a larger scale study. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Speech Matters

    DEFF Research Database (Denmark)

    Hasse Jørgensen, Stina

    2011-01-01

    About Speech Matters - Katarina Gregos, the Greek curator's exhibition at the Danish Pavilion, the Venice Biennale 2011.

  17. Establishing Vocal Verbalizations in Mute Mongoloid Children.

    Science.gov (United States)

    Buddenhagen, Ronald G.

    Behavior modification as an attack upon the problem of mutism in mongoloid children establishes the basis of the text. Case histories of four children in a state institution present the specific strategy of speech therapy using verbal conditioning. Imitation and attending behavior, verbal chaining, phonetic theory, social reinforcement,…

  18. Hate speech

    Directory of Open Access Journals (Sweden)

    Anne Birgitta Nilsen

    2014-12-01

    Full Text Available The manifesto of the Norwegian terrorist Anders Behring Breivik is based on the “Eurabia” conspiracy theory. This theory is a key starting point for hate speech amongst many right-wing extremists in Europe, but also has ramifications beyond these environments. In brief, proponents of the Eurabia theory claim that Muslims are occupying Europe and destroying Western culture, with the assistance of the EU and European governments. By contrast, members of Al-Qaeda and other extreme Islamists promote the conspiracy theory “the Crusade” in their hate speech directed against the West. Proponents of the latter theory argue that the West is leading a crusade to eradicate Islam and Muslims, a crusade that is similarly facilitated by their governments. This article presents analyses of texts written by right-wing extremists and Muslim extremists in an effort to shed light on how hate speech promulgates conspiracy theories in order to spread hatred and intolerance.The aim of the article is to contribute to a more thorough understanding of hate speech’s nature by applying rhetorical analysis. Rhetorical analysis is chosen because it offers a means of understanding the persuasive power of speech. It is thus a suitable tool to describe how hate speech works to convince and persuade. The concepts from rhetorical theory used in this article are ethos, logos and pathos. The concept of ethos is used to pinpoint factors that contributed to Osama bin Laden's impact, namely factors that lent credibility to his promotion of the conspiracy theory of the Crusade. In particular, Bin Laden projected common sense, good morals and good will towards his audience. He seemed to have coherent and relevant arguments; he appeared to possess moral credibility; and his use of language demonstrated that he wanted the best for his audience.The concept of pathos is used to define hate speech, since hate speech targets its audience's emotions. In hate speech it is the

  19. Speech Inconsistency in Children with Childhood Apraxia of Speech, Language Impairment, and Speech Delay: Depends on the Stimuli

    Science.gov (United States)

    Iuzzini-Seigel, Jenya; Hogan, Tiffany P.; Green, Jordan R.

    2017-01-01

    Purpose: The current research sought to determine (a) if speech inconsistency is a core feature of childhood apraxia of speech (CAS) or if it is driven by comorbid language impairment that affects a large subset of children with CAS and (b) if speech inconsistency is a sensitive and specific diagnostic marker that can differentiate between CAS and…

  20. Beyond stuttering: Speech disfluencies in normally fluent French-speaking children at age 4.

    Science.gov (United States)

    Leclercq, Anne-Lise; Suaire, Pauline; Moyse, Astrid

    2018-01-01

    The aim of this study was to establish normative data on the speech disfluencies of normally fluent French-speaking children at age 4, an age at which stuttering has begun in 95% of children who stutter (Yairi & Ambrose, 2013). Fifty monolingual French-speaking children who do not stutter participated in the study. Analyses of a conversational speech sample comprising 250-550 words revealed an average of 10% total disfluencies, 2% stuttering-like disfluencies and around 8% non-stuttered disfluencies. Possible explanations for these high speech disfluency frequencies are discussed, including explanations linked to French in particular. The results shed light on the importance of normative data specific to each language.
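
    The norms are stated as disfluencies per 100 words of conversational speech, split into stuttering-like and other disfluencies. A minimal tally of that kind, with invented counts chosen to mirror the reported proportions, is sketched below.

    ```python
    def disfluency_rates(word_count, stuttering_like, other_disfluencies):
        """Disfluencies per 100 words, split into stuttering-like and non-stuttered types."""
        sld = 100.0 * stuttering_like / word_count
        other = 100.0 * other_disfluencies / word_count
        return {"SLD_per_100_words": sld,
                "other_per_100_words": other,
                "total_per_100_words": sld + other}

    # Hypothetical 400-word sample with 8 stuttering-like and 32 other disfluencies.
    print(disfluency_rates(400, 8, 32))  # -> 2.0, 8.0 and 10.0 per 100 words
    ```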

  1. Clear Speech - Mere Speech? How segmental and prosodic speech reduction shape the impression that speakers create on listeners

    DEFF Research Database (Denmark)

    Niebuhr, Oliver

    2017-01-01

    of reduction levels and perceived speaker attributes in which moderate reduction can make a better impression on listeners than no reduction. In addition to its relevance in reduction models and theories, this interplay is instructive for various fields of speech application from social robotics to charisma...... whether variation in the degree of reduction also has a systematic effect on the attributes we ascribe to the speaker who produces the speech signal. A perception experiment was carried out for German in which 46 listeners judged whether or not speakers showing 3 different combinations of segmental...... and prosodic reduction levels (unreduced, moderately reduced, strongly reduced) are appropriately described by 13 physical, social, and cognitive attributes. The experiment shows that clear speech is not mere speech, and less clear speech is not just reduced either. Rather, results revealed a complex interplay...

  2. A case of Churg-Strauss syndrome: tissue diagnosis established by sigmoidoscopic rectal biopsy.

    Science.gov (United States)

    Leen, E J; Rees, P J; Sanderson, J D; Wilkinson, M L; Filipe, M I

    1996-01-01

    A case is presented of Churg-Strauss syndrome in a young man in whom the definitive diagnostic procedure was a full thickness sigmoidoscopic rectal biopsy, with submucosal sampling. Gastrointestinal changes in Churg-Strauss syndrome, a rare systemic illness characterised by asthma, blood and tissue eosinophilia, vasculitis, and granulomatous inflammation are common but poorly reported. The endoscopic and histopathological features of a case are described and emphasise the potential value of a limited sigmoidoscopy in establishing the diagnosis, when lower gastrointestinal symptoms are present. PMID:8801216

  3. The role of tissue-specific microbiota in initial establishment success of Pacific oysters.

    Science.gov (United States)

    Lokmer, Ana; Kuenzel, Sven; Baines, John F; Wegner, Karl Mathias

    2016-03-01

    Microbiota can have positive and negative effects on hosts depending on the environmental conditions. Therefore, it is important to decipher host-microbiota-environment interactions, especially under natural conditions exerting (a)biotic stress. Here, we assess the relative importance of microbiota in different tissues of Pacific oyster for its successful establishment in a new environment. We transplanted oysters from the Southern to the Northern Wadden Sea and controlled for the effects of resident microbiota by administering antibiotics to half of the oysters. We then followed survival and composition of haemolymph, mantle, gill and gut microbiota in local and translocated oysters over 5 days. High mortality was recorded only in non-antibiotic-treated translocated oysters, where high titres of active Vibrio sp. in solid tissues indicated systemic infections. Network analyses revealed the highest connectivity and a link to seawater communities in the haemolymph microbiota. Since antibiotics decreased modularity and increased connectivity of the haemolymph-based networks, we propose that community destabilization in non-treated translocated oysters could be attributed to interactions between resident and external microbiota, which in turn facilitated passage of vibrios into solid tissues and invoked disease. These interactions of haemolymph microbiota with the external and internal environment may thus represent an important component of oyster fitness. © 2015 Society for Applied Microbiology and John Wiley & Sons Ltd.
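
    The network analyses summarized here rest on measures such as connectivity and modularity of taxon co-occurrence networks. The sketch below shows, in broad strokes and not as the authors' pipeline, how such measures can be computed with networkx; the edge list is invented.

    ```python
    import networkx as nx
    from networkx.algorithms import community

    # Hypothetical co-occurrence network: nodes are taxa, edges are significant associations.
    edges = [("Vibrio_1", "Vibrio_2"), ("Vibrio_2", "Pseudoalteromonas"),
             ("Mycoplasma", "Arcobacter"), ("Arcobacter", "Vibrio_1"),
             ("Polaribacter", "Mycoplasma")]
    G = nx.Graph(edges)

    mean_degree = sum(dict(G.degree()).values()) / G.number_of_nodes()  # simple connectivity index
    modules = community.greedy_modularity_communities(G)                # data-driven module detection
    modularity = community.modularity(G, modules)

    print(f"mean degree = {mean_degree:.2f}, modularity = {modularity:.2f}")
    ```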

  4. Under-resourced speech recognition based on the speech manifold

    CSIR Research Space (South Africa)

    Sahraeian, R

    2015-09-01

    Full Text Available Conventional acoustic modeling involves estimating many parameters to effectively model feature distributions. The sparseness of speech and text data, however, degrades the reliability of the estimation process and makes speech recognition a...

  5. PRACTICING SPEECH THERAPY INTERVENTION FOR SOCIAL INTEGRATION OF CHILDREN WITH SPEECH DISORDERS

    Directory of Open Access Journals (Sweden)

    Martin Ofelia POPESCU

    2016-11-01

    Full Text Available The article presents a concise speech correction intervention program for dyslalia, in conjunction with the development of the intrapersonal, interpersonal and social integration capacities of children with speech disorders. The program's main objectives are: increasing the potential for individual social integration by correcting speech disorders in conjunction with intra- and interpersonal capacities, and increasing the potential of children and community groups for social integration by optimizing the socio-relational context of children with speech disorders. The program included 60 children/students with dyslalia speech disorders (monomorphic and polymorphic dyslalia) from 11 educational institutions - 6 kindergartens and 5 schools/secondary schools - affiliated with the inter-school logopedic centre (CLI) of Targu Jiu city and areas of Gorj district. The program was implemented under the assumption that therapeutic-formative intervention to correct speech disorders and facilitate social integration would, in combination with the correction of pronunciation disorders, lead to the optimization of the social integration of children with speech disorders. The results confirm the hypothesis and provide evidence of the intervention program's efficiency.

  6. Real-time continuous visual biofeedback in the treatment of speech breathing disorders following childhood traumatic brain injury: report of one case.

    Science.gov (United States)

    Murdoch, B E; Pitt, G; Theodoros, D G; Ward, E C

    1999-01-01

    The efficacy of traditional and physiological biofeedback methods for modifying abnormal speech breathing patterns was investigated in a child with persistent dysarthria following severe traumatic brain injury (TBI). An A-B-A-B single-subject experimental research design was utilized to provide the subject with two exclusive periods of therapy for speech breathing, based on traditional therapy techniques and physiological biofeedback methods, respectively. Traditional therapy techniques included establishing optimal posture for speech breathing, explanation of the movement of the respiratory muscles, and a hierarchy of non-speech and speech tasks focusing on establishing an appropriate level of sub-glottal air pressure, and improving the subject's control of inhalation and exhalation. The biofeedback phase of therapy utilized variable inductance plethysmography (or Respitrace) to provide real-time, continuous visual biofeedback of ribcage circumference during breathing. As in traditional therapy, a hierarchy of non-speech and speech tasks were devised to improve the subject's control of his respiratory pattern. Throughout the project, the subject's respiratory support for speech was assessed both instrumentally and perceptually. Instrumental assessment included kinematic and spirometric measures, and perceptual assessment included the Frenchay Dysarthria Assessment, Assessment of Intelligibility of Dysarthric Speech, and analysis of a speech sample. The results of the study demonstrated that real-time continuous visual biofeedback techniques for modifying speech breathing patterns were not only effective, but superior to the traditional therapy techniques for modifying abnormal speech breathing patterns in a child with persistent dysarthria following severe TBI. These results show that physiological biofeedback techniques are potentially useful clinical tools for the remediation of speech breathing impairment in the paediatric dysarthric population.

  7. Schizophrenia alters intra-network functional connectivity in the caudate for detecting speech under informational speech masking conditions.

    Science.gov (United States)

    Zheng, Yingjun; Wu, Chao; Li, Juanhua; Li, Ruikeng; Peng, Hongjun; She, Shenglin; Ning, Yuping; Li, Liang

    2018-04-04

    Speech recognition under noisy "cocktail-party" environments involves multiple perceptual/cognitive processes, including target detection, selective attention, irrelevant signal inhibition, sensory/working memory, and speech production. Compared to healthy listeners, people with schizophrenia are more vulnerable to masking stimuli and perform worse in speech recognition under speech-on-speech masking conditions. Although the schizophrenia-related speech-recognition impairment under "cocktail-party" conditions is associated with deficits of various perceptual/cognitive processes, it is crucial to know whether the brain substrates critically underlying speech detection against informational speech masking are impaired in people with schizophrenia. Using functional magnetic resonance imaging (fMRI), this study investigated differences between people with schizophrenia (n = 19, mean age = 33 ± 10 years) and their matched healthy controls (n = 15, mean age = 30 ± 9 years) in intra-network functional connectivity (FC) specifically associated with target-speech detection under speech-on-speech-masking conditions. The target-speech detection performance under the speech-on-speech-masking condition in participants with schizophrenia was significantly worse than that in matched healthy participants (healthy controls). Moreover, in healthy controls, but not participants with schizophrenia, the strength of intra-network FC within the bilateral caudate was positively correlated with the speech-detection performance under the speech-masking conditions. Compared to controls, patients showed an altered spatial activity pattern and decreased intra-network FC in the caudate. In people with schizophrenia, the declined speech-detection performance under speech-on-speech masking conditions is associated with reduced intra-caudate functional connectivity, which normally contributes to detecting target speech against speech masking via its functions of suppressing masking-speech signals.
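
    Intra-network functional connectivity of this kind is commonly summarized as the mean pairwise correlation among regional BOLD time series, which can then be related to task performance across subjects. The sketch below illustrates that generic computation on simulated data; it is not the authors' analysis pipeline.

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)

    def intra_network_fc(timeseries):
        """Mean pairwise correlation among ROI time series (rows = ROIs, columns = volumes)."""
        r = np.corrcoef(timeseries)
        return r[np.triu_indices_from(r, k=1)].mean()

    # Simulated data: 30 subjects, 4 caudate ROIs, 200 volumes each, plus a detection score.
    fc_values = np.array([intra_network_fc(rng.standard_normal((4, 200))) for _ in range(30)])
    detection_scores = rng.standard_normal(30)

    r, p = pearsonr(fc_values, detection_scores)
    print(f"FC-performance correlation: r = {r:.2f}, p = {p:.3f}")
    ```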

  8. Speech disorder prevention

    Directory of Open Access Journals (Sweden)

    Miladis Fornaris-Méndez

    2017-04-01

    Full Text Available Speech and language therapy has moved from a medical focus toward a preventive focus. However, difficulties are evident in the development of this preventive work, because more space is devoted to the correction of language disorders. Because speech disorders are the most frequently occurring dysfunction, the preventive work carried out to avoid their appearance acquires special importance. Speech education from early childhood makes it easier to prevent the appearance of speech disorders in children. The present work aims to offer different activities for the prevention of speech disorders.

  9. Speech and Speech-Related Quality of Life After Late Palate Repair: A Patient's Perspective.

    Science.gov (United States)

    Schönmeyr, Björn; Wendby, Lisa; Sharma, Mitali; Jacobson, Lia; Restrepo, Carolina; Campbell, Alex

    2015-07-01

    Many patients with cleft palate deformities worldwide receive treatment at a later age than is recommended for normal speech to develop. The outcomes after late palate repairs in terms of speech and quality of life (QOL) still remain largely unstudied. In the current study, questionnaires were used to assess the patients' perception of speech and QOL before and after primary palate repair. All of the patients were operated at a cleft center in northeast India and had a cleft palate with a normal lip or with a cleft lip that had been previously repaired. A total of 134 patients (7-35 years) were interviewed preoperatively and 46 patients (7-32 years) were assessed in the postoperative survey. The survey showed that scores based on the speech handicap index, concerning speech and speech-related QOL, did not improve postoperatively. In fact, the questionnaires indicated that the speech became more unpredictable (P reported that their self-confidence had improved after the operation. Thus, the majority of interviewed patients who underwent late primary palate repair were satisfied with the surgery. At the same time, speech and speech-related QOL did not improve according to the speech handicap index-based survey. Speech predictability may even become worse and nasal regurgitation may increase after late palate repair, according to these results.

  10. Visual Speech Fills in Both Discrimination and Identification of Non-Intact Auditory Speech in Children

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve

    2018-01-01

    To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…

  11. Tackling the complexity in speech

    DEFF Research Database (Denmark)

    section includes four carefully selected chapters. They deal with facets of speech production, speech acoustics, and/or speech perception or recognition, place them in an integrated phonetic-phonological perspective, and relate them in more or less explicit ways to aspects of speech technology. Therefore......, we hope that this volume can help speech scientists with traditional training in phonetics and phonology to keep up with the latest developments in speech technology. In the opposite direction, speech researchers starting from a technological perspective will hopefully get inspired by reading about...... the questions, phenomena, and communicative functions that are currently addressed in phonetics and phonology. Either way, the future of speech research lies in international, interdisciplinary collaborations, and our volume is meant to reflect and facilitate such collaborations...

  12. Speech in spinocerebellar ataxia.

    Science.gov (United States)

    Schalling, Ellika; Hartelius, Lena

    2013-12-01

    Spinocerebellar ataxias (SCAs) are a heterogeneous group of autosomal dominant cerebellar ataxias clinically characterized by progressive ataxia, dysarthria and a range of other concomitant neurological symptoms. Only a few studies include detailed characterization of speech symptoms in SCA. Speech symptoms in SCA resemble ataxic dysarthria but symptoms related to phonation may be more prominent. One study to date has shown an association between differences in speech and voice symptoms related to genotype. More studies of speech and voice phenotypes are motivated, to possibly aid in clinical diagnosis. In addition, instrumental speech analysis has been demonstrated to be a reliable measure that may be used to monitor disease progression or therapy outcomes in possible future pharmacological treatments. Intervention by speech and language pathologists should go beyond assessment. Clinical guidelines for management of speech, communication and swallowing need to be developed for individuals with progressive cerebellar ataxia. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Gesture-speech integration in children with specific language impairment.

    Science.gov (United States)

    Mainela-Arnold, Elina; Alibali, Martha W; Hostetter, Autumn B; Evans, Julia L

    2014-11-01

    Previous research suggests that speakers are especially likely to produce manual communicative gestures when they have relative ease in thinking about the spatial elements of what they are describing, paired with relative difficulty organizing those elements into appropriate spoken language. Children with specific language impairment (SLI) exhibit poor expressive language abilities together with within-normal-range nonverbal IQs. This study investigated whether weak spoken language abilities in children with SLI influence their reliance on gestures to express information. We hypothesized that these children would rely on communicative gestures to express information more often than their age-matched typically developing (TD) peers, and that they would sometimes express information in gestures that they do not express in the accompanying speech. Participants were 15 children with SLI (aged 5;6-10;0) and 18 age-matched TD controls. Children viewed a wordless cartoon and retold the story to a listener unfamiliar with the story. Children's gestures were identified and coded for meaning using a previously established system. Speech-gesture combinations were coded as redundant if the information conveyed in speech and gesture was the same, and non-redundant if the information conveyed in speech was different from the information conveyed in gesture. Children with SLI produced more gestures than children in the TD group; however, the likelihood that speech-gesture combinations were non-redundant did not differ significantly across the SLI and TD groups. In both groups, younger children were significantly more likely to produce non-redundant speech-gesture combinations than older children. The gesture-speech integration system functions similarly in children with SLI and TD, but children with SLI rely more on gesture to help formulate, conceptualize or express the messages they want to convey. This provides motivation for future research examining whether interventions

  14. Speech rhythm facilitates syntactic ambiguity resolution: ERP evidence.

    Directory of Open Access Journals (Sweden)

    Maria Paula Roncaglia-Denissen

    Full Text Available In the current event-related potential (ERP) study, we investigated how speech rhythm impacts speech segmentation and facilitates the resolution of syntactic ambiguities in auditory sentence processing. Participants listened to syntactically ambiguous German subject- and object-first sentences that were spoken with either regular or irregular speech rhythm. Rhythmicity was established by a constant metric pattern of three unstressed syllables between two stressed ones that created rhythmic groups of constant size. Accuracy rates in a comprehension task revealed that participants understood rhythmically regular sentences better than rhythmically irregular ones. Furthermore, the mean amplitude of the P600 component was reduced in response to object-first sentences only when embedded in rhythmically regular but not rhythmically irregular context. This P600 reduction indicates facilitated processing of sentence structure possibly due to a decrease in processing costs for the less-preferred structure (object-first). Our data suggest an early and continuous use of rhythm by the syntactic parser and support language processing models assuming an interactive and incremental use of linguistic information during language processing.

  15. Speech rhythm facilitates syntactic ambiguity resolution: ERP evidence.

    Science.gov (United States)

    Roncaglia-Denissen, Maria Paula; Schmidt-Kassow, Maren; Kotz, Sonja A

    2013-01-01

    In the current event-related potential (ERP) study, we investigated how speech rhythm impacts speech segmentation and facilitates the resolution of syntactic ambiguities in auditory sentence processing. Participants listened to syntactically ambiguous German subject- and object-first sentences that were spoken with either regular or irregular speech rhythm. Rhythmicity was established by a constant metric pattern of three unstressed syllables between two stressed ones that created rhythmic groups of constant size. Accuracy rates in a comprehension task revealed that participants understood rhythmically regular sentences better than rhythmically irregular ones. Furthermore, the mean amplitude of the P600 component was reduced in response to object-first sentences only when embedded in rhythmically regular but not rhythmically irregular context. This P600 reduction indicates facilitated processing of sentence structure possibly due to a decrease in processing costs for the less-preferred structure (object-first). Our data suggest an early and continuous use of rhythm by the syntactic parser and support language processing models assuming an interactive and incremental use of linguistic information during language processing.
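
    The P600 effect reported in both versions of this record is, in standard ERP practice, quantified as the mean amplitude within a late latency window averaged over trials and electrodes. The sketch below shows that generic computation on simulated epochs; the 500-900 ms window, sampling rate and epoch length are assumptions for the example, not values taken from the study.

    ```python
    import numpy as np

    def mean_window_amplitude(epochs, sfreq, tmin, window=(0.5, 0.9)):
        """Mean amplitude in a latency window, averaged over trials and channels.

        epochs: baseline-corrected array of shape (n_trials, n_channels, n_samples).
        """
        times = tmin + np.arange(epochs.shape[-1]) / sfreq
        mask = (times >= window[0]) & (times <= window[1])
        return epochs[..., mask].mean()

    # Simulated epochs: 40 trials, 3 centro-parietal channels, -0.2 to 1.2 s at 250 Hz.
    rng = np.random.default_rng(1)
    epochs = rng.normal(0.0, 2.0, size=(40, 3, 350))
    print(f"mean 500-900 ms amplitude: {mean_window_amplitude(epochs, 250, -0.2):.2f} microvolts")
    ```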

  16. The good, the bad, and the voter: the impact of hate speech prosecution of a politician on the electoral support for his party

    NARCIS (Netherlands)

    van Spanje, J.; de Vreese, C.

    2015-01-01

    Hate speech prosecution of politicians is a common phenomenon in established democracies. Examples of politicians tried for hate speech include Nick Griffin in Britain and Jean-Marie Le Pen in France. Does hate speech prosecution of politicians affect the electoral support for their party? This is

  17. Predicting Speech Intelligibility with a Multiple Speech Subsystems Approach in Children with Cerebral Palsy

    Science.gov (United States)

    Lee, Jimin; Hustad, Katherine C.; Weismer, Gary

    2014-01-01

    Purpose: Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystems approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Method: Nine acoustic variables reflecting different subsystems, and…

  18. A causal test of the motor theory of speech perception: a case of impaired speech production and spared speech perception.

    Science.gov (United States)

    Stasenko, Alena; Bonn, Cory; Teghipco, Alex; Garcea, Frank E; Sweet, Catherine; Dombovy, Mary; McDonough, Joyce; Mahon, Bradford Z

    2015-01-01

    The debate about the causal role of the motor system in speech perception has been reignited by demonstrations that motor processes are engaged during the processing of speech sounds. Here, we evaluate which aspects of auditory speech processing are affected, and which are not, in a stroke patient with dysfunction of the speech motor system. We found that the patient showed a normal phonemic categorical boundary when discriminating two non-words that differ by a minimal pair (e.g., ADA-AGA). However, using the same stimuli, the patient was unable to identify or label the non-word stimuli (using a button-press response). A control task showed that he could identify speech sounds by speaker gender, ruling out a general labelling impairment. These data suggest that while the motor system is not causally involved in perception of the speech signal, it may be used when other cues (e.g., meaning, context) are not available.

  19. The interaction between awareness of one's own speech disorder with linguistics variables: distinctive features and severity of phonological disorder.

    Science.gov (United States)

    Dias, Roberta Freitas; Melo, Roberta Michelon; Mezzomo, Carolina Lisbôa; Mota, Helena Bolli

    2013-01-01

    To analyze the possible relationship between the awareness of one's own speech disorder and some aspects of the phonological system, such as the number and type of changed distinctive features, as well as the interaction between the severity of the disorder and the non-specification of distinctive features. The analyzed group comprised 23 children diagnosed with a speech disorder, aged 5:0 to 7:7. The speech data were analyzed through Distinctive Features Analysis and classified by the Percentage of Correct Consonants. The Awareness of One's Own Speech Disorder test was also applied. The children were separated into two groups: with awareness of their own speech disorder established (more than 50% correct identification) and without awareness of their own speech disorder established (less than 50% correct identification). Finally, the variables of this research were submitted to descriptive and inferential statistical analysis. The type of changed distinctive features did not differ between the groups, nor did the total number of changed features or the severity of the disorder. However, a correlation between the severity of the disorder and the non-specification of distinctive features was verified, because the more severe disorders showed more changes in these linguistic variables. The awareness of one's own speech disorder does not seem to be directly influenced by the type or number of changed distinctive features, nor by the severity of the speech disorder. Moreover, the greater the severity of the phonological disorder, the greater the number of changed distinctive features.
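
    The severity measure used here, the Percentage of Correct Consonants (PCC), is simply the proportion of intended consonants produced correctly in the sample. A minimal computation, with invented counts and with severity bands that are a commonly cited convention rather than something taken from this article, is sketched below.

    ```python
    def pcc(correct_consonants, total_consonants):
        """Percentage of Correct Consonants in a speech sample."""
        return 100.0 * correct_consonants / total_consonants

    def severity_band(pcc_value):
        """Rough severity band often associated with PCC (assumed here, not from the article)."""
        if pcc_value > 85:
            return "mild"
        if pcc_value > 65:
            return "mild-moderate"
        if pcc_value > 50:
            return "moderate-severe"
        return "severe"

    # Hypothetical sample: 132 of 180 intended consonants produced correctly.
    value = pcc(132, 180)
    print(f"PCC = {value:.1f}% ({severity_band(value)})")  # -> PCC = 73.3% (mild-moderate)
    ```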

  20. The Relationship between Speech Production and Speech Perception Deficits in Parkinson's Disease

    Science.gov (United States)

    De Keyser, Kim; Santens, Patrick; Bockstael, Annelies; Botteldooren, Dick; Talsma, Durk; De Vos, Stefanie; Van Cauwenberghe, Mieke; Verheugen, Femke; Corthals, Paul; De Letter, Miet

    2016-01-01

    Purpose: This study investigated the possible relationship between hypokinetic speech production and speech intensity perception in patients with Parkinson's disease (PD). Method: Participants included 14 patients with idiopathic PD and 14 matched healthy controls (HCs) with normal hearing and cognition. First, speech production was objectified…

  1. Visual speech information: a help or hindrance in perceptual processing of dysarthric speech.

    Science.gov (United States)

    Borrie, Stephanie A

    2015-03-01

    This study investigated the influence of visual speech information on perceptual processing of neurologically degraded speech. Fifty listeners identified spastic dysarthric speech under both audio (A) and audiovisual (AV) conditions. Condition comparisons revealed that the addition of visual speech information enhanced processing of the neurologically degraded input in terms of (a) acuity (percent phonemes correct) of vowels and consonants and (b) recognition (percent words correct) of predictive and nonpredictive phrases. Listeners exploited stress-based segmentation strategies more readily in AV conditions, suggesting that the perceptual benefit associated with adding visual speech information to the auditory signal-the AV advantage-has both segmental and suprasegmental origins. Results also revealed that the magnitude of the AV advantage can be predicted, to some degree, by the extent to which an individual utilizes syllabic stress cues to inform word recognition in AV conditions. Findings inform the development of a listener-specific model of speech perception that applies to processing of dysarthric speech in everyday communication contexts.
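
    The AV advantage discussed here is the gain from adding visual speech, i.e. each listener's audiovisual score minus their audio-only score, which can then be related to a predictor such as reliance on syllabic stress cues. The sketch below computes that with invented scores; it is only an illustration of the arithmetic, not the study's statistical model.

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical percent-words-correct scores for five listeners.
    audio_only = np.array([42.0, 55.0, 38.0, 61.0, 47.0])
    audiovisual = np.array([58.0, 66.0, 60.0, 70.0, 51.0])
    stress_cue_use = np.array([0.62, 0.41, 0.75, 0.48, 0.22])  # proportion of stress-based segmentations

    av_advantage = audiovisual - audio_only  # the AV gain per listener
    r, p = pearsonr(stress_cue_use, av_advantage)
    print(f"mean AV advantage = {av_advantage.mean():.1f} points; r = {r:.2f}, p = {p:.3f}")
    ```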

  2. The treatment of apraxia of speech : Speech and music therapy, an innovative joint effort

    NARCIS (Netherlands)

    Hurkmans, Josephus Johannes Stephanus

    2016-01-01

    Apraxia of Speech (AoS) is a neurogenic speech disorder. A wide variety of behavioural methods have been developed to treat AoS. Various therapy programmes use musical elements to improve speech production. A unique therapy programme combining elements of speech therapy and music therapy is called

  3. Practical speech user interface design

    CERN Document Server

    Lewis, James R

    2010-01-01

    Although speech is the most natural form of communication between humans, most people find using speech to communicate with machines anything but natural. Drawing from psychology, human-computer interaction, linguistics, and communication theory, Practical Speech User Interface Design provides a comprehensive yet concise survey of practical speech user interface (SUI) design. It offers practice-based and research-based guidance on how to design effective, efficient, and pleasant speech applications that people can really use. Focusing on the design of speech user interfaces for IVR application

  4. Motor Speech Phenotypes of Frontotemporal Dementia, Primary Progressive Aphasia, and Progressive Apraxia of Speech

    Science.gov (United States)

    Poole, Matthew L.; Brodtmann, Amy; Darby, David; Vogel, Adam P.

    2017-01-01

    Purpose: Our purpose was to create a comprehensive review of speech impairment in frontotemporal dementia (FTD), primary progressive aphasia (PPA), and progressive apraxia of speech in order to identify the most effective measures for diagnosis and monitoring, and to elucidate associations between speech and neuroimaging. Method: Speech and…

  5. An analysis of the masking of speech by competing speech using self-report data.

    Science.gov (United States)

    Agus, Trevor R; Akeroyd, Michael A; Noble, William; Bhullar, Navjot

    2009-01-01

    Many of the items in the "Speech, Spatial, and Qualities of Hearing" scale questionnaire [S. Gatehouse and W. Noble, Int. J. Audiol. 43, 85-99 (2004)] are concerned with speech understanding in a variety of backgrounds, both speech and nonspeech. To study if this self-report data reflected informational masking, previously collected data on 414 people were analyzed. The lowest scores (greatest difficulties) were found for the two items in which there were two speech targets, with successively higher scores for competing speech (six items), energetic masking (one item), and no masking (three items). The results suggest significant masking by competing speech in everyday listening situations.

  6. Neural pathways for visual speech perception

    Directory of Open Access Journals (Sweden)

    Lynne E Bernstein

    2014-12-01

    Full Text Available This paper examines the questions: what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread, diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

  7. An investigation of co-speech gesture production during action description in Parkinson's disease.

    Science.gov (United States)

    Cleary, Rebecca A; Poliakoff, Ellen; Galpin, Adam; Dick, Jeremy P R; Holler, Judith

    2011-12-01

    Parkinson's disease (PD) can impact enormously on speech communication. One aspect of non-verbal behaviour closely tied to speech is co-speech gesture production. In healthy people, co-speech gestures can add significant meaning and emphasis to speech. There is, however, little research into how this important channel of communication is affected in PD. The present study provides a systematic analysis of co-speech gestures which spontaneously accompany the description of actions in a group of PD patients (N = 23, Hoehn and Yahr Stage III or less) and age-matched healthy controls (N = 22). The analysis considers different co-speech gesture types, using established classification schemes from the field of gesture research. The analysis focuses on the rate of these gestures as well as on their qualitative nature. In doing so, the analysis attempts to overcome several methodological shortcomings of research in this area. Contrary to expectation, gesture rate was not significantly affected in our patient group, with relatively mild PD. This indicates that co-speech gestures could compensate for speech problems. However, while gesture rate seems unaffected, the qualitative precision of gestures representing actions was significantly reduced. This study demonstrates the feasibility of carrying out fine-grained, detailed analyses of gestures in PD and offers insights into an as yet neglected facet of communication in patients with PD. Based on the present findings, an important next step is the closer investigation of the qualitative changes in gesture (including different communicative situations) and an analysis of the heterogeneity in co-speech gesture production in PD. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Acoustic and temporal analysis of speech: A potential biomarker for schizophrenia.

    LENUS (Irish Health Repository)

    Rapcan, Viliam

    2010-11-01

    Currently, there are no established objective biomarkers for the diagnosis or monitoring of schizophrenia. It has been previously reported that there are notable qualitative differences in the speech of schizophrenics. The objective of this study was to determine whether a quantitative acoustic and temporal analysis of speech may be a potential biomarker for schizophrenia. In this study, 39 schizophrenic patients and 18 controls were digitally recorded reading aloud an emotionally neutral text passage from a children's story. Temporal, energy and vocal pitch features were automatically extracted from the recordings. A classifier based on linear discriminant analysis was employed to differentiate between controls and schizophrenic subjects. Processing the recordings with the algorithm developed demonstrated that it is possible to differentiate schizophrenic patients and controls with a classification accuracy of 79.4% (specificity=83.6%, sensitivity=75.2%) based on speech pause related parameters extracted from recordings carried out in standard office (non-studio) environments. Acoustic and temporal analysis of speech may represent a potential tool for the objective analysis in schizophrenia.
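
    The classification step described, linear discriminant analysis over automatically extracted temporal, energy and pitch features, can be reproduced in outline with scikit-learn. The sketch below uses a simulated feature matrix in place of the real pause-related measures, so it illustrates the procedure rather than the study's result.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(42)

    # Placeholder feature matrix: rows = speakers, columns = pause/energy/pitch measures.
    X = np.vstack([rng.normal(0.0, 1.0, size=(39, 6)) + 0.8,  # patients (shifted for illustration)
                   rng.normal(0.0, 1.0, size=(18, 6))])       # controls
    y = np.array([1] * 39 + [0] * 18)                          # 1 = patient, 0 = control

    pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=5)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    accuracy = (tp + tn) / len(y)
    sensitivity = tp / (tp + fn)  # patients correctly identified
    specificity = tn / (tn + fp)  # controls correctly identified
    print(f"accuracy={accuracy:.1%}, sensitivity={sensitivity:.1%}, specificity={specificity:.1%}")
    ```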

  9. Part-of-speech effects on text-to-speech synthesis

    CSIR Research Space (South Africa)

    Schlunz, GI

    2010-11-01

    Full Text Available One of the goals of text-to-speech (TTS) systems is to produce natural-sounding synthesised speech. Towards this end various natural language processing (NLP) tasks are performed to model the prosodic aspects of the TTS voice. One of the fundamental...

  10. Audiovisual speech perception at various presentation levels in Mandarin-speaking adults with cochlear implants.

    Directory of Open Access Journals (Sweden)

    Shu-Yu Liu

    Full Text Available (1) To evaluate the recognition of words, phonemes and lexical tones in audiovisual (AV) and auditory-only (AO) modes in Mandarin-speaking adults with cochlear implants (CIs); (2) to understand the effect of presentation levels on AV speech perception; (3) to learn the effect of hearing experience on AV speech perception. Thirteen deaf adults (age = 29.1±13.5 years; 8 male, 5 female) who had used CIs for >6 months and 10 normal-hearing (NH) adults participated in this study. Seven of them were prelingually deaf, and 6 postlingually deaf. The Mandarin Monosyllabic Word Recognition Test was used to assess recognition of words, phonemes and lexical tones in AV and AO conditions at 3 presentation levels: speech detection threshold (SDT), speech recognition threshold (SRT) and 10 dB SL (re: SRT). The prelingual group had better phoneme recognition in the AV mode than in the AO mode at SDT and SRT (both p = 0.016), and so did the NH group at SDT (p = 0.004). Mode difference was not noted in the postlingual group. None of the groups had significantly different tone recognition in the 2 modes. The prelingual and postlingual groups had significantly better phoneme and tone recognition than the NH one at SDT in the AO mode (p = 0.016 and p = 0.002 for phonemes; p = 0.001 and p<0.001 for tones) but were outperformed by the NH group at 10 dB SL (re: SRT) in both modes (both p<0.001 for phonemes; p<0.001 and p = 0.002 for tones). The recognition scores had a significant correlation with group with age and sex controlled (p<0.001). Visual input may help prelingually deaf implantees to recognize phonemes but may not augment Mandarin tone recognition. The effect of presentation level seems minimal on CI users' AV perception. This indicates special considerations in developing audiological assessment protocols and rehabilitation strategies for implantees who speak tonal languages.

  11. Invocations, Benedictions, and Freedom of Speech in Public Schools.

    Science.gov (United States)

    Harris, Phillip H.

    1991-01-01

    The Supreme Court, in an upcoming case "Lee v. Weisman," will rule on whether prayer may be offered out loud at a public school graduation program. Argues that past court decisions have interpreted the Establishment Clause of the First Amendment over the Free Speech Clause of that same amendment. (57 references) (MLF)

  12. Re-establishing marshes can return carbon sink functions to a current carbon source in the Sacramento-San Joaquin Delta of California, USA

    Science.gov (United States)

    Miller, Robin L.; Fujii, Roger; Schmidt, Paul E.

    2011-01-01

    The Sacramento-San Joaquin Delta in California was an historic, vast inland freshwater wetland, where organic soils almost 20 meters deep formed over the last several millennia as the land surface elevation of marshes kept pace with sea level rise. A system of levees and pumps were installed in the late 1800s and early 1900s to drain the land for agricultural use. Since then, land surface has subsided more than 7 meters below sea level in some areas as organic soils have been lost to aerobic decomposition. As land surface elevations decrease, costs for levee maintenance and repair increase, as do the risks of flooding. Wetland restoration can be a way to mitigate subsidence by re-creating the environment in which the organic soils developed. A preliminary study of the effect of hydrologic regime on carbon cycling conducted on Twitchell Island during the mid-1990s showed that continuous, shallow flooding allowing for the growth of emergent marsh vegetation re-created a wetland environment where carbon preservation occurred. Under these conditions annual plant biomass carbon inputs were high, and microbial decomposition was reduced. Based on this preliminary study, the U.S. Geological Survey re-established permanently flooded wetlands in fall 1997, with shallow water depths of 25 and 55 centimeters, to investigate the potential to reverse subsidence of delta islands by preserving and accumulating organic substrates over time. Ten years after flooding, elevation gains from organic matter accumulation in areas of emergent marsh vegetation ranged from almost 30 to 60 centimeters, with average annual carbon storage rates approximating 1 kg/m2, while areas without emergent vegetation cover showed no significant change in elevation. Differences in accretion rates within areas of emergent marsh vegetation appeared to result from temporal and spatial variability in hydrologic factors and decomposition rates in the wetlands rather than variability in primary production

  13. 75 FR 26701 - Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and...

    Science.gov (United States)

    2010-05-12

    ...] Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities... proposed compensation rates for Interstate TRS, Speech-to-Speech Services (STS), Captioned Telephone... costs reported in the data submitted to NECA by VRS providers. In this regard, document DA 10-761 also...

  14. Predicting automatic speech recognition performance over communication channels from instrumental speech quality and intelligibility scores

    NARCIS (Netherlands)

    Gallardo, L.F.; Möller, S.; Beerends, J.

    2017-01-01

    The performance of automatic speech recognition based on coded-decoded speech heavily depends on the quality of the transmitted signals, determined by channel impairments. This paper examines relationships between speech recognition performance and measurements of speech quality and intelligibility

  15. Structural brain aging and speech production: a surface-based brain morphometry study.

    Science.gov (United States)

    Tremblay, Pascale; Deschamps, Isabelle

    2016-07-01

    While there has been a growing number of studies examining the neurofunctional correlates of speech production over the past decade, the neurostructural correlates of this immensely important human behaviour remain less well understood, despite the fact that previous studies have established links between brain structure and behaviour, including speech and language. In the present study, we thus examined, for the first time, the relationship between surface-based cortical thickness (CT) and three different behavioural indexes of sublexical speech production: response duration, reaction times and articulatory accuracy, in healthy young and older adults during the production of simple and complex meaningless sequences of syllables (e.g., /pa-pa-pa/ vs. /pa-ta-ka/). The results show that each behavioural speech measure was sensitive to the complexity of the sequences, as indicated by slower reaction times, longer response durations and decreased articulatory accuracy in both groups for the complex sequences. Older adults produced longer speech responses, particularly during the production of complex sequences. Unique age-independent and age-dependent relationships between brain structure and each of these behavioural measures were found in several cortical and subcortical regions known for their involvement in speech production, including the bilateral anterior insula, the left primary motor area, the rostral supramarginal gyrus, the right inferior frontal sulcus, the bilateral putamen and caudate, and in some regions less typically associated with speech production, such as the posterior cingulate cortex.

  16. [Non-speech oral motor treatment efficacy for children with developmental speech sound disorders].

    Science.gov (United States)

    Ygual-Fernandez, A; Cervera-Merida, J F

    2016-01-01

    In the treatment of speech disorders by means of speech therapy two antagonistic methodological approaches are applied: non-verbal ones, based on oral motor exercises (OME), and verbal ones, which are based on speech processing tasks with syllables, phonemes and words. In Spain, OME programmes are called 'programas de praxias', and are widely used and valued by speech therapists. To review the studies conducted on the effectiveness of OME-based treatments applied to children with speech disorders and the theoretical arguments that could justify, or not, their usefulness. Over the last few decades evidence has been gathered about the lack of efficacy of this approach to treat developmental speech disorders and pronunciation problems in populations without any neurological alteration of motor functioning. The American Speech-Language-Hearing Association has advised against its use taking into account the principles of evidence-based practice. The knowledge gathered to date on motor control shows that the pattern of mobility and its corresponding organisation in the brain are different in speech and other non-verbal functions linked to nutrition and breathing. Neither the studies on their effectiveness nor the arguments based on motor control studies recommend the use of OME-based programmes for the treatment of pronunciation problems in children with developmental language disorders.

  17. India RE Grid Integration Study

    Energy Technology Data Exchange (ETDEWEB)

    Cochran, Jaquelin M [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-08

    The use of renewable energy (RE) sources, primarily wind and solar generation, is poised to grow significantly within the Indian power system. The Government of India has established a target of 175 gigawatts (GW) of installed RE capacity by 2022, including 60 GW of wind and 100 GW of solar, up from 29 GW wind and 9 GW solar at the beginning of 2017. Thanks to advanced weather and power system modeling made for this project, the study team is able to explore operational impacts of meeting India's RE targets and identify actions that may be favorable for integration.

  18. 75 FR 54040 - Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and...

    Science.gov (United States)

    2010-09-03

    ...] Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities...; speech-to-speech (STS); pay-per-call (900) calls; types of calls; and equal access to interexchange... of a report, due April 16, 2011, addressing whether it is necessary for the waivers to remain in...

  19. Speech Acquisition and Automatic Speech Recognition for Integrated Spacesuit Audio Systems

    Science.gov (United States)

    Huang, Yiteng; Chen, Jingdong; Chen, Shaoyan

    2010-01-01

    A voice-command human-machine interface system has been developed for spacesuit extravehicular activity (EVA) missions. A multichannel acoustic signal processing method has been created for distant speech acquisition in noisy and reverberant environments. This technology reduces noise by exploiting differences in the statistical nature of signal (i.e., speech) and noise that exists in the spatial and temporal domains. As a result, the automatic speech recognition (ASR) accuracy can be improved to the level at which crewmembers would find the speech interface useful. The developed speech human/machine interface will enable both crewmember usability and operational efficiency. It offers a fast rate of data/text entry, a small overall size, and a lightweight design. In addition, this design will free the hands and eyes of a suited crewmember. The system components and steps include beamforming/multi-channel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, model adaptation, ASR HMM (Hidden Markov Model) training, and ASR decoding. A state-of-the-art phoneme recognizer can obtain an accuracy rate of 65 percent when the training and testing data are free of noise. When it is used in spacesuits, the rate drops to about 33 percent. With the developed microphone array speech-processing technologies, the performance is improved and the phoneme recognition accuracy rate rises to 44 percent. The recognizer can be further improved by combining the microphone array and HMM model adaptation techniques and using speech samples collected from inside spacesuits. In addition, arithmetic complexity models for the major HMM-based ASR components were developed. They can help real-time ASR system designers select proper tasks when facing constraints on computational resources.
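    One stage of the pipeline listed above, speech feature extraction with transformation and normalization, is sketched below under the common assumption of MFCC features with cepstral mean/variance normalization; the beamforming, HMM training and decoding stages are omitted, and the file name is hypothetical.

```python
# Minimal sketch of an ASR front end (feature extraction + normalization).
# Parameter choices (16 kHz audio, 25 ms / 10 ms frames, 13 MFCCs) are
# conventional defaults, not the values used in the system described.
import librosa
import numpy as np

def mfcc_features(wav_path: str, sr: int = 16000, n_mfcc: int = 13) -> np.ndarray:
    """Return cepstral-mean-and-variance-normalised MFCCs (frames x coefficients)."""
    signal, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc,
                                n_fft=400, hop_length=160)   # 25 ms window, 10 ms hop
    # Per-utterance mean/variance normalization of each coefficient track.
    mfcc = (mfcc - mfcc.mean(axis=1, keepdims=True)) / (mfcc.std(axis=1, keepdims=True) + 1e-8)
    # Append delta and delta-delta coefficients, as is common for HMM-based ASR.
    feats = np.vstack([mfcc, librosa.feature.delta(mfcc), librosa.feature.delta(mfcc, order=2)])
    return feats.T

# features = mfcc_features("eva_command.wav")  # hypothetical recording
```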

  20. Environmental Contamination of Normal Speech.

    Science.gov (United States)

    Harley, Trevor A.

    1990-01-01

    Environmentally contaminated speech errors (irrelevant words or phrases derived from the speaker's environment and erroneously incorporated into speech) are hypothesized to occur at a high level of speech processing, but with a relatively late insertion point. The data indicate that speech production processes are not independent of other…

  1. Emotionally conditioning the target-speech voice enhances recognition of the target speech under "cocktail-party" listening conditions.

    Science.gov (United States)

    Lu, Lingxi; Bao, Xiaohan; Chen, Jing; Qu, Tianshu; Wu, Xihong; Li, Liang

    2018-05-01

    Under a noisy "cocktail-party" listening condition with multiple people talking, listeners can use various perceptual/cognitive unmasking cues to improve recognition of the target speech against informational speech-on-speech masking. One potential unmasking cue is the emotion expressed in a speech voice, by means of certain acoustical features. However, it was unclear whether emotionally conditioning a target-speech voice that has none of the typical acoustical features of emotions (i.e., an emotionally neutral voice) can be used by listeners for enhancing target-speech recognition under speech-on-speech masking conditions. In this study we examined the recognition of target speech against a two-talker speech masker both before and after the emotionally neutral target voice was paired with a loud female screaming sound that has a marked negative emotional valence. The results showed that recognition of the target speech (especially the first keyword in a target sentence) was significantly improved by emotionally conditioning the target speaker's voice. Moreover, the emotional unmasking effect was independent of the unmasking effect of the perceived spatial separation between the target speech and the masker. Also, (skin conductance) electrodermal responses became stronger after emotional learning when the target speech and masker were perceptually co-located, suggesting an increase of listening efforts when the target speech was informationally masked. These results indicate that emotionally conditioning the target speaker's voice does not change the acoustical parameters of the target-speech stimuli, but the emotionally conditioned vocal features can be used as cues for unmasking target speech.

  2. Multilevel Analysis in Analyzing Speech Data

    Science.gov (United States)

    Guddattu, Vasudeva; Krishna, Y.

    2011-01-01

    The speech produced by human vocal tract is a complex acoustic signal, with diverse applications in phonetics, speech synthesis, automatic speech recognition, speaker identification, communication aids, speech pathology, speech perception, machine translation, hearing research, rehabilitation and assessment of communication disorders and many…

  3. Perceived Liveliness and Speech Comprehensibility in Aphasia: The Effects of Direct Speech in Auditory Narratives

    Science.gov (United States)

    Groenewold, Rimke; Bastiaanse, Roelien; Nickels, Lyndsey; Huiskes, Mike

    2014-01-01

    Background: Previous studies have shown that in semi-spontaneous speech, individuals with Broca's and anomic aphasia produce relatively many direct speech constructions. It has been claimed that in "healthy" communication direct speech constructions contribute to the liveliness, and indirectly to the comprehensibility, of speech.…

  4. Speech Enhancement by MAP Spectral Amplitude Estimation Using a Super-Gaussian Speech Model

    Directory of Open Access Journals (Sweden)

    Lotter Thomas

    2005-01-01

    Full Text Available This contribution presents two spectral amplitude estimators for acoustical background noise suppression based on maximum a posteriori estimation and super-Gaussian statistical modelling of the speech DFT amplitudes. The probability density function of the speech spectral amplitude is modelled with a simple parametric function, which allows a high approximation accuracy for Laplace- or Gamma-distributed real and imaginary parts of the speech DFT coefficients. Also, the statistical model can be adapted to optimally fit the distribution of the speech spectral amplitudes for a specific noise reduction system. Based on the super-Gaussian statistical model, computationally efficient maximum a posteriori speech estimators are derived, which outperform the commonly applied Ephraim-Malah algorithm.
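    The estimators described operate inside a standard short-time spectral analysis/modification/synthesis loop. The sketch below shows that surrounding framework with a plain Wiener gain and decision-directed a priori SNR estimation as a stand-in; it is not the super-Gaussian MAP estimator derived in the paper, and the noise estimate is simplified to the leading frames of the recording.

```python
# Minimal sketch of a gain-based spectral amplitude noise suppressor.
# The Wiener gain below is a placeholder for the paper's MAP estimator.
import numpy as np
import librosa

def spectral_gain_denoise(noisy: np.ndarray, sr: int, n_fft: int = 512,
                          hop: int = 128, alpha: float = 0.98) -> np.ndarray:
    stft = librosa.stft(noisy, n_fft=n_fft, hop_length=hop)
    power = np.abs(stft) ** 2
    # Crude noise PSD estimate: assume the first 10 frames are noise-only.
    noise_psd = power[:, :10].mean(axis=1, keepdims=True) + 1e-12
    out = np.empty_like(stft)
    prev_clean = np.zeros((power.shape[0], 1))
    for t in range(power.shape[1]):
        gamma = power[:, t:t + 1] / noise_psd                    # a posteriori SNR
        xi = alpha * prev_clean / noise_psd + (1 - alpha) * np.maximum(gamma - 1.0, 0.0)
        gain = xi / (1.0 + xi)                                   # Wiener gain (stand-in)
        out[:, t:t + 1] = gain * stft[:, t:t + 1]
        prev_clean = (gain ** 2) * power[:, t:t + 1]             # decision-directed update
    return librosa.istft(out, hop_length=hop, length=len(noisy))
```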

  5. Exploring the role of brain oscillations in speech perception in noise: Intelligibility of isochronously retimed speech

    Directory of Open Access Journals (Sweden)

    Vincent Aubanel

    2016-08-01

    Full Text Available A growing body of evidence shows that brain oscillations track speech. This mechanism is thought to maximise processing efficiency by allocating resources to important speech information, effectively parsing speech into units of appropriate granularity for further decoding. However, some aspects of this mechanism remain unclear. First, while periodicity is an intrinsic property of this physiological mechanism, speech is only quasi-periodic, so it is not clear whether periodicity would present an advantage in processing. Second, it is still a matter of debate which aspect of speech triggers or maintains cortical entrainment, from bottom-up cues such as fluctuations of the amplitude envelope of speech to higher level linguistic cues such as syntactic structure. We present data from a behavioural experiment assessing the effect of isochronous retiming of speech on speech perception in noise. Two types of anchor points were defined for retiming speech, namely syllable onsets and amplitude envelope peaks. For each anchor point type, retiming was implemented at two hierarchical levels, a slow time scale around 2.5 Hz and a fast time scale around 4 Hz. Results show that while any temporal distortion resulted in reduced speech intelligibility, isochronous speech anchored to P-centers (approximated by stressed syllable vowel onsets) was significantly more intelligible than a matched anisochronous retiming, suggesting a facilitative role of periodicity defined on linguistically motivated units in processing speech in noise.
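    A minimal sketch of the retiming idea follows, assuming anchor times (syllable onsets or P-centres) have already been extracted. This is not the authors' stimulus-generation code, and the phase-vocoder stretch is only one possible implementation.

```python
# Minimal sketch: stretch each inter-anchor segment so that successive
# anchors land on an isochronous grid. Audio before the first anchor and
# after the last anchor is ignored for brevity.
import numpy as np
import librosa

def isochronous_retime(y: np.ndarray, sr: int, anchors_s: np.ndarray) -> np.ndarray:
    """Retime y so that successive anchors become equally spaced in time."""
    mean_gap = np.diff(anchors_s).mean()                 # period of the isochronous grid
    targets = anchors_s[0] + mean_gap * np.arange(len(anchors_s))
    pieces = []
    for i in range(len(anchors_s) - 1):
        seg = y[int(anchors_s[i] * sr):int(anchors_s[i + 1] * sr)]
        # rate > 1 shortens the segment; output duration = input duration / rate.
        rate = (anchors_s[i + 1] - anchors_s[i]) / (targets[i + 1] - targets[i])
        pieces.append(librosa.effects.time_stretch(seg, rate=rate))
    return np.concatenate(pieces)
```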

  6. Ear, Hearing and Speech

    DEFF Research Database (Denmark)

    Poulsen, Torben

    2000-01-01

    An introduction is given to the anatomy and the function of the ear, basic psychoacoustic matters (hearing threshold, loudness, masking), the speech signal and speech intelligibility. The lecture note is written for the course: Fundamentals of Acoustics and Noise Control (51001).

  7. Re-establishing safer medical-circumcision-integrated initiation ceremonies for HIV prevention in a rural setting in Papua New Guinea. A multi-method acceptability study.

    Directory of Open Access Journals (Sweden)

    Clement Morris Manineng

    Full Text Available Efforts to stem the spread of Human Immunodeficiency Virus (HIV) in Papua New Guinea (PNG) are hampered by multiple interrelated factors including limited health services, extreme diversities in culture and language and highly prevalent gender inequity, domestic violence and poverty. In the rural district of Yangoru-Saussia, a revival of previously ceased male initiation ceremonies (MICs) is being considered for a comprehensive approach to HIV prevention. In this study, we explore the local acceptability of this undertaking including replacing traditional penile cutting practices with medical male circumcision (MMC). A multi-method study comprising three phases. Phase one, focus group discussions with male elders to explore locally appropriate approaches to HIV prevention; Phase two, interviews and a cross-sectional survey with community men and women to assess views on MICs that include MMC for HIV prevention; Phase three, interviews with cultural leaders and a cross-sectional survey to assess the acceptability of replacing traditional penile bleeding with MMC. Cultural leaders expressed that re-establishing MICs was locally appropriate for HIV prevention given the focus on character building and cultural preservation. Most surveyed participants (81.5%) supported re-establishing MICs and 92.2% supported adapting MICs with MMC. Changes to penile bleeding emerged as a contentious and contested issue given its cultural significance in symbolizing initiates' transition from childhood to adulthood. Participants were concerned about potential clash with modern education, introduced religious beliefs and limited government support in leadership and funding. Most people in this study in Yangoru-Saussia support re-establishing MICs and replacing traditional penile bleeding with MMC. This culturally-sensitive alignment of MMC (and HIV prevention) with revived MICs responds to a national health priority in PNG and acts as an example of providing culturally

  8. Music expertise shapes audiovisual temporal integration windows for speech, sinewave speech and music

    Directory of Open Access Journals (Sweden)

    Hwee Ling eLee

    2014-08-01

    Full Text Available This psychophysics study used musicians as a model to investigate whether musical expertise shapes the temporal integration window for audiovisual speech, sinewave speech or music. Musicians and non-musicians judged the audiovisual synchrony of speech, sinewave analogues of speech, and music stimuli at 13 audiovisual stimulus onset asynchronies (±360, ±300, ±240, ±180, ±120, ±60, and 0 ms). Further, we manipulated the duration of the stimuli by presenting sentences/melodies or syllables/tones. Critically, musicians relative to non-musicians exhibited significantly narrower temporal integration windows for both music and sinewave speech. Further, the temporal integration window for music decreased with the amount of music practice, but not with age of acquisition. In other words, the more musicians practiced piano in the past three years, the more sensitive they became to the temporal misalignment of visual and auditory signals. Collectively, our findings demonstrate that music practicing fine-tunes the audiovisual temporal integration window to various extents depending on the stimulus class. While the effect of piano practicing was most pronounced for music, it also generalized to other stimulus classes such as sinewave speech and to a marginally significant degree to natural speech.

  9. Effect of gap detection threshold on consistency of speech in children with speech sound disorder.

    Science.gov (United States)

    Sayyahi, Fateme; Soleymani, Zahra; Akbari, Mohammad; Bijankhan, Mahmood; Dolatshahi, Behrooz

    2017-02-01

    The present study examined the relationship between gap detection threshold and speech error consistency in children with speech sound disorder. The participants were children five to six years of age who were categorized into three groups: typical speech, consistent speech disorder (CSD) and inconsistent speech disorder (ISD). The phonetic gap detection threshold test was used for this study, which is a valid test comprising six syllables with inter-stimulus intervals between 20 and 300 ms. The participants were asked to listen to the recorded stimuli three times and indicate whether they heard one or two sounds. There was no significant difference between the typical and CSD groups (p=0.55), but there were significant differences in performance between the ISD and CSD groups and the ISD and typical groups (p=0.00). The ISD group discriminated between speech sounds at a higher threshold. Children with inconsistent speech errors could not distinguish speech sounds during time-limited phonetic discrimination. It is suggested that inconsistency in speech is a representation of inconsistency in auditory perception, which is caused by a high gap detection threshold. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Ethical tissue: a not-for-profit model for human tissue supply.

    Science.gov (United States)

    Adams, Kevin; Martin, Sandie

    2011-02-01

    Following legislative changes in 2004 and the establishment of the Human Tissue Authority, access to human tissues for biomedical research became a more onerous and tightly regulated process. Ethical Tissue was established to meet the growing demand for human tissues, using a process that provided ease of access by researchers whilst maintaining the highest ethical and regulatory standards. Establishing a licensed research tissue bank entailed meeting several key criteria covering ethical, legal, financial and logistical issues. A wide range of stakeholders, including the HTA, University of Bradford, flagged LREC, hospital trusts and clinical groups were also integral to the process.

  11. Speech Perception as a Multimodal Phenomenon

    OpenAIRE

    Rosenblum, Lawrence D.

    2008-01-01

    Speech perception is inherently multimodal. Visual speech (lip-reading) information is used by all perceivers and readily integrates with auditory speech. Imaging research suggests that the brain treats auditory and visual speech similarly. These findings have led some researchers to consider that speech perception works by extracting amodal information that takes the same form across modalities. From this perspective, speech integration is a property of the input information itself. Amodal s...

  12. Poor Speech Perception Is Not a Core Deficit of Childhood Apraxia of Speech: Preliminary Findings

    Science.gov (United States)

    Zuk, Jennifer; Iuzzini-Seigel, Jenya; Cabbage, Kathryn; Green, Jordan R.; Hogan, Tiffany P.

    2018-01-01

    Purpose: Childhood apraxia of speech (CAS) is hypothesized to arise from deficits in speech motor planning and programming, but the influence of abnormal speech perception in CAS on these processes is debated. This study examined speech perception abilities among children with CAS with and without language impairment compared to those with…

  13. Cross-Cultural Variation of Politeness Orientation & Speech Act Perception

    Directory of Open Access Journals (Sweden)

    Nisreen Naji Al-Khawaldeh

    2013-05-01

    Full Text Available This paper presents the findings of an empirical study which compares Jordanian and English native speakers’ perceptions about the speech act of thanking. The forty interviews conducted revealed some similarities but also remarkable cross-cultural differences relating to the significance of thanking, the variables affecting it, and the appropriate linguistic and paralinguistic choices, as well as their impact on the interpretation of thanking behaviour. The most important theoretical finding is that the data, while consistent with many views found in the existing literature, do not support Brown and Levinson’s (1987) claim that thanking is a speech act which intrinsically threatens the speaker’s negative face because it involves overt acceptance of an imposition on the speaker. Rather, thanking should be viewed as a means of establishing and sustaining social relationships. The study findings suggest that cultural variation in thanking is due to the high degree of sensitivity of this speech act to the complex interplay of a range of social and contextual variables, and point to some promising directions for further research.

  14. Biocrust re-establishment trials demonstrate beneficial prospects for mine site rehabilitation in semi-arid landscapes of Australia

    Science.gov (United States)

    Williams, Wendy; Williams, Stephen; Galea, Vic

    2015-04-01

    Biocrusts live at the interface between the atmosphere and the soil; powered by photosynthesis they strongly influence a range of soil micro-processes. At Jacinth-Ambrosia mine site, on the edge of the Nullarbor Plain (South Australia), biocrusts are a significant component of the semi-arid soil ecosystem and comprised mainly of cyanobacteria, lichens and mosses. Cyanobacteria directly contribute to soil surface stabilisation, regulation of soil moisture and, provide a biogeochemical pathway for carbon and nitrogen fertilisation. Following disturbance, rehabilitation processes are underpinned by early soil stabilisation that can be facilitated by physical crusts or bio-active crusts in which cyanobacteria are ideal soil surface colonisers. Biocrust growth trials were carried out in autumn and winter (2012) to test the re-establishment phases of highly disturbed topsoil associated with mine site operations. The substrate material originated from shallow calcareous sandy loam typically found in chenopod shrublands. The biocrust-rich substrates (1-5 cm) were crushed (biocrush) or fine sieved followed by an application of concentrated cyanobacterial inoculum. Each treatment comprised four replicated plots that were natural or moisture assisted (using subsurface mats). After initial saturation equal amounts of water were applied for 30 days at which time half of all of the plots were enclosed with plastic to increase humidity. From 30-60 days water was added as required and from 60-180 days all treatments were uncovered and subjected periodic wet-dry cycles. At 180 days diverse biocrusts had re-established across the majority of the treatments, incorporating a mix of cyanobacterial functional groups that were adapted to surface and subsurface habitats. There were no clear trends in diversity and abundance. Overall, the moisture assisted biocrush and sieved biocrush appeared to have 80% cyanobacterial diversity in common. Differences were found between the surface and

  15. Speech-language therapy for adolescents with written-language difficulties: The South African context

    Directory of Open Access Journals (Sweden)

    Danel Erasmus

    2013-11-01

    Method: A survey study was conducted, using a self-administered questionnaire. Twenty-two currently practising speech-language therapists who are registered members of the South African Speech-Language-Hearing Association (SASLHA) participated in the study. Results: The respondents indicated that they are aware of their role regarding adolescents with written-language difficulties. However, they feel that South African speech-language therapists are not fulfilling this role. Existing assessment tools and interventions for written-language difficulties are described as inadequate, and culturally and age inappropriate. Yet, the majority of the respondents feel that they are adequately equipped to work with adolescents with written-language difficulties, based on their own experience, self-study and secondary training. The respondents feel that training regarding effective collaboration with teachers is necessary to establish specific roles, and to promote speech-language therapy for adolescents among teachers. Conclusion: Further research is needed in developing appropriate assessment and intervention tools as well as improvement of training at an undergraduate level.

  16. Principles of speech coding

    CERN Document Server

    Ogunfunmi, Tokunbo

    2010-01-01

    It is becoming increasingly apparent that all forms of communication-including voice-will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks. Offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the

  17. The Neural Bases of Difficult Speech Comprehension and Speech Production: Two Activation Likelihood Estimation (ALE) Meta-Analyses

    Science.gov (United States)

    Adank, Patti

    2012-01-01

    The role of speech production mechanisms in difficult speech comprehension is the subject of on-going debate in speech science. Two Activation Likelihood Estimation (ALE) analyses were conducted on neuroimaging studies investigating difficult speech comprehension or speech production. Meta-analysis 1 included 10 studies contrasting comprehension…

  18. Puberty is an important developmental period for the establishment of adipose tissue mass and metabolic homeostasis.

    Science.gov (United States)

    Holtrup, Brandon; Church, Christopher D; Berry, Ryan; Colman, Laura; Jeffery, Elise; Bober, Jeremy; Rodeheffer, Matthew S

    2017-07-03

    Over the past 2 decades, the incidence of childhood obesity has risen dramatically. This recent rise in childhood obesity is particularly concerning as adults who were obese during childhood develop type II diabetes that is intractable to current forms of treatment compared with individuals who develop obesity in adulthood. While the mechanisms responsible for the exacerbated diabetic phenotype associated with childhood obesity are not clear, it is well known that childhood is an important time period for the establishment of normal white adipose tissue in humans. This association suggests that exposure to obesogenic stimuli during adipose development may have detrimental effects on adipose function and metabolic homeostasis. In this study, we identify the period of development associated with puberty, postnatal days 18-34, as critical for the establishment of normal adipose mass in mice. Exposure of mice to a high-fat diet only during this time period results in metabolic dysfunction, increased leptin expression, and increased adipocyte size in adulthood in the absence of sustained increased fat mass or body weight. These findings indicate that exposure to obesogenic stimuli during critical developmental periods has prolonged effects on adipose tissue function that may contribute to the exacerbated metabolic dysfunctions associated with childhood obesity.

  19. Metaheuristic applications to speech enhancement

    CERN Document Server

    Kunche, Prajna

    2016-01-01

    This book serves as a basic reference for those interested in the application of metaheuristics to speech enhancement. The major goal of the book is to explain the basic concepts of optimization methods and their use in heuristic optimization in speech enhancement to scientists, practicing engineers, and academic researchers in speech processing. The authors discuss why it has been a challenging problem for researchers to develop new enhancement algorithms that aid in the quality and intelligibility of degraded speech. They present powerful optimization methods to speech enhancement that can help to solve the noise reduction problems. Readers will be able to understand the fundamentals of speech processing as well as the optimization techniques, how the speech enhancement algorithms are implemented by utilizing optimization methods, and will be given the tools to develop new algorithms. The authors also provide a comprehensive literature survey regarding the topic.

  20. Systematic Studies of Modified Vocalization: The Effect of Speech Rate on Speech Production Measures during Metronome-Paced Speech in Persons Who Stutter

    Science.gov (United States)

    Davidow, Jason H.

    2014-01-01

    Background: Metronome-paced speech results in the elimination, or substantial reduction, of stuttering moments. The cause of fluency during this fluency-inducing condition is unknown. Several investigations have reported changes in speech pattern characteristics from a control condition to a metronome-paced speech condition, but failure to control…

  1. TongueToSpeech (TTS): Wearable wireless assistive device for augmented speech.

    Science.gov (United States)

    Marjanovic, Nicholas; Piccinini, Giacomo; Kerr, Kevin; Esmailbeigi, Hananeh

    2017-07-01

    Speech is an important aspect of human communication; individuals with speech impairment are unable to communicate vocally in real time. Our team has developed the TongueToSpeech (TTS) device with the goal of augmenting speech communication for the vocally impaired. The proposed device is a wearable wireless assistive device that incorporates a capacitive touch keyboard interface embedded inside a discrete retainer. This device connects to a computer, tablet or a smartphone via a Bluetooth connection. The developed TTS application converts text typed by the tongue into audible speech. Our studies have concluded that an 8-contact-point configuration between the tongue and the TTS device yields the best user precision and speed performance. On average, typing a phrase with the TTS device inside the oral cavity takes 2.5 times longer than typing the same phrase with the index finger on a T9 (Text on 9 keys) keyboard. In conclusion, we have developed a discrete noninvasive wearable device that allows vocally impaired individuals to communicate in real time.
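    A host-side sketch of the final step described (turning typed text into audible speech) is given below; the Bluetooth link is assumed to appear as a serial port, and the port name, line-based protocol and synthesis engine are illustrative assumptions, not the device's actual firmware or application.

```python
# Illustrative host-side sketch only: text received over a serial/Bluetooth
# link is spoken aloud with an off-the-shelf speech synthesis engine.
import pyttsx3
import serial  # pyserial; a Bluetooth SPP link typically appears as a serial port

def speak_typed_phrases(port: str = "/dev/rfcomm0", baud: int = 9600) -> None:
    engine = pyttsx3.init()
    with serial.Serial(port, baud, timeout=1) as link:
        while True:
            # Assumed protocol: the keypad sends one completed phrase per line.
            phrase = link.readline().decode("utf-8", errors="ignore").strip()
            if phrase:
                engine.say(phrase)
                engine.runAndWait()
```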

  2. Social eye gaze modulates processing of speech and co-speech gesture.

    Science.gov (United States)

    Holler, Judith; Schubotz, Louise; Kelly, Spencer; Hagoort, Peter; Schuetze, Manuela; Özyürek, Aslı

    2014-12-01

    In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech+gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker's preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients' speech processing suffers, gestures can enhance the comprehension of a speaker's message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Electrophysiological evidence for speech-specific audiovisual integration.

    Science.gov (United States)

    Baart, Martijn; Stekelenburg, Jeroen J; Vroomen, Jean

    2014-01-01

    Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to disentangle speech-specific (phonetic) integration from non-speech integration, we used Sine-Wave Speech (SWS) that was perceived as speech by half of the participants (they were in speech-mode), while the other half was in non-speech mode. Results showed that the N1 obtained with audiovisual stimuli peaked earlier than the N1 evoked by auditory-only stimuli. This lip-read induced speeding up of the N1 occurred for listeners in speech and non-speech mode. In contrast, if listeners were in speech-mode, lip-read speech also modulated the auditory P2, but not if listeners were in non-speech mode, thus revealing speech-specific audiovisual binding. Comparing ERPs for phonetically congruent audiovisual stimuli with ERPs for incongruent stimuli revealed an effect of phonetic stimulus congruency that started at ~200 ms after (in)congruence became apparent. Critically, akin to the P2 suppression, congruency effects were only observed if listeners were in speech mode, and not if they were in non-speech mode. Using identical stimuli, we thus confirm that audiovisual binding involves (partially) different neural mechanisms for sound processing in speech and non-speech mode. © 2013 Published by Elsevier Ltd.

  4. Leadership processes for re-engineering changes to the health care industry.

    Science.gov (United States)

    Guo, Kristina L

    2004-01-01

    As health care organizations seek innovative ways to change financing and delivery mechanisms due to escalated health care costs and increased competition, drastic changes are being sought in the form of re-engineering. This study discusses the leader's role of re-engineering in health care. It specifically addresses the reasons for failures in re-engineering and argues that success depends on senior level leaders playing a critical role. Existing studies lack comprehensiveness in establishing models of re-engineering and management guidelines. This research focuses on integrating re-engineering and leadership processes in health care by creating a step-by-step model. Particularly, it illustrates the four Es: Examination, Establishment, Execution and Evaluation, as a comprehensive re-engineering process that combines managerial roles and activities to result in successfully changed and reengineered health care organizations.

  5. Free Speech Yearbook 1978.

    Science.gov (United States)

    Phifer, Gregg, Ed.

    The 17 articles in this collection deal with theoretical and practical freedom of speech issues. The topics include: freedom of speech in Marquette Park, Illinois; Nazis in Skokie, Illinois; freedom of expression in the Confederate States of America; Robert M. LaFollette's arguments for free speech and the rights of Congress; the United States…

  6. Visual context enhanced. The joint contribution of iconic gestures and visible speech to degraded speech comprehension.

    NARCIS (Netherlands)

    Drijvers, L.; Özyürek, A.

    2017-01-01

    Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech

  7. Technological evaluation of gesture and speech interfaces for enabling dismounted soldier-robot dialogue

    Science.gov (United States)

    Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan

    2016-05-01

    With increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies to facilitate Soldier-robot communication during a spatial-navigation task with an autonomous robot. Semantically based gesture and speech spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated ISR mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. Performance and reliability of gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of experimental results demonstrated the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue based on the high classification accuracy and minimal training required to perform gesture commands.
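    A minimal sketch of how individually classified speech and gesture commands might be fused before being sent to the robot follows; the command lexicon, confidence threshold and agreement rule are illustrative assumptions, not the study's software.

```python
# Minimal late-fusion sketch: each modality produces a label + confidence for
# a fixed command lexicon, and the two are combined before execution.
from dataclasses import dataclass
from typing import Optional

COMMANDS = {"halt", "rally", "move_forward", "take_cover"}  # illustrative subset

@dataclass
class Recognition:
    label: str
    confidence: float  # 0..1

def fuse(speech: Recognition, gesture: Recognition, threshold: float = 0.6) -> Optional[str]:
    """Return a command to execute, or None if the evidence is too weak."""
    if speech.label == gesture.label and speech.label in COMMANDS:
        return speech.label                      # modalities agree: accept
    best = max((speech, gesture), key=lambda r: r.confidence)
    return best.label if best.confidence >= threshold and best.label in COMMANDS else None

print(fuse(Recognition("halt", 0.9), Recognition("halt", 0.7)))   # -> "halt"
```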

  8. Nonsurgical management of soft tissue around the restorations of maxillary anterior implants: a clinical report

    Directory of Open Access Journals (Sweden)

    Seyedan K

    2010-01-01

    Full Text Available Background and Aims: Soft tissue management and esthetics for the restoration of a single implant in the anterior maxilla are of great importance. Tissue training helps to develop a proper emergence profile and natural tooth appearance. The aim of this article was to report the nonsurgical management of undesirable contours of soft tissue around maxillary anterior implants to achieve an optimum appearance. Materials and Methods: A 23-year-old female with congenitally missing maxillary lateral incisors, after completion of fixed orthodontic treatment to gain enough space, received 2 dental implants. After second-phase surgery and a healing period, construction of the restorations was not possible through the conventional method because of severe soft tissue collapse. In this case, soft tissue contours were corrected using a provisional restoration and then the final restoration was made and delivered. Conclusion: Tissue training with a provisional restoration helps to re-establish normal gingival tissue contours and interdental papillae around the restoration of maxillary anterior implants.

  9. Neurophysiological Evidence That Musical Training Influences the Recruitment of Right Hemispheric Homologues for Speech Perception

    Directory of Open Access Journals (Sweden)

    McNeel Gordon Jantzen

    2014-03-01

    Full Text Available Musicians have a more accurate temporal and tonal representation of auditory stimuli than their non-musician counterparts (Kraus & Chandrasekaran, 2010; Parbery-Clark, Skoe, & Kraus, 2009; Zendel & Alain, 2008; Musacchia, Sams, Skoe, & Kraus, 2007). Musicians who are adept at the production and perception of music are also more sensitive to key acoustic features of speech such as voice onset timing and pitch. Together, these data suggest that musical training may enhance the processing of acoustic information for speech sounds. In the current study, we sought to provide neural evidence that musicians process speech and music in a similar way. We hypothesized that for musicians, right hemisphere areas traditionally associated with music are also engaged for the processing of speech sounds. In contrast, we predicted that in non-musicians processing of speech sounds would be localized to traditional left hemisphere language areas. Speech stimuli differing in voice onset time were presented using a dichotic listening paradigm. Subjects either indicated aural location for a specified speech sound or identified a specific speech sound from a directed aural location. Musical training effects and organization of acoustic features were reflected by activity in source generators of the P50. This included greater activation of the right middle temporal gyrus (MTG) and superior temporal gyrus (STG) in musicians. The findings demonstrate recruitment of the right hemisphere in musicians for discriminating speech sounds and a putative broadening of their language network. Musicians appear to have an increased sensitivity to acoustic features and enhanced selective attention to temporal features of speech that is facilitated by musical training and supported, in part, by right hemisphere homologues of established speech processing regions of the brain.

  10. Multisensory integration of speech sounds with letters vs. visual speech : only visual speech induces the mismatch negativity

    NARCIS (Netherlands)

    Stekelenburg, J.J.; Keetels, M.N.; Vroomen, J.H.M.

    2018-01-01

    Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect.

  11. Speech Research

    Science.gov (United States)

    Several articles addressing topics in speech research are presented. The topics include: exploring the functional significance of physiological tremor: a biospectroscopic approach; differences between experienced and inexperienced listeners to deaf speech; a language-oriented view of reading and its disabilities; phonetic factors in letter detection; categorical perception; short-term recall by deaf signers of American sign language; a common basis for auditory sensory storage in perception and immediate memory; phonological awareness and verbal short-term memory; initiation versus execution time during manual and oral counting by stutterers; trading relations in the perception of speech by five-year-old children; the role of the strap muscles in pitch lowering; phonetic validation of distinctive features; consonants and syllable boundaries; and vowel information in postvocalic frictions.

  12. Development of pharmaceuticals with radioactive rhenium for cancer therapy. Production of {sup 186}Re and {sup 188}Re, synthesis of labeled compounds and their biodistributions

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    Production of the radioactive rhenium isotopes {sup 186}Re and {sup 188}Re, and synthesis of their labeled compounds have been studied together with the biodistributions of the compounds. This work was carried out by the Working Group on Radioactive Rhenium, consisting of researchers of JAERI and some universities, in the Subcommittee for Production and Radiolabeling under the Consultative Committee of Research on Radioisotopes. For {sup 186}Re, production methods by the {sup 185}Re(n,{gamma}){sup 186}Re reaction in a reactor and by the {sup 186}W(p,n){sup 186}Re reaction with an accelerator, which can produce no-carrier-added {sup 186}Re, have been established. For {sup 188}Re, a production method by the double neutron capture reaction of {sup 186}W, which produces a {sup 188}W/{sup 188}Re generator, has been established. For labeling of bisphosphonate, DMSA, DTPA, DADS, aminomethylenephosphonate and some monoclonal antibodies with the radioactive rhenium isotopes, the optimum conditions, including pH, the amounts of reagents and so on, have been determined for each compound. The biodistributions of each of the labeled compounds in mice have also been obtained. (author)

  13. Represented Speech in Qualitative Health Research

    DEFF Research Database (Denmark)

    Musaeus, Peter

    2017-01-01

    Represented speech refers to speech where we reference somebody. Represented speech is an important phenomenon in everyday conversation, health care communication, and qualitative research. This case will draw first from a case study on physicians’ workplace learning and second from a case study on nurses’ apprenticeship learning. The aim of the case is to guide the qualitative researcher to use own and others’ voices in the interview and to be sensitive to represented speech in everyday conversation. Moreover, reported speech matters to health professionals who aim to represent the voice of their patients. Qualitative researchers and students might learn to encourage interviewees to elaborate different voices or perspectives. Qualitative researchers working with natural speech might pay attention to how people talk and use represented speech. Finally, represented speech might be relevant...

  14. Spectral integration in speech and non-speech sounds

    Science.gov (United States)

    Jacewicz, Ewa

    2005-04-01

    Spectral integration (or formant averaging) was proposed in vowel perception research to account for the observation that a reduction of the intensity of one of two closely spaced formants (as in /u/) produced a predictable shift in vowel quality [Delattre et al., Word 8, 195-210 (1952)]. A related observation was reported in psychoacoustics, indicating that when the components of a two-tone periodic complex differ in amplitude and frequency, its perceived pitch is shifted toward that of the more intense tone [Helmholtz, App. XIV (1875/1948)]. Subsequent research in both fields focused on the frequency interval that separates these two spectral components, in an attempt to determine the size of the bandwidth for spectral integration to occur. This talk will review the accumulated evidence for and against spectral integration within the hypothesized limit of 3.5 Bark for static and dynamic signals in speech perception and psychoacoustics. Based on similarities in the processing of speech and non-speech sounds, it is suggested that spectral integration may reflect a general property of the auditory system. A larger frequency bandwidth, possibly close to 3.5 Bark, may be utilized in integrating acoustic information, including speech, complex signals, or sound quality of a violin.
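    The 3.5-Bark criterion can be made concrete with a short sketch using the Zwicker and Terhardt approximation of the Bark scale; the example frequencies are only indicative formant values, not stimuli from the studies reviewed.

```python
# Minimal sketch of the 3.5-Bark criterion: convert two component frequencies
# to the Bark scale (Zwicker & Terhardt approximation) and check whether they
# fall within the hypothesised integration bandwidth.
import math

def hz_to_bark(f_hz: float) -> float:
    return 13.0 * math.atan(0.00076 * f_hz) + 3.5 * math.atan((f_hz / 7500.0) ** 2)

def within_integration_band(f1_hz: float, f2_hz: float, limit_bark: float = 3.5) -> bool:
    return abs(hz_to_bark(f1_hz) - hz_to_bark(f2_hz)) <= limit_bark

# Example: closely spaced F1 and F2 of a back vowel such as /u/.
print(within_integration_band(300.0, 700.0))   # -> True (difference ~3.5 Bark)
```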

  15. Measurement of speech parameters in casual speech of dementia patients

    NARCIS (Netherlands)

    Ossewaarde, Roelant; Jonkers, Roel; Jalvingh, Fedor; Bastiaanse, Yvonne

    Measurement of speech parameters in casual speech of dementia patients. Roelant Adriaan Ossewaarde (1,2), Roel Jonkers (1), Fedor Jalvingh (1,3), Roelien Bastiaanse (1). 1: CLCG, University of Groningen (NL); 2: HU University of Applied Sciences Utrecht (NL); 3: St. Marienhospital - Vechta, Geriatric Clinic Vechta

  16. Effect of the Number of Presentations on Listener Transcriptions and Reliability in the Assessment of Speech Intelligibility in Children

    Science.gov (United States)

    Lagerberg, Tove B.; Johnels, Jakob Åsberg; Hartelius, Lena; Persson, Christina

    2015-01-01

    Background: The assessment of intelligibility is an essential part of establishing the severity of a speech disorder. The intelligibility of a speaker is affected by a number of different variables relating, "inter alia," to the speech material, the listener and the listener task. Aims: To explore the impact of the number of…

  17. Seeing the talker’s face supports executive processing of speech in steady state noise

    OpenAIRE

    Sushmit eMishra; Thomas eLunner; Thomas eLunner; Thomas eLunner; Stefan eStenfelt; Stefan eStenfelt; Jerker eRönnberg; Mary eRudner

    2013-01-01

    Listening to speech in noise depletes cognitive resources, affecting speech processing. The present study investigated how remaining resources or cognitive spare capacity (CSC) can be deployed by young adults with normal hearing. We administered a test of CSC (CSCT, Mishra et al., 2013) along with a battery of established cognitive tests to 20 participants with normal hearing. In the CSCT, lists of two-digit numbers were presented with and without visual cues in quiet, as well as in steady-st...

  18. Usability Assessment of Text-to-Speech Synthesis for Additional Detail in an Automated Telephone Banking System

    OpenAIRE

    Morton , Hazel; Gunson , Nancie; Marshall , Diarmid; McInnes , Fergus; Ayres , Andrea; Jack , Mervyn

    2010-01-01

    This paper describes a comprehensive usability evaluation of an automated telephone banking system which employs text-to-speech (TTS) synthesis in offering additional detail on customers' account transactions. The paper describes a series of four experiments in which TTS was employed to offer an extra level of detail to recent transactions listings within an established banking service which otherwise uses recorded speech from a professional recording artist. Results from ...

  19. Relating speech production to tongue muscle compressions using tagged and high-resolution magnetic resonance imaging

    Science.gov (United States)

    Xing, Fangxu; Ye, Chuyang; Woo, Jonghye; Stone, Maureen; Prince, Jerry

    2015-03-01

    The human tongue is composed of multiple internal muscles that work collaboratively during the production of speech. Assessment of muscle mechanics can help understand the creation of tongue motion, interpret clinical observations, and predict surgical outcomes. Although various methods have been proposed for computing the tongue's motion, associating motion with muscle activity in an interdigitated fiber framework has not been studied. In this work, we aim to develop a method that reveals different tongue muscles' activities in different time phases during speech. We use four-dimensional tagged magnetic resonance (MR) images and static high-resolution MR images to obtain tongue motion and muscle anatomy, respectively. Then we compute strain tensors and local tissue compression along the muscle fiber directions in order to reveal their shortening pattern. This process relies on multiple image analysis methods, including super-resolution volume reconstruction from MR image slices, segmentation of internal muscles, tracking the incompressible motion of tissue points using tagged images, propagation of muscle fiber directions over time, and calculation of strain in the line of action. We evaluated the method on a control subject and two postglossectomy patients in a controlled speech task. The normal subject's tongue muscle activity shows high correspondence with the production of speech at different time instants, while both patients' muscle activities show different patterns from the control due to their resected tongues. This method shows potential for relating overall tongue motion to particular muscle activity, which may provide novel information for future clinical and scientific studies.
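
    The last computational step mentioned above, strain "in the line of action", reduces to projecting a strain tensor onto a fiber direction. The following is a minimal Python/NumPy sketch of that projection only; the tensor, the fiber vector and the function name are illustrative, and the authors' full pipeline (super-resolution reconstruction, tag tracking, tractography) is not reproduced here.

        # Minimal sketch of the projection step: normal strain along a muscle-fiber
        # direction, e_f = f^T E f. Generic continuum mechanics, not the authors'
        # tagged-MRI pipeline.
        import numpy as np

        def strain_along_fiber(strain_tensor: np.ndarray, fiber_dir: np.ndarray) -> float:
            """Project a 3x3 strain tensor onto a (unit) fiber direction."""
            f = np.asarray(fiber_dir, dtype=float)
            f = f / np.linalg.norm(f)                      # ensure a unit direction
            return float(f @ np.asarray(strain_tensor, dtype=float) @ f)

        # Illustrative values: ~5% shortening along x, fiber running mostly along x.
        E = np.array([[-0.05, 0.01, 0.00],
                      [ 0.01, 0.02, 0.00],
                      [ 0.00, 0.00, 0.01]])
        fiber = np.array([0.9, 0.1, 0.0])
        print(strain_along_fiber(E, fiber))                # approx -0.047: fiber shortening (compression)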

  20. to enhance re-election

    African Journals Online (AJOL)

    PROF. O. E. OSUAGWU

    2013-12-01

    Dec 1, 2013 ... West African Journal of Industrial and Academic Research ... speeches, making speeches, writing speeches ... directing films, planning and producing exhibits for ... term "user." This operational freedom is ...

  1. Development of The Viking Speech Scale to classify the speech of children with cerebral palsy.

    Science.gov (United States)

    Pennington, Lindsay; Virella, Daniel; Mjøen, Tone; da Graça Andrada, Maria; Murray, Janice; Colver, Allan; Himmelmann, Kate; Rackauskaite, Gija; Greitane, Andra; Prasauskiene, Audrone; Andersen, Guro; de la Cruz, Javier

    2013-10-01

    Surveillance registers monitor the prevalence of cerebral palsy and the severity of resulting impairments across time and place. The motor disorders of cerebral palsy can affect children's speech production and limit their intelligibility. We describe the development of a scale to classify children's speech performance for use in cerebral palsy surveillance registers, and its reliability across raters and across time. Speech and language therapists, other healthcare professionals and parents classified the speech of 139 children with cerebral palsy (85 boys, 54 girls; mean age 6.03 years, SD 1.09) from observation and previous knowledge of the children. Another group of health professionals rated children's speech from information in their medical notes. With the exception of parents, raters reclassified children's speech at least four weeks after their initial classification. Raters were asked to rate how easy the scale was to use and how well the scale described the child's speech production using Likert scales. Inter-rater reliability was moderate to substantial (k>.58 for all comparisons). Test-retest reliability was substantial to almost perfect for all groups (k>.68). Over 74% of raters found the scale easy or very easy to use; 66% of parents and over 70% of health care professionals judged the scale to describe children's speech well or very well. We conclude that the Viking Speech Scale is a reliable tool to describe the speech performance of children with cerebral palsy, which can be applied through direct observation of children or through case note review. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Perceptual sensitivity to spectral properties of earlier sounds during speech categorization.

    Science.gov (United States)

    Stilp, Christian E; Assgari, Ashley A

    2018-02-28

    Speech perception is heavily influenced by surrounding sounds. When spectral properties differ between earlier (context) and later (target) sounds, this can produce spectral contrast effects (SCEs) that bias perception of later sounds. For example, when context sounds have more energy in low-F1 frequency regions, listeners report more high-F1 responses to a target vowel, and vice versa. SCEs have been reported using various approaches for a wide range of stimuli, but most often, large spectral peaks were added to the context to bias speech categorization. This obscures the lower limit of perceptual sensitivity to spectral properties of earlier sounds, i.e., when SCEs begin to bias speech categorization. Listeners categorized vowels (/ɪ/-/ɛ/, Experiment 1) or consonants (/d/-/g/, Experiment 2) following a context sentence with little spectral amplification (+1 to +4 dB) in frequency regions known to produce SCEs. In both experiments, +3 and +4 dB amplification in key frequency regions of the context produced SCEs, but lesser amplification was insufficient to bias performance. This establishes a lower limit of perceptual sensitivity where spectral differences across sounds can bias subsequent speech categorization. These results are consistent with proposed adaptation-based mechanisms that potentially underlie SCEs in auditory perception. Recent sounds can change what speech sounds we hear later. This can occur when the average frequency composition of earlier sounds differs from that of later sounds, biasing how they are perceived. These "spectral contrast effects" are widely observed when sounds' frequency compositions differ substantially. We reveal the lower limit of these effects, as +3 dB amplification of key frequency regions in earlier sounds was enough to bias categorization of the following vowel or consonant sound. Speech categorization being biased by very small spectral differences across sounds suggests that spectral contrast effects occur
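
    The manipulation described above amounts to boosting a chosen frequency region of the context by a few decibels. The sketch below is an illustrative Python version of that kind of band-level gain using a simple FFT-domain multiplication; the band edges, the +3 dB gain and the noise stand-in for a context sentence are assumptions and do not reproduce the filters used in the experiments.

        # Illustrative sketch: boost one frequency region of a context signal by a
        # fixed gain in dB. Not the stimuli or filters of the cited experiments.
        import numpy as np

        def amplify_band(signal: np.ndarray, fs: float, f_lo: float, f_hi: float, gain_db: float) -> np.ndarray:
            """Scale the [f_lo, f_hi] Hz region of `signal` by `gain_db` decibels."""
            spectrum = np.fft.rfft(signal)
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            band = (freqs >= f_lo) & (freqs <= f_hi)
            spectrum[band] *= 10.0 ** (gain_db / 20.0)
            return np.fft.irfft(spectrum, n=len(signal))

        fs = 16000
        context = np.random.randn(fs) * 0.1                  # stand-in for a context sentence
        low_f1_biased = amplify_band(context, fs, 100.0, 400.0, gain_db=3.0)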

  3. Carbon gas fluxes in re-established wetlands on organic soils differ relative to plant community and hydrology

    Science.gov (United States)

    Miller, Robin L.

    2011-01-01

    We measured CO2 and CH4 fluxes for 6 years following permanent flooding of an agriculturally managed organic soil at two water depths (~25 and ~55 cm standing water) in the Sacramento–San Joaquin Delta, California, as part of research studying C dynamics in re-established wetlands. Flooding rapidly reduced gaseous C losses, and radiocarbon data showed that this, in part, was due to reduced oxidation of "old" C preserved in the organic soils. Both CO2 and CH4 emissions from the water surface increased during the first few growing seasons, concomitant with emergent marsh establishment, and thereafter appeared to stabilize according to plant communities. Areas of emergent marsh vegetation in the shallower wetland had greater net CO2 influx (-485 mg C m-2 h-1) and lower CH4 emissions (11.5 mg C m-2 h-1) than in the deeper wetland (-381 and 14.1 mg C m-2 h-1, respectively). Areas with submerged and floating vegetation in the deeper wetland had CH4 emissions similar to emergent vegetation (11.9 and 12.6 mg C m-2 h-1, respectively), despite lower net CO2 influx (-102 mg C m-2 h-1). Measurements of plant-moderated net CO2 influx and CH4 efflux indicated greatest potential reduction of greenhouse gases in the more shallowly flooded wetland.

  4. Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension

    Science.gov (United States)

    Drijvers, Linda; Ozyurek, Asli

    2017-01-01

    Purpose: This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Method:…

  5. Speech enhancement using emotion dependent codebooks

    NARCIS (Netherlands)

    Naidu, D.H.R.; Srinivasan, S.

    2012-01-01

    Several speech enhancement approaches utilize trained models of clean speech data, such as codebooks, Gaussian mixtures, and hidden Markov models. These models are typically trained on neutral clean speech data, without any emotion. However, in practical scenarios, emotional speech is a common

  6. Strain Map of the Tongue in Normal and ALS Speech Patterns from Tagged and Diffusion MRI.

    Science.gov (United States)

    Xing, Fangxu; Prince, Jerry L; Stone, Maureen; Reese, Timothy G; Atassi, Nazem; Wedeen, Van J; El Fakhri, Georges; Woo, Jonghye

    2018-02-01

    Amyotrophic Lateral Sclerosis (ALS) is a neurological disease that causes death of neurons controlling muscle movements. Loss of speech and swallowing functions is a major impact due to degeneration of the tongue muscles. In speech studies using magnetic resonance (MR) techniques, diffusion tensor imaging (DTI) is used to capture internal tongue muscle fiber structures in three dimensions (3D) in a non-invasive manner. Tagged magnetic resonance images (tMRI) are used to record tongue motion during speech. In this work, we aim to combine information obtained with both MR imaging techniques to compare the functionality characteristics of the tongue between normal and ALS subjects. We first extracted 3D motion of the tongue using tMRI from fourteen normal subjects during speech. The estimated motion sequences were then warped using diffeomorphic registration into the b0 spaces of the DTI data of two normal subjects and an ALS patient. We then constructed motion atlases by averaging all warped motion fields in each b0 space, and computed strain in the line of action along the muscle fiber directions provided by tractography. Strain in line with the fiber directions provides a quantitative map of the potential active region of the tongue during speech. Comparison between normal and ALS subjects explores how the volume of compressing tongue tissue during speech changes in the face of muscle degeneration. The proposed framework provides for the first time a dynamic map of contracting fibers in ALS speech patterns, and has the potential to provide more insight into the detrimental effects of ALS on speech.

  7. Linguistic contributions to speech-on-speech masking for native and non-native listeners: Language familiarity and semantic content

    Science.gov (United States)

    Brouwer, Susanne; Van Engen, Kristin J.; Calandruccio, Lauren; Bradlow, Ann R.

    2012-01-01

    This study examined whether speech-on-speech masking is sensitive to variation in the degree of similarity between the target and the masker speech. Three experiments investigated whether speech-in-speech recognition varies across different background speech languages (English vs Dutch) for both English and Dutch targets, as well as across variation in the semantic content of the background speech (meaningful vs semantically anomalous sentences), and across variation in listener status vis-à-vis the target and masker languages (native, non-native, or unfamiliar). The results showed that the more similar the target speech is to the masker speech (e.g., same vs different language, same vs different levels of semantic content), the greater the interference on speech recognition accuracy. Moreover, the listener's knowledge of the target and the background language modulates the size of the release from masking. These factors had an especially strong effect on masking effectiveness in highly unfavorable listening conditions. Overall this research provided evidence that the degree of target-masker similarity plays a significant role in speech-in-speech recognition. The results also give insight into how listeners assign their resources differently depending on whether they are listening to their first or second language. PMID:22352516

  8. Speech-specificity of two audiovisual integration effects

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2010-01-01

    Seeing the talker's articulatory mouth movements can influence the auditory speech percept both in speech identification and detection tasks. Here we show that these audiovisual integration effects also occur for sine wave speech (SWS), which is an impoverished speech signal that naïve observers often fail to perceive as speech. While audiovisual integration in the identification task only occurred when observers were informed of the speech-like nature of SWS, integration occurred in the detection task both for informed and naïve observers. This shows that both speech-specific and general mechanisms underlie audiovisual integration of speech.

  9. The development of multisensory speech perception continues into the late childhood years.

    Science.gov (United States)

    Ross, Lars A; Molholm, Sophie; Blanco, Daniella; Gomez-Ramirez, Manuel; Saint-Amour, Dave; Foxe, John J

    2011-06-01

    Observing a speaker's articulations substantially improves the intelligibility of spoken speech, especially under noisy listening conditions. This multisensory integration of speech inputs is crucial to effective communication. Appropriate development of this ability has major implications for children in classroom and social settings, and deficits in it have been linked to a number of neurodevelopmental disorders, especially autism. It is clear from structural imaging studies that there is a prolonged maturational course within regions of the perisylvian cortex that persists into late childhood, and these regions have been firmly established as being crucial to speech and language functions. Given this protracted maturational timeframe, we reasoned that multisensory speech processing might well show a similarly protracted developmental course. Previous work in adults has shown that audiovisual enhancement in word recognition is most apparent within a restricted range of signal-to-noise ratios (SNRs). Here, we investigated when these properties emerge during childhood by testing multisensory speech recognition abilities in typically developing children aged between 5 and 14 years, and comparing them with those of adults. By parametrically varying SNRs, we found that children benefited significantly less from observing visual articulations, displaying considerably less audiovisual enhancement. The findings suggest that improvement in the ability to recognize speech-in-noise and in audiovisual integration during speech perception continues quite late into the childhood years. The implication is that a considerable amount of multisensory learning remains to be achieved during the later schooling years, and that explicit efforts to accommodate this learning may well be warranted. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd. No claim to original US government works.

  10. Recognizing speech in a novel accent: the motor theory of speech perception reframed.

    Science.gov (United States)

    Moulin-Frier, Clément; Arbib, Michael A

    2013-08-01

    The motor theory of speech perception holds that we perceive the speech of another in terms of a motor representation of that speech. However, when we have learned to recognize a foreign accent, it seems plausible that recognition of a word rarely involves reconstruction of the speech gestures of the speaker rather than the listener. To better assess the motor theory and this observation, we proceed in three stages. Part 1 places the motor theory of speech perception in a larger framework based on our earlier models of the adaptive formation of mirror neurons for grasping, and for viewing extensions of that mirror system as part of a larger system for neuro-linguistic processing, augmented by the present consideration of recognizing speech in a novel accent. Part 2 then offers a novel computational model of how a listener comes to understand the speech of someone speaking the listener's native language with a foreign accent. The core tenet of the model is that the listener uses hypotheses about the word the speaker is currently uttering to update probabilities linking the sound produced by the speaker to phonemes in the native language repertoire of the listener. This, on average, improves the recognition of later words. This model is neutral regarding the nature of the representations it uses (motor vs. auditory). It serves as a reference point for the discussion in Part 3, which proposes a dual-stream neuro-linguistic architecture to revisit claims for and against the motor theory of speech perception and the relevance of mirror neurons, and extracts some implications for the reframing of the motor theory.
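
    The core tenet described in Part 2 is essentially an incremental re-estimation of how the speaker's sounds map onto native phoneme categories. The toy Python sketch below illustrates that idea with simple smoothed counts; the class, its method names and the example alignment are hypothetical and are not the authors' computational model.

        # Toy sketch of the adaptation idea: once a word hypothesis is accepted,
        # update counts linking observed sounds to native phoneme categories.
        from collections import defaultdict

        class AccentAdapter:
            def __init__(self):
                # counts[native_phoneme][observed_sound] = how often that sound realized the phoneme
                self.counts = defaultdict(lambda: defaultdict(float))

            def update(self, word_phonemes, observed_sounds):
                """After committing to a word hypothesis, align its phonemes with the sounds heard."""
                for phoneme, sound in zip(word_phonemes, observed_sounds):
                    self.counts[phoneme][sound] += 1.0

            def prob(self, sound, phoneme, smoothing=1.0, n_sound_types=40):
                """Smoothed estimate of P(sound | phoneme); sharpens as more words are recognized."""
                total = sum(self.counts[phoneme].values())
                return (self.counts[phoneme][sound] + smoothing) / (total + smoothing * n_sound_types)

        adapter = AccentAdapter()
        adapter.update(["th", "i", "s"], ["z", "i", "s"])    # accented "this": /th/ realized as [z]
        print(adapter.prob("z", "th"))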

  11. Advocate: A Distributed Architecture for Speech-to-Speech Translation

    Science.gov (United States)

    2009-01-01

    ...tecture, are either wrapped natural-language processing (NLP) components or objects developed from scratch using the architecture's API. GATE is ... framework, we put together a demonstration Arabic-to-English speech translation system using both internally developed (Arabic speech recognition and MT ... conditions of our Arabic S2S demonstration system described earlier. Once again, the data size was varied and eighty identical requests were

  12. Potential influence of new doses of A-bomb after re-evaluation of epidemiological research

    International Nuclear Information System (INIS)

    Maruyama, T.

    1983-01-01

    Since the peaceful use of atomic energy appears essential for future human existence, we must provide risk estimates from low-dose exposures to human beings. The largest body of human data has been derived from the studies of atomic bomb survivors in Hiroshima and Nagasaki. Recently, it was proposed by an Oak Ridge National Laboratory group that the current free-in-air doses of atomic bombs are significantly different from the doses recalculated on the basis of the new output spectra of neutrons and gamma rays from the atomic bombs which were declassified by the US Department of Energy in 1976. A joint commission on dose re-evaluation of the United States of America and Japan was established in 1981 to pursue the dose reassessment programme between US and Japanese research groups and to decide an agreed best estimate of organ or tissue doses in survivors as soon as possible. The paper reviews the physical concepts of the re-evaluation of atomic bomb doses and discusses the potential influence of new dosimetric parameters on the epidemiological studies of the atomic bomb survivors in future, although the re-assessment programme is still in progress. (author)

  13. Using the Speech Transmission Index for predicting non-native speech intelligibility

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Houtgast, T.; Steeneken, H.J.M.

    2004-01-01

    While the Speech Transmission Index (STI) is widely applied for prediction of speech intelligibility in room acoustics and telecommunication engineering, it is unclear how to interpret STI values when non-native talkers or listeners are involved. Based on subjectively measured psychometric functions
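
    For readers unfamiliar with the index itself, the core STI calculation maps measured modulation transfer values to apparent signal-to-noise ratios and averages the resulting transmission indices. The Python sketch below is a simplified version of that step only; the full standardized procedure (IEC 60268-16) adds octave-band weightings and redundancy corrections that are omitted here, and the example m-values are invented.

        # Simplified core of the STI computation: modulation transfer values m are
        # mapped to apparent SNRs, clipped to +/-15 dB, normalized, and averaged.
        import numpy as np

        def sti_from_mtf(m: np.ndarray) -> float:
            """m: modulation transfer values in (0, 1), e.g. 7 octave bands x 14 modulation freqs."""
            m = np.clip(np.asarray(m, dtype=float), 1e-6, 1.0 - 1e-6)
            snr_apparent = 10.0 * np.log10(m / (1.0 - m))      # apparent SNR per cell
            snr_apparent = np.clip(snr_apparent, -15.0, 15.0)  # limit to +/-15 dB
            transmission_index = (snr_apparent + 15.0) / 30.0  # map to [0, 1]
            return float(transmission_index.mean())            # unweighted average

        # A clean channel (m close to 1) scores near 1; a degraded channel scores lower.
        print(sti_from_mtf(np.full((7, 14), 0.95)))  # about 0.93
        print(sti_from_mtf(np.full((7, 14), 0.40)))  # about 0.44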

  14. Speech Planning Happens before Speech Execution: Online Reaction Time Methods in the Study of Apraxia of Speech

    Science.gov (United States)

    Maas, Edwin; Mailend, Marja-Liisa

    2012-01-01

    Purpose: The purpose of this article is to present an argument for the use of online reaction time (RT) methods to the study of apraxia of speech (AOS) and to review the existing small literature in this area and the contributions it has made to our fundamental understanding of speech planning (deficits) in AOS. Method: Following a brief…

  15. Predicting speech intelligibility in adverse conditions: evaluation of the speech-based envelope power spectrum model

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2011-01-01

    The speech-based envelope power spectrum model (sEPSM) [Jørgensen and Dau (2011). J. Acoust. Soc. Am., 130 (3), 1475–1487] estimates the envelope signal-to-noise ratio (SNRenv) of distorted speech and accurately describes the speech recognition thresholds (SRT) for normal-hearing listeners ... conditions by comparing predictions to measured data from [Kjems et al. (2009). J. Acoust. Soc. Am. 126 (3), 1415-1426] where speech is mixed with four different interferers, including speech-shaped noise, bottle noise, car noise, and cafe noise. The model accounts well for the differences in intelligibility observed for the different interferers. None of the standardized models successfully describe these data.
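
    To make the central quantity concrete, the toy Python sketch below illustrates the SNRenv idea: the envelope power of the noisy speech is compared with that of the noise alone. The envelope extraction, normalization and test signals here are simplifications and assumptions; the published sEPSM additionally uses peripheral and modulation filterbanks and an ideal-observer back end.

        # Toy SNRenv sketch: envelope power (normalized AC power of the Hilbert
        # envelope) of noisy speech versus noise alone. Heavily simplified.
        import numpy as np
        from scipy.signal import hilbert

        def envelope_power(x: np.ndarray) -> float:
            """AC power of the Hilbert envelope, normalized by its squared mean."""
            env = np.abs(hilbert(x))
            return float(np.var(env) / (np.mean(env) ** 2 + 1e-12))

        def snr_env(noisy_speech: np.ndarray, noise: np.ndarray) -> float:
            p_mix = envelope_power(noisy_speech)
            p_noise = envelope_power(noise)
            return max(p_mix - p_noise, 1e-12) / (p_noise + 1e-12)

        fs = 16000
        t = np.arange(fs) / fs
        speech_like = np.sin(2 * np.pi * 4 * t) * np.random.randn(fs)  # 4-Hz modulated noise carrier
        noise = np.random.randn(fs)
        print(snr_env(speech_like + noise, noise))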

  16. Tissue-Point Motion Tracking in the Tongue from Cine MRI and Tagged MRI

    Science.gov (United States)

    Woo, Jonghye; Stone, Maureen; Suo, Yuanming; Murano, Emi Z.; Prince, Jerry L.

    2014-01-01

    Purpose: Accurate tissue motion tracking within the tongue can help professionals diagnose and treat vocal tract-related disorders, evaluate speech quality before and after surgery, and conduct various scientific studies. The authors compared tissue tracking results from 4 widely used deformable registration (DR) methods applied to cine magnetic…

  17. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  18. The Role of Music in Speech Intelligibility of Learners with Post Lingual Hearing Impairment in Selected Units in Lusaka District

    Science.gov (United States)

    Katongo, Emily Mwamba; Ndhlovu, Daniel

    2015-01-01

    This study sought to establish the role of music in speech intelligibility of learners with Post Lingual Hearing Impairment (PLHI) and strategies teachers used to enhance speech intelligibility in learners with PLHI in selected special units for the deaf in Lusaka district. The study used a descriptive research design. Qualitative and quantitative…

  19. Cleft Audit Protocol for Speech (CAPS-A): A Comprehensive Training Package for Speech Analysis

    Science.gov (United States)

    Sell, D.; John, A.; Harding-Bell, A.; Sweeney, T.; Hegarty, F.; Freeman, J.

    2009-01-01

    Background: The previous literature has largely focused on speech analysis systems and ignored process issues, such as the nature of adequate speech samples, data acquisition, recording and playback. Although there has been recognition of the need for training on tools used in speech analysis associated with cleft palate, little attention has been…

  20. Perceived liveliness and speech comprehensibility in aphasia : the effects of direct speech in auditory narratives

    NARCIS (Netherlands)

    Groenewold, Rimke; Bastiaanse, Roelien; Nickels, Lyndsey; Huiskes, Mike

    2014-01-01

    Background: Previous studies have shown that in semi-spontaneous speech, individuals with Broca's and anomic aphasia produce relatively many direct speech constructions. It has been claimed that in 'healthy' communication direct speech constructions contribute to the liveliness, and indirectly to

  1. Preschool speech intelligibility and vocabulary skills predict long-term speech and language outcomes following cochlear implantation in early childhood.

    Science.gov (United States)

    Castellanos, Irina; Kronenberger, William G; Beer, Jessica; Henning, Shirley C; Colson, Bethany G; Pisoni, David B

    2014-07-01

    Speech and language measures during grade school predict adolescent speech-language outcomes in children who receive cochlear implants (CIs), but no research has examined whether speech and language functioning at even younger ages is predictive of long-term outcomes in this population. The purpose of this study was to examine whether early preschool measures of speech and language performance predict speech-language functioning in long-term users of CIs. Early measures of speech intelligibility and receptive vocabulary (obtained during preschool ages of 3-6 years) in a sample of 35 prelingually deaf, early-implanted children predicted speech perception, language, and verbal working memory skills up to 18 years later. Age of onset of deafness and age at implantation added additional variance to preschool speech intelligibility in predicting some long-term outcome scores, but the relationship between preschool speech-language skills and later speech-language outcomes was not significantly attenuated by the addition of these hearing history variables. These findings suggest that speech and language development during the preschool years is predictive of long-term speech and language functioning in early-implanted, prelingually deaf children. As a result, measures of speech-language functioning at preschool ages can be used to identify and adjust interventions for very young CI users who may be at long-term risk for suboptimal speech and language outcomes.

  2. Simulated rape, orgy, gory killings & hate speech

    DEFF Research Database (Denmark)

    Kierkegaard, Sylvia; Kierkegaard, Patrick

    2011-01-01

    Schwarzenegger v. Entertainment Merchants Association has been identified as one of the most important cases on games before the US Supreme Court and “the single most important challenge gaming has ever faced”. To resolve Schwarzenegger, the Justices will need to decide how much First Amendment ... If it follows established precedent dealing with freedom of speech, the sale of gratuitously violent video games to minors will continue, with contents for kids getting gorier, bloodier and grittier – all for fun, of course.

  3. Speech Clarity Index (Ψ): A Distance-Based Speech Quality Indicator and Recognition Rate Prediction for Dysarthric Speakers with Cerebral Palsy

    Science.gov (United States)

    Kayasith, Prakasith; Theeramunkong, Thanaruk

    It is a tedious and subjective task to measure the severity of dysarthria by manually evaluating a speaker's speech using available standard assessment methods based on human perception. This paper presents an automated approach to assess the speech quality of a dysarthric speaker with cerebral palsy. With the consideration of two complementary factors, speech consistency and speech distinction, a speech quality indicator called the speech clarity index (Ψ) is proposed as a measure of the speaker's ability to produce consistent speech signals for a certain word and distinguished speech signals for different words. As an application, it can be used to assess speech quality and forecast the speech recognition rate of speech made by an individual dysarthric speaker before actual exhaustive implementation of an automatic speech recognition system for the speaker. The effectiveness of Ψ as a speech recognition rate predictor is evaluated by rank-order inconsistency, correlation coefficient, and root-mean-square of difference. The evaluations were done by comparing its predicted recognition rates with those predicted by the standard methods, the articulatory and intelligibility tests, based on two recognition systems (HMM and ANN). The results show that Ψ is a promising indicator for predicting the recognition rate of dysarthric speech. All experiments were done on a speech corpus composed of speech data from eight normal speakers and eight dysarthric speakers.
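
    The general idea of a distance-based index of this kind is to reward low within-word variability (consistency) and high between-word separation (distinction). The Python sketch below is only a toy illustration of that contrast on per-utterance feature vectors; it is not the published definition of Ψ, and the feature vectors, distance measure and score formula are assumptions.

        # Toy consistency/distinction score on per-utterance feature vectors
        # (e.g., averaged MFCCs). Not the published clarity index.
        import numpy as np
        from itertools import combinations

        def mean_pairwise_distance(vectors) -> float:
            pairs = list(combinations(vectors, 2))
            return float(np.mean([np.linalg.norm(a - b) for a, b in pairs])) if pairs else 0.0

        def clarity_score(utterances_by_word: dict) -> float:
            """utterances_by_word: word -> list of feature vectors for repeated productions."""
            within = np.mean([mean_pairwise_distance(v) for v in utterances_by_word.values()])
            centroids = [np.mean(v, axis=0) for v in utterances_by_word.values()]
            between = mean_pairwise_distance(centroids)
            return float(between / (within + 1e-12))   # higher = more consistent and more distinct

        rng = np.random.default_rng(0)
        data = {w: [rng.normal(loc=i, scale=0.3, size=12) for _ in range(5)]
                for i, w in enumerate(["ba", "da", "ga"])}
        print(clarity_score(data))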

  4. Automated Speech Rate Measurement in Dysarthria

    Science.gov (United States)

    Martens, Heidi; Dekens, Tomas; Van Nuffelen, Gwen; Latacz, Lukas; Verhelst, Werner; De Bodt, Marc

    2015-01-01

    Purpose: In this study, a new algorithm for automated determination of speech rate (SR) in dysarthric speech is evaluated. We investigated how reliably the algorithm calculates the SR of dysarthric speech samples when compared with calculation performed by speech-language pathologists. Method: The new algorithm was trained and tested using Dutch…
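
    As background to the abstract above (which does not disclose the algorithm's details), a common family of speech-rate estimators counts intensity peaks, treated as approximate syllable nuclei, per second of speech. The Python sketch below is that generic heuristic, not the algorithm evaluated in the study; the frame length, threshold and minimum peak spacing are assumed values.

        # Generic energy-peak heuristic for approximate syllable rate; not the
        # study's algorithm.
        import numpy as np
        from scipy.signal import find_peaks

        def estimate_speech_rate(signal: np.ndarray, fs: int, frame_ms: float = 20.0) -> float:
            """Return an approximate syllable rate in peaks per second."""
            frame = int(fs * frame_ms / 1000.0)
            n_frames = len(signal) // frame
            energy = np.array([np.sum(signal[i * frame:(i + 1) * frame] ** 2) for i in range(n_frames)])
            energy = energy / (energy.max() + 1e-12)
            # peaks must exceed a height threshold and be at least ~100 ms apart
            peaks, _ = find_peaks(energy, height=0.1, distance=max(1, int(100.0 / frame_ms)))
            duration_s = n_frames * frame / fs
            return len(peaks) / duration_s if duration_s > 0 else 0.0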

  5. Simultaneous natural speech and AAC interventions for children with childhood apraxia of speech: lessons from a speech-language pathologist focus group.

    Science.gov (United States)

    Oommen, Elizabeth R; McCarthy, John W

    2015-03-01

    In childhood apraxia of speech (CAS), children exhibit varying levels of speech intelligibility depending on the nature of errors in articulation and prosody. Augmentative and alternative communication (AAC) strategies are beneficial, and commonly adopted with children with CAS. This study focused on the decision-making process and strategies adopted by speech-language pathologists (SLPs) when simultaneously implementing interventions that focused on natural speech and AAC. Eight SLPs, with significant clinical experience in CAS and AAC interventions, participated in an online focus group. Thematic analysis revealed eight themes: key decision-making factors; treatment history and rationale; benefits; challenges; therapy strategies and activities; collaboration with team members; recommendations; and other comments. Results are discussed along with clinical implications and directions for future research.

  6. Speech Recognition on Mobile Devices

    DEFF Research Database (Denmark)

    Tan, Zheng-Hua; Lindberg, Børge

    2010-01-01

    The enthusiasm of deploying automatic speech recognition (ASR) on mobile devices is driven both by remarkable advances in ASR technology and by the demand for efficient user interfaces on such devices as mobile phones and personal digital assistants (PDAs). This chapter presents an overview of ASR in the mobile context covering motivations, challenges, fundamental techniques and applications. Three ASR architectures are introduced: embedded speech recognition, distributed speech recognition and network speech recognition. Their pros and cons and implementation issues are discussed. Applications within ...

  7. Assessing recall in mothers' retrospective reports: concerns over children's speech and language development.

    Science.gov (United States)

    Russell, Ginny; Miller, Laura L; Ford, Tamsin; Golding, Jean

    2014-01-01

    Retrospective recall about children's symptoms is used to establish early developmental patterns in clinical practice and is also utilised in child psychopathology research. Some studies have indicated that the accuracy of retrospective recall is influenced by life events. Our hypothesis was that an intervention: speech and language therapy, would adversely affect the accuracy of parent recall of early concerns about their child's speech and language development. Mothers (n = 5,390) reported on their child's speech development (child male to female ratio = 50:50) when their children were aged 18 or 30 months, and also reported on these early concerns retrospectively, 10 years later, when their children were 13 years old. Overall reliability of retrospective recall was good, 86 % of respondents accurately recalling their earlier concerns. As hypothesised, however, the speech and language intervention was strongly associated with inaccurate retrospective recall about concerns in the early years (Relative Risk Ratio = 19.03; 95 % CI:14.78-24.48). Attendance at speech therapy was associated with increased recall of concerns that were not reported at the time. The study suggests caution is required when interpreting retrospective reports of abnormal child development as recall may be influenced by intervening events.

  8. Song and speech: examining the link between singing talent and speech imitation ability.

    Science.gov (United States)

    Christiner, Markus; Reiterer, Susanne M

    2013-01-01

    In previous research on speech imitation, musicality, and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer's sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and auditory memory with singing fitting better into the category of "speech" on the productive level and "music" on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. (1) Motor flexibility and the ability to sing improve language and musical function. (2) Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. (3) The ability to sing improves the memory span of the auditory working memory.

  9. Song and speech: examining the link between singing talent and speech imitation ability

    Directory of Open Access Journals (Sweden)

    Markus Christiner

    2013-11-01

    Full Text Available In previous research on speech imitation, musicality and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer's sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and sound memory, with singing fitting better into the category of "speech" on the productive level and "music" on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. 1. Motor flexibility and the ability to sing improve language and musical function. 2. Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. 3. The ability to sing improves the memory span of auditory short-term memory.

  10. Speech Alarms Pilot Study

    Science.gov (United States)

    Sandor, Aniko; Moses, Haifa

    2016-01-01

    Speech alarms have been used extensively in aviation and included in International Building Codes (IBC) and National Fire Protection Association's (NFPA) Life Safety Code. However, they have not been implemented on space vehicles. Previous studies conducted at NASA JSC showed that speech alarms lead to faster identification and higher accuracy. This research evaluated updated speech and tone alerts in a laboratory environment and in the Human Exploration Research Analog (HERA) in a realistic setup.

  11. Freedom of Speech Newsletter, September, 1975.

    Science.gov (United States)

    Allen, Winfred G., Jr., Ed.

    The Freedom of Speech Newsletter is the communication medium for the Freedom of Speech Interest Group of the Western Speech Communication Association. The newsletter contains such features as a statement of concern by the National Ad Hoc Committee Against Censorship; Reticence and Free Speech, an article by James F. Vickrey discussing the subtle…

  12. Automatic speech recognition used for evaluation of text-to-speech systems

    Czech Academy of Sciences Publication Activity Database

    Vích, Robert; Nouza, J.; Vondra, Martin

    -, no. 5042 (2008), pp. 136-148. ISSN 0302-9743. R&D Projects: GA AV ČR 1ET301710509; GA AV ČR 1QS108040569. Institutional research plan: CEZ:AV0Z20670512. Keywords: speech recognition; speech processing. Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering

  13. SynFace—Speech-Driven Facial Animation for Virtual Speech-Reading Support

    Directory of Open Access Journals (Sweden)

    Giampiero Salvi

    2009-01-01

    Full Text Available This paper describes SynFace, a supportive technology that aims at enhancing audio-based spoken communication in adverse acoustic conditions by providing the missing visual information in the form of an animated talking head. Firstly, we describe the system architecture, consisting of a 3D animated face model controlled from the speech input by a specifically optimised phonetic recogniser. Secondly, we report on speech intelligibility experiments with focus on multilinguality and robustness to audio quality. The system, already available for Swedish, English, and Flemish, was optimised for German and for Swedish wide-band speech quality available in TV, radio, and Internet communication. Lastly, the paper covers experiments with nonverbal motions driven from the speech signal. It is shown that turn-taking gestures can be used to affect the flow of human-human dialogues. We have focused specifically on two categories of cues that may be extracted from the acoustic signal: prominence/emphasis and interactional cues (turn-taking/back-channelling).

  14. The Effect of English Verbal Songs on Connected Speech Aspects of Adult English Learners’ Speech Production

    Directory of Open Access Journals (Sweden)

    Farshid Tayari Ashtiani

    2015-02-01

    Full Text Available The present study was an attempt to investigate the impact of English verbal songs on connected speech aspects of adult English learners' speech production. 40 participants were selected based on the results of their performance in a piloted and validated version of the NELSON test given to 60 intermediate English learners in a language institute in Tehran. Then they were equally distributed in two control and experimental groups and received a validated pretest of reading aloud and speaking in English. Afterward, the treatment was performed in 18 sessions by singing preselected songs chosen based on criteria such as popularity, familiarity, and amount and speed of speech delivery. In the end, the posttests of reading aloud and speaking in English were administered. The results revealed that the treatment had statistically significant positive effects on the connected speech aspects of English learners' speech production at the .05 level of significance. Meanwhile, the results showed that there was no significant difference between the experimental group's mean scores on the posttests of reading aloud and speaking. It was thus concluded that providing the EFL learners with English verbal songs could positively affect connected speech aspects of both modes of speech production, reading aloud and speaking. The findings of this study have pedagogical implications for language teachers to be more aware and knowledgeable of the benefits of verbal songs to promote speech production of language learners in terms of naturalness and fluency. Keywords: English Verbal Songs, Connected Speech, Speech Production, Reading Aloud, Speaking

  15. Application of Interpersonal Meaning in Hillary’s and Trump’s Election Speeches

    Directory of Open Access Journals (Sweden)

    Kuang Ping

    2017-12-01

    Full Text Available Presidential election speeches, as one significant part of western political life, deserve people's attention. This paper focuses on the use of interpersonal meaning in political speeches. The nine texts selected from the Internet are analyzed from the perspectives of mood, modality, personal pronoun and tense system based on the theory of Halliday's Systemic Functional Grammar. It aims to study how interpersonal meaning is realized through language by making a contrastive analysis of the speeches given by Hillary and Trump. After a detailed analysis, the paper comes to the following conclusions: (1) As for mood, Trump and Hillary mainly employ the declarative to deliver messages and make statements, the imperative is used to motivate the audiences and narrow the gap between the candidates and the audiences, and the interrogative is used to make the audiences concentrate on the content of the speeches. (2) With respect to the modality system, the median modal operator holds the dominant position in both Trump's and Hillary's speeches to make the speeches less aggressive. In this aspect, Trump does better than Hillary. (3) In regard to personal pronouns, the plural form of the first person pronoun is mainly employed by the two candidates to close the relationship with audiences. (4) With regard to the tense system, the simple present tense is mostly used to establish intimacy between the audiences and the candidates. Then two influential factors are discussed. One is their personal background and the other is their language levels. This paper helps readers to better understand the two candidates' language differences.

  16. An analysis of the masking of speech by competing speech using self-report data (L)

    OpenAIRE

    Agus, Trevor R.; Akeroyd, Michael A.; Noble, William; Bhullar, Navjot

    2009-01-01

    Many of the items in the “Speech, Spatial, and Qualities of Hearing” scale questionnaire [S. Gatehouse and W. Noble, Int. J. Audiol. 43, 85–99 (2004)] are concerned with speech understanding in a variety of backgrounds, both speech and nonspeech. To study if this self-report data reflected informational masking, previously collected data on 414 people were analyzed. The lowest scores (greatest difficulties) were found for the two items in which there were two speech targets, with successively ...

  17. Illustrated Speech Anatomy.

    Science.gov (United States)

    Shearer, William M.

    Written for students in the fields of speech correction and audiology, the text deals with the following: structures involved in respiration; the skeleton and the processes of inhalation and exhalation; phonation and pitch, the larynx, and esophageal speech; muscles involved in articulation; muscles involved in resonance; and the anatomy of the…

  18. Speech Entrainment Compensates for Broca's Area Damage

    Science.gov (United States)

    Fridriksson, Julius; Basilakos, Alexandra; Hickok, Gregory; Bonilha, Leonardo; Rorden, Chris

    2015-01-01

    Speech entrainment (SE), the online mimicking of an audiovisual speech model, has been shown to increase speech fluency in patients with Broca's aphasia. However, not all individuals with aphasia benefit from SE. The purpose of this study was to identify patterns of cortical damage that predict a positive response to SE's fluency-inducing effects. Forty-four chronic patients with left hemisphere stroke (15 female) were included in this study. Participants completed two tasks: 1) spontaneous speech production, and 2) audiovisual SE. Number of different words per minute was calculated as a speech output measure for each task, with the difference between SE and spontaneous speech conditions yielding a measure of fluency improvement. Voxel-wise lesion-symptom mapping (VLSM) was used to relate the number of different words per minute for spontaneous speech, SE, and SE-related improvement to patterns of brain damage in order to predict lesion locations associated with the fluency-inducing response to speech entrainment. Individuals with Broca's aphasia demonstrated a significant increase in different words per minute during speech entrainment versus spontaneous speech. A similar pattern of improvement was not seen in patients with other types of aphasia. VLSM analysis revealed damage to the inferior frontal gyrus predicted this response. Results suggest that SE exerts its fluency-inducing effects by providing a surrogate target for speech production via internal monitoring processes. Clinically, these results add further support for the use of speech entrainment to improve speech production and may help select patients for speech entrainment treatment. PMID:25989443
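
    The voxel-wise lesion-symptom mapping (VLSM) step referred to above compares, at every voxel, the behavioral scores of patients whose lesions include that voxel against those whose lesions do not. The Python sketch below shows that comparison in its simplest form; the function name, the minimum group size and the use of Welch's t-test are assumptions, and real analyses add covariates and multiple-comparison correction.

        # Minimal VLSM-style map: per-voxel comparison of behavioral scores between
        # lesioned and spared patients. Covariates and corrections omitted.
        import numpy as np
        from scipy.stats import ttest_ind

        def vlsm_tmap(lesion_masks: np.ndarray, scores: np.ndarray, min_n: int = 5) -> np.ndarray:
            """lesion_masks: (n_patients, n_voxels) binary; scores: (n_patients,). Returns per-voxel t-values."""
            n_patients, n_voxels = lesion_masks.shape
            tmap = np.full(n_voxels, np.nan)
            for v in range(n_voxels):
                lesioned = scores[lesion_masks[:, v] == 1]
                spared = scores[lesion_masks[:, v] == 0]
                if len(lesioned) >= min_n and len(spared) >= min_n:
                    tmap[v], _ = ttest_ind(spared, lesioned, equal_var=False)
                    # positive t: damage at this voxel goes with lower scores
            return tmap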

  19. Establishment and characterization of novel epithelial-like cell lines derived from human periodontal ligament tissue in vitro.

    Science.gov (United States)

    Tansriratanawong, Kallapat; Ishikawa, Hiroshi; Toyomura, Junko; Sato, Soh

    2017-10-01

    In this study, novel human-derived epithelial-like cell (hEPLC) lines were established from periodontal ligament (PDL) tissues, which are composed of a variety of cell types and exhibit complex cellular activities. To elucidate the putative features distinguishing these from epithelial rests of Malassez (ERM), we characterized hEPLCs based on cell lineage markers and tight junction protein expression. The aim of this study was, therefore, to establish and characterize hEPLC lines from PDL tissues. The hEPLCs were isolated from the PDL of third molar teeth. Cellular morphology and cell organelles were observed thoroughly. Epithelial-, endothelial- and mesenchymal-like characteristics were compared with those of ERM and human umbilical-vein endothelial cells (HUVECs) using several markers assessed by gene expression and immunofluorescence. The resistance between cellular junctions was assessed by transepithelial electrical resistance, and inflammatory cytokines were detected by ELISA after infecting hEPLCs with periodontopathic bacteria. The hEPLCs developed into small epithelial-like cells with a pavement-like appearance similar to ERM. However, gene expression patterns and immunofluorescence results were different from ERM and HUVECs, especially for tight junction markers (Claudin, ZO-1, and Occludins) and endothelial markers (vWF, CD34). The transepithelial electrical resistance was higher in hEPLCs than in ERM. Periodontopathic bacteria were phagocytosed, with upregulation of inflammatory cytokine secretion within 24 h. In conclusion, hEPLCs derived using the single-cell isolation method formed tight multilayered colonies and strongly expressed tight junction markers at the gene expression and immunofluorescence levels. The novel hEPLC lines behaved differently from ERM and might provide specific functions such as metabolic exchange and defense against bacterial invasion in periodontal tissue.

  20. Re-evaluating the treatment of acute optic neuritis.

    Science.gov (United States)

    Bennett, Jeffrey L; Nickerson, Molly; Costello, Fiona; Sergott, Robert C; Calkwood, Jonathan C; Galetta, Steven L; Balcer, Laura J; Markowitz, Clyde E; Vartanian, Timothy; Morrow, Mark; Moster, Mark L; Taylor, Andrew W; Pace, Thaddeus W W; Frohman, Teresa; Frohman, Elliot M

    2015-07-01

    Clinical case reports and prospective trials have demonstrated a reproducible benefit of hypothalamic-pituitary-adrenal (HPA) axis modulation on the rate of recovery from acute inflammatory central nervous system (CNS) demyelination. As a result, corticosteroid preparations and adrenocorticotrophic hormones are the current mainstays of therapy for the treatment of acute optic neuritis (AON) and acute demyelination in multiple sclerosis. Despite facilitating the pace of recovery, HPA axis modulation and corticosteroids have failed to demonstrate long-term benefit on functional recovery. After AON, patients frequently report visual problems, motion perception difficulties and abnormal depth perception despite 'normal' (20/20) vision. In light of this disparity, the efficacy of these and other therapies for acute demyelination requires re-evaluation using modern, high-precision paraclinical tools capable of monitoring tissue injury. In no arena is this more amenable than AON, where a new array of tools in retinal imaging and electrophysiology has advanced our ability to measure the anatomic and functional consequences of optic nerve injury. As a result, AON provides a unique clinical model for evaluating the treatment response of the derivative elements of acute inflammatory CNS injury: demyelination, axonal injury and neuronal degeneration. In this article, we examine current thinking on the mechanisms of immune injury in AON, discuss novel technologies for the assessment of optic nerve structure and function, and assess current and future treatment modalities. The primary aim is to develop a framework for rigorously evaluating interventions in AON and to assess their ability to preserve tissue architecture, re-establish normal physiology and restore optimal neurological function. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  1. Patterns of poststroke brain damage that predict speech production errors in apraxia of speech and aphasia dissociate.

    Science.gov (United States)

    Basilakos, Alexandra; Rorden, Chris; Bonilha, Leonardo; Moser, Dana; Fridriksson, Julius

    2015-06-01

    Acquired apraxia of speech (AOS) is a motor speech disorder caused by brain damage. AOS often co-occurs with aphasia, a language disorder in which patients may also demonstrate speech production errors. The overlap of speech production deficits in both disorders has raised questions on whether AOS emerges from a unique pattern of brain damage or as a subelement of the aphasic syndrome. The purpose of this study was to determine whether speech production errors in AOS and aphasia are associated with distinctive patterns of brain injury. Forty-three patients with history of a single left-hemisphere stroke underwent comprehensive speech and language testing. The AOS Rating Scale was used to rate speech errors specific to AOS versus speech errors that can also be associated with both AOS and aphasia. Localized brain damage was identified using structural magnetic resonance imaging, and voxel-based lesion-impairment mapping was used to evaluate the relationship between speech errors specific to AOS, those that can occur in AOS or aphasia, and brain damage. The pattern of brain damage associated with AOS was most strongly associated with damage to cortical motor regions, with additional involvement of somatosensory areas. Speech production deficits that could be attributed to AOS or aphasia were associated with damage to the temporal lobe and the inferior precentral frontal regions. AOS likely occurs in conjunction with aphasia because of the proximity of the brain areas supporting speech and language, but the neurobiological substrate for each disorder differs. © 2015 American Heart Association, Inc.

  2. A NOVEL APPROACH TO STUTTERED SPEECH CORRECTION

    Directory of Open Access Journals (Sweden)

    Alim Sabur Ajibola

    2016-06-01

    Full Text Available Stuttered speech is dysfluency-rich speech, more prevalent in males than females. It has been associated with insufficient air pressure or poor articulation, even though the root causes are more complex. The primary features include prolonged speech and repetitive speech, while some of its secondary features include anxiety, fear, and shame. This study used LPC analysis and synthesis algorithms to reconstruct the stuttered speech. The results were evaluated using cepstral distance, Itakura-Saito distance, mean square error, and likelihood ratio. These measures implied perfect speech reconstruction quality. ASR was used for further testing, and the results showed that all the reconstructed speech samples were perfectly recognized while only three samples of the original speech were perfectly recognized.
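
    For context on the signal-processing core, LPC analysis-resynthesis whitens a frame with the prediction-error filter A(z) and regenerates it through the all-pole filter 1/A(z). The Python sketch below is a toy single-frame round trip using the autocorrelation (Levinson-Durbin) method; the synthetic frame, the model order and the helper names are assumptions, and the study's frame-by-frame processing of recorded stuttered speech and its evaluation metrics are not reproduced.

        # Toy LPC analysis-resynthesis on one synthetic frame.
        import numpy as np
        from scipy.signal import lfilter

        def lpc_coefficients(frame: np.ndarray, order: int) -> np.ndarray:
            """Autocorrelation-method LPC via Levinson-Durbin; returns [1, a1, ..., ap]."""
            r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
            a = np.zeros(order + 1)
            a[0] = 1.0
            err = r[0]
            for i in range(1, order + 1):
                k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
                a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]
                err *= (1.0 - k * k)
            return a

        fs = 8000
        t = np.arange(int(0.03 * fs)) / fs
        frame = np.sin(2 * np.pi * 200 * t) + 0.1 * np.random.randn(len(t))
        a = lpc_coefficients(frame, order=10)
        residual = lfilter(a, [1.0], frame)       # analysis: whiten the frame
        resynth = lfilter([1.0], a, residual)     # synthesis: all-pole reconstruction
        print(np.max(np.abs(frame - resynth)))    # near zero for this lossless round trip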

  3. Study on therapy of 188Re labelled stannic sulfur suspension in nude mice bearing human colon tumor

    International Nuclear Information System (INIS)

    Li Huiyuan; Wu Yuanfang; Dong Mo

    2003-01-01

    The therapeutic effect, tissue distribution and stability of 188Re-labelled stannic sulfur suspension are studied in nude mice bearing human colon tumor after injection. The tissues are observed with an electron microscope. The results show that 188Re-labelled stannic sulfur suspension is stabilized in the tumor and its inhibitory effects on human colon tumor cells are obvious. 188Re-labelled stannic sulfur suspension is a potential radiopharmaceutical for the therapy of human tumors

  4. A Randomized Controlled Trial for Children with Childhood Apraxia of Speech Comparing Rapid Syllable Transition Treatment and the Nuffield Dyspraxia Programme-Third Edition

    Science.gov (United States)

    Murray, Elizabeth; McCabe, Patricia; Ballard, Kirrie J.

    2015-01-01

    Purpose: This randomized controlled trial compared the experimental Rapid Syllable Transition (ReST) treatment to the Nuffield Dyspraxia Programme-Third Edition (NDP3; Williams & Stephens, 2004), used widely in clinical practice in Australia and the United Kingdom. Both programs aim to improve speech motor planning/programming for children…

  5. Prisoner Fasting as Symbolic Speech: The Ultimate Speech-Action Test.

    Science.gov (United States)

    Sneed, Don; Stonecipher, Harry W.

    The ultimate test of the speech-action dichotomy, as it relates to symbolic speech to be considered by the courts, may be the fasting of prison inmates who use hunger strikes to protest the conditions of their confinement or to make political statements. While hunger strikes have been utilized by prisoners for years as a means of protest, it was…

  6. Childhood apraxia of speech and multiple phonological disorders in Cairo-Egyptian Arabic speaking children: language, speech, and oro-motor differences.

    Science.gov (United States)

    Aziz, Azza Adel; Shohdi, Sahar; Osman, Dalia Mostafa; Habib, Emad Iskander

    2010-06-01

    Childhood apraxia of speech is a neurological childhood speech-sound disorder in which the precision and consistency of the movements underlying speech are impaired in the absence of neuromuscular deficits. Children with childhood apraxia of speech and those with multiple phonological disorder share some common phonological errors that can be misleading in diagnosis. This study asked whether there are significant differences in language, speech and non-speech oral performance between children with childhood apraxia of speech, children with multiple phonological disorder and typically developing children that could be used for differential diagnosis. Thirty pre-school children between the ages of 4 and 6 years served as participants, each belonging to one of three groups: Group 1, multiple phonological disorder; Group 2, suspected childhood apraxia of speech; Group 3, controls with no communication disorder. Assessment procedures included parent interviews, testing of non-speech oral motor skills and testing of speech skills. The data showed that children with suspected childhood apraxia of speech had significantly lower language scores only for their expressive abilities. Non-speech tasks did not reveal significant differences between the childhood apraxia of speech and multiple phonological disorder groups, except for those requiring two sequential motor performances. In speech tasks, both consonant and vowel accuracy were significantly lower and more inconsistent in the childhood apraxia of speech group than in the multiple phonological disorder group. Syllable number, shape and sequence accuracy also differed significantly in the childhood apraxia of speech group compared with the other two groups. In addition, children with childhood apraxia of speech showed greater difficulty in processing prosodic features, indicating a clear need to address these variables in the differential diagnosis and treatment of children with childhood apraxia of speech.

  7. Collective speech acts

    NARCIS (Netherlands)

    Meijers, A.W.M.; Tsohatzidis, S.L.

    2007-01-01

    From its early development in the 1960s, speech act theory always had an individualistic orientation. It focused exclusively on speech acts performed by individual agents. Paradigmatic examples are ‘I promise that p’, ‘I order that p’, and ‘I declare that p’. There is a single speaker and a single

  8. Dosimetric benefit of adaptive re-planning in pancreatic cancer stereotactic body radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yongbao [Department of Engineering Physics, Tsinghua University, Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing (China); Center for Advanced Radiotherapy Technologies University of California San Diego, La Jolla, CA (United States); Department of Radiation Oncology, University of California San Diego, La Jolla, CA (United States); Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX (United States); Hoisak, Jeremy D.P.; Li, Nan; Jiang, Carrie [Center for Advanced Radiotherapy Technologies University of California San Diego, La Jolla, CA (United States); Department of Radiation Oncology, University of California San Diego, La Jolla, CA (United States); Tian, Zhen [Center for Advanced Radiotherapy Technologies University of California San Diego, La Jolla, CA (United States); Department of Radiation Oncology, University of California San Diego, La Jolla, CA (United States); Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX (United States); Gautier, Quentin; Zarepisheh, Masoud [Center for Advanced Radiotherapy Technologies University of California San Diego, La Jolla, CA (United States); Department of Radiation Oncology, University of California San Diego, La Jolla, CA (United States); Wu, Zhaoxia; Liu, Yaqiang [Department of Engineering Physics, Tsinghua University, Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Beijing (China); Jia, Xun [Center for Advanced Radiotherapy Technologies University of California San Diego, La Jolla, CA (United States); Department of Radiation Oncology, University of California San Diego, La Jolla, CA (United States); Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX (United States); and others

    2015-01-01

    Stereotactic body radiotherapy (SBRT) shows promise in unresectable pancreatic cancer, though this treatment modality has high rates of normal tissue toxicity. This study explores the dosimetric utility of daily adaptive re-planning with pancreas SBRT. We used a previously developed supercomputing online re-planning environment (SCORE) to re-plan 10 patients with pancreas SBRT. Tumor and normal tissue contours were deformed from treatment planning computed tomographies (CTs) and transferred to daily cone-beam CT (CBCT) scans before re-optimizing each daily treatment plan. We compared the intended radiation dose, the actual radiation dose, and the optimized radiation dose for the pancreas tumor planning target volume (PTV) and the duodenum. Treatment re-optimization improved coverage of the PTV and reduced dose to the duodenum. Within the PTV, the actual hot spot (volume receiving 110% of the prescription dose) decreased from 4.5% to 0.5% after daily adaptive re-planning. Within the duodenum, the volume receiving the prescription dose decreased from 0.9% to 0.3% after re-planning. It is noteworthy that variation in the amount of air within a patient's stomach substantially changed dose to the PTV. Adaptive re-planning with pancreas SBRT has the ability to improve dose to the tumor and decrease dose to the nearby duodenum, thereby reducing the risk of toxicity.
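
    The dose-volume numbers quoted above (the PTV volume receiving 110% of the prescription, the duodenum volume receiving the prescription dose) are simple threshold statistics over a dose grid restricted to a structure mask. Below is a minimal sketch, assuming a 3-D dose array in Gy and boolean masks on the same grid; the synthetic arrays and the 33 Gy prescription value are placeholders, not data from the study.

        import numpy as np

        def percent_volume_above(dose, mask, threshold):
            """Percentage of the masked structure receiving at least `threshold` Gy."""
            structure_dose = dose[mask]
            return 100.0 * np.count_nonzero(structure_dose >= threshold) / structure_dose.size

        # Synthetic stand-ins for a planning-system export (dose grid + structure masks).
        rng = np.random.default_rng(0)
        dose = rng.uniform(0.0, 40.0, size=(60, 60, 40))        # Gy
        ptv_mask = np.zeros(dose.shape, dtype=bool)
        ptv_mask[25:35, 25:35, 15:25] = True
        duodenum_mask = np.zeros(dose.shape, dtype=bool)
        duodenum_mask[20:25, 25:35, 15:25] = True

        prescription = 33.0                                      # Gy, hypothetical SBRT prescription
        print("PTV V110%% = %.1f%%" % percent_volume_above(dose, ptv_mask, 1.1 * prescription))
        print("Duodenum V_Rx = %.1f%%" % percent_volume_above(dose, duodenum_mask, prescription))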

  9. Commencement Speech as a Hybrid Polydiscursive Practice

    Directory of Open Access Journals (Sweden)

    Светлана Викторовна Иванова

    2017-12-01

    Full Text Available Discourse and media communication researchers have noted that popular discursive and communicative practices tend towards hybridization and convergence. Discourse, understood as language in use, is flexible; consequently, one and the same text can represent several types of discourse. A vivid example of this tendency is the American commencement speech (also called a commencement address or graduation speech). A commencement speech is a speech addressed to university graduates and, in keeping with the modern trend, is delivered by outstanding media personalities (politicians, athletes, actors, etc.). The objective of this study is to define how polydiscursive practices are realized within the commencement speech. The research involves discursive, contextual, stylistic and definitional analyses. Methodologically the study is based on discourse analysis theory; in particular, the notion of a discursive practice as a verbalized social practice forms the conceptual basis of the research. The study draws upon a hundred commencement speeches delivered by prominent representatives of American society from the 1980s to the present. In brief, commencement speech belongs to the institutional discourse that public speech embodies. Its institutional parameters are well represented in speeches delivered by people in power, such as American presidents and university presidents. Nevertheless, as the results of the research indicate, the institutional character of commencement speech is not its only feature. Conceptual information analysis makes it possible to assign commencement speech to didactic discourse as well, since it aims to teach university graduates how to deal with the challenges life is rich in. Discursive practices of personal discourse are also actively integrated into commencement speech discourse, and existential discursive practices likewise find their way into the discourse under study…

  10. The effectiveness of Speech-Music Therapy for Aphasia (SMTA) in five speakers with Apraxia of Speech and aphasia

    NARCIS (Netherlands)

    Hurkmans, Joost; Jonkers, Roel; de Bruijn, Madeleen; Boonstra, Anne M.; Hartman, Paul P.; Arendzen, Hans; Reinders - Messelink, Heelen

    2015-01-01

    Background: Several studies using musical elements in the treatment of neurological language and speech disorders have reported improvement of speech production. One such programme, Speech-Music Therapy for Aphasia (SMTA), integrates speech therapy and music therapy (MT) to treat the individual with

  11. Current trends in multilingual speech processing

    Indian Academy of Sciences (India)

    2016-08-26

    ; speech-to-speech translation; language identification. ... interest owing to two strong driving forces. Firstly, technical advances in speech recognition and synthesis are posing new challenges and opportunities to researchers.

  12. Do long-term tongue piercings affect speech quality?

    Science.gov (United States)

    Heinen, Esther; Birkholz, Peter; Willmes, Klaus; Neuschaefer-Rube, Christiane

    2017-10-01

    To explore possible effects of tongue piercing on perceived speech quality. Using a quasi-experimental design, we analyzed the effect of tongue piercing on speech in a perception experiment. Samples of spontaneous speech and read speech were recorded from 20 long-term pierced and 20 non-pierced individuals (10 males, 10 females each). The individuals with a tongue piercing were recorded both with the piercing in place and with it removed. The audio samples were blindly rated by 26 female and 20 male laypersons and by 5 female speech-language pathologists with regard to perceived speech quality along 5 dimensions: speech clarity, speech rate, prosody, rhythm and fluency. We found no statistically significant differences in any of the speech quality dimensions between the pierced and non-pierced individuals, for either read or spontaneous speech. In addition, neither length nor position of the piercing had a significant effect on speech quality. The removal of tongue piercings had no effect on speech performance either. Rating differences between laypersons and speech-language pathologists were not dependent on the presence of a tongue piercing. People are able to adapt their articulation to long-term tongue piercings so completely that their speech quality is not perceptually affected.

  13. Patterns of Post-Stroke Brain Damage that Predict Speech Production Errors in Apraxia of Speech and Aphasia Dissociate

    Science.gov (United States)

    Basilakos, Alexandra; Rorden, Chris; Bonilha, Leonardo; Moser, Dana; Fridriksson, Julius

    2015-01-01

    Background and Purpose Acquired apraxia of speech (AOS) is a motor speech disorder caused by brain damage. AOS often co-occurs with aphasia, a language disorder in which patients may also demonstrate speech production errors. The overlap of speech production deficits in both disorders has raised questions regarding whether AOS emerges from a unique pattern of brain damage or as a sub-element of the aphasic syndrome. The purpose of this study was to determine whether speech production errors in AOS and aphasia are associated with distinctive patterns of brain injury. Methods Forty-three patients with a history of a single left-hemisphere stroke underwent comprehensive speech and language testing. The Apraxia of Speech Rating Scale was used to rate speech errors specific to AOS versus speech errors that can also be associated with AOS and/or aphasia. Localized brain damage was identified using structural MRI, and voxel-based lesion-impairment mapping was used to evaluate the relationship between speech errors specific to AOS, those that can occur in AOS and/or aphasia, and brain damage. Results The pattern of brain damage associated with AOS was most strongly associated with damage to cortical motor regions, with additional involvement of somatosensory areas. Speech production deficits that could be attributed to AOS and/or aphasia were associated with damage to the temporal lobe and the inferior pre-central frontal regions. Conclusion AOS likely occurs in conjunction with aphasia due to the proximity of the brain areas supporting speech and language, but the neurobiological substrate for each disorder differs. PMID:25908457

  14. Adaptive plan selection vs. re-optimisation in radiotherapy for bladder cancer: A dose accumulation comparison

    International Nuclear Information System (INIS)

    Vestergaard, Anne; Muren, Ludvig Paul; Søndergaard, Jimmi; Elstrøm, Ulrik Vindelev; Høyer, Morten; Petersen, Jørgen B.

    2013-01-01

    Purpose: Patients with urinary bladder cancer are obvious candidates for adaptive radiotherapy (ART) due to large inter-fractional variation in bladder volumes. In this study we have compared the normal tissue sparing potential of two ART strategies: daily plan selection (PlanSelect) and daily plan re-optimisation (ReOpt). Materials and methods: Seven patients with bladder cancer were included in the study. For the PlanSelect strategy, a patient-specific library of three plans was generated, and the most suitable plan based on the pre-treatment cone beam CT (CBCT) was selected. For the daily ReOpt strategy, plans were re-optimised based on the CBCT from each daily fraction. Bladder contours were propagated to the CBCT scan using deformable image registration (DIR). Accumulated dose distributions for the ART strategies as well as the non-adaptive RT were calculated. Results: A considerable sparing of normal tissue was achieved with both ART approaches, with ReOpt being the superior technique. Compared to non-adaptive RT, the volume receiving more than 57 Gy (corresponding to 95% of the prescribed dose) was reduced to 66% (range 48–100%) for PlanSelect and to 41% (range 33–50%) for ReOpt. Conclusion: This study demonstrated a considerable normal tissue sparing potential of ART for bladder irradiation, with clearly superior results by daily adaptive re-optimisation

  15. Progressive apraxia of speech as a window into the study of speech planning processes.

    Science.gov (United States)

    Laganaro, Marina; Croisier, Michèle; Bagou, Odile; Assal, Frédéric

    2012-09-01

    We present a 3-year follow-up study of a patient with progressive apraxia of speech (PAoS), aimed at investigating whether the theoretical organization of phonetic encoding is reflected in the progressive disruption of speech. As decreased speech rate was the most striking pattern of disruption during the first 2 years, durational analyses were carried out longitudinally on syllables excised from spontaneous, repetition and reading speech samples. The crucial result of the present study is the demonstration of an effect of syllable frequency on duration: the progressive disruption of articulation rate did not affect all syllables in the same way, but followed a gradient that was a function of the frequency of use of syllable-sized motor programs. The combination of data from this case of PAoS with previous psycholinguistic and neurolinguistic data points to a frequency organization of syllable-sized speech-motor plans. In this study we also illustrate how studying PAoS can be exploited in theoretical and clinical investigations of phonetic encoding, as it represents a unique opportunity to investigate speech as it progressively breaks down. Copyright © 2011 Elsevier Srl. All rights reserved.

  16. The Contribution of Cognitive Factors to Individual Differences in Understanding Noise-Vocoded Speech in Young and Older Adults

    Directory of Open Access Journals (Sweden)

    Stephanie Rosemann

    2017-06-01

    Full Text Available Noise-vocoded speech is commonly used to simulate the sensation after cochlear implantation, as it consists of spectrally degraded speech. High individual variability exists in learning to understand both noise-vocoded speech and speech perceived through a cochlear implant (CI). This variability is partly ascribed to differing cognitive abilities like working memory, verbal skills or attention. Although clinically highly relevant, up to now no consensus has been achieved about which cognitive factors exactly predict the intelligibility of speech in noise-vocoded situations in healthy subjects or in patients after cochlear implantation. We aimed to establish a test battery that can be used to predict speech understanding in patients prior to receiving a CI. Young and old healthy listeners completed a noise-vocoded speech test in addition to cognitive tests tapping verbal memory, working memory, lexicon and retrieval skills, as well as cognitive flexibility and attention. Partial-least-squares analysis revealed that six variables significantly predicted vocoded-speech performance. These were the ability to perceive visually degraded speech tested by the Text Reception Threshold, vocabulary size assessed with the Multiple Choice Word Test, working memory gauged with the Operation Span Test, verbal learning and recall of the Verbal Learning and Retention Test, and task-switching abilities tested by the Comprehensive Trail-Making Test. Thus, these cognitive abilities explain individual differences in noise-vocoded speech understanding and should be considered when aiming to predict hearing-aid outcome.
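
    For readers unfamiliar with the statistics, the partial-least-squares analysis mentioned above regresses the vocoded-speech score on latent components built from the cognitive predictors. The sketch below shows the general shape of such an analysis with scikit-learn; the synthetic data, the six predictor labels and the choice of two components are assumptions for illustration, not the study's model.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n = 40
        # Hypothetical predictors: TRT, vocabulary, operation span, verbal recall, trail-making, age.
        X = rng.normal(size=(n, 6))
        y = X @ np.array([0.5, 0.4, 0.6, 0.3, -0.4, -0.2]) + rng.normal(scale=0.5, size=n)

        pls = PLSRegression(n_components=2)
        r2_cv = cross_val_score(pls, X, y, cv=5, scoring="r2")   # cross-validated fit quality
        pls.fit(X, y)
        print("cross-validated R^2: %.2f" % r2_cv.mean())
        print("predictor weights (component 1):", np.round(pls.x_weights_[:, 0], 2))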

  17. Musicians do not benefit from differences in fundamental frequency when listening to speech in competing speech backgrounds

    DEFF Research Database (Denmark)

    Madsen, Sara Miay Kim; Whiteford, Kelly L.; Oxenham, Andrew J.

    2017-01-01

    Recent studies disagree on whether musicians have an advantage over non-musicians in understanding speech in noise. However, it has been suggested that musicians may be able to use differences in fundamental frequency (F0) to better understand target speech in the presence of interfering talkers....... Here we studied a relatively large (N=60) cohort of young adults, equally divided between non-musicians and highly trained musicians, to test whether the musicians were better able to understand speech either in noise or in a two-talker competing speech masker. The target speech and competing speech...... were presented with either their natural F0 contours or on a monotone F0, and the F0 difference between the target and masker was systematically varied. As expected, speech intelligibility improved with increasing F0 difference between the target and the two-talker masker for both natural and monotone...

  18. Cutaneous collateral axonal sprouting re-innervates the skin component and restores sensation of denervated Swine osteomyocutaneous alloflaps.

    Directory of Open Access Journals (Sweden)

    Zuhaib Ibrahim

    Full Text Available Reconstructive transplantation such as extremity and face transplantation is a viable treatment option for select patients with devastating tissue loss. Sensorimotor recovery is a critical determinant of overall success of such transplants. Although motor function recovery has been extensively studied, mechanisms of sensory re-innervation are not well established. Recent clinical reports of face transplants confirm progressive sensory improvement even in cases where optimal repair of sensory nerves was not achieved. Two forms of sensory nerve regeneration are known. In regenerative sprouting, axonal outgrowth occurs from the transected nerve stump, while in collateral sprouting, reinnervation of denervated tissue occurs through growth of uninjured axons into the denervated tissue. The latter mechanism may be more important in settings where transected sensory nerves cannot be re-apposed. In this study, denervated osteomyocutaneous alloflaps (hind-limb transplants) from Major Histocompatibility Complex (MHC)-defined MGH miniature swine were performed to specifically evaluate collateral axonal sprouting for cutaneous sensory re-innervation. The skin component of the flap was externalized and serial skin sections extending from native skin to the grafted flap were biopsied. In order to visualize regenerating axonal structures in the dermis and epidermis, 50 µm frozen sections were immunostained against axonal and Schwann cell markers. In all alloflaps, collateral axonal sprouts from adjacent recipient skin extended into the denervated skin component along the dermal-epidermal junction, from the periphery towards the center. On day 100 post-transplant, regenerating sprouts reached 0.5 cm into the flap centripetally. Eight months following transplant, epidermal fibers were visualized 1.5 cm from the margin (rate of regeneration 0.06 mm per day). All animals had pinprick sensation in the periphery of the transplanted skin within 3 months post

  19. Novel Techniques for Dialectal Arabic Speech Recognition

    CERN Document Server

    Elmahdy, Mohamed; Minker, Wolfgang

    2012-01-01

    Novel Techniques for Dialectal Arabic Speech describes approaches to improve automatic speech recognition for dialectal Arabic. Since speech resources for dialectal Arabic speech recognition are very sparse, the authors describe how existing Modern Standard Arabic (MSA) speech data can be applied to dialectal Arabic speech recognition, while assuming that MSA is always a second language for all Arabic speakers. In this book, Egyptian Colloquial Arabic (ECA) has been chosen as a typical Arabic dialect. ECA is the first ranked Arabic dialect in terms of number of speakers, and a high quality ECA speech corpus with accurate phonetic transcription has been collected. MSA acoustic models were trained using news broadcast speech. In order to cross-lingually use MSA in dialectal Arabic speech recognition, the authors have normalized the phoneme sets for MSA and ECA. After this normalization, they have applied state-of-the-art acoustic model adaptation techniques like Maximum Likelihood Linear Regression (MLLR) and M...

  20. Speech and Communication Disorders

    Science.gov (United States)

    ... to being completely unable to speak or understand speech. Causes include: hearing disorders and deafness; voice problems, ... or those caused by cleft lip or palate; speech problems like stuttering; developmental disabilities; learning disorders; autism ...

  1. Speech of people with autism: Echolalia and echolalic speech

    OpenAIRE

    Błeszyński, Jacek Jarosław

    2013-01-01

    Speech of people with autism is recognised as one of the basic diagnostic, therapeutic and theoretical problems. One of the most common symptoms of autism in children is echolalia, described here as being of different types and severity. This paper presents the results of studies into different levels of echolalia, both in normally developing children and in children diagnosed with autism, discusses the differences between simple echolalia and echolalic speech - which can be considered to b...

  2. Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study.

    Science.gov (United States)

    Kumar, G Vinodh; Halder, Tamesh; Jaiswal, Amit K; Mukherjee, Abhishek; Roy, Dipanjan; Banerjee, Arpan

    2016-01-01

    ... study indicates that the temporal integration underlying multisensory speech perception needs to be understood within the framework of large-scale functional brain network mechanisms, in addition to the established cortical loci of multisensory speech perception.

  3. A Diagnostic Marker to Discriminate Childhood Apraxia of Speech from Speech Delay: Introduction

    Science.gov (United States)

    Shriberg, Lawrence D.; Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.

    2017-01-01

    Purpose: The goal of this article is to introduce the pause marker (PM), a single-sign diagnostic marker proposed to discriminate early or persistent childhood apraxia of speech (CAS) from speech delay.

  4. A speech production model including the nasal Cavity: A novel approach to articulatory analysis of speech signals

    DEFF Research Database (Denmark)

    Olesen, Morten

    In order to obtain an articulatory analysis of speech production, the model is improved. The standard model, as used in LPC analysis, to a large extent only models the acoustic properties of the speech signal, as opposed to articulatory modelling of speech production. In spite of this, the LPC model...... is by far the most widely used model in speech technology....

  5. Successful and rapid response of speech bulb reduction program combined with speech therapy in velopharyngeal dysfunction: a case report.

    Science.gov (United States)

    Shin, Yu-Jeong; Ko, Seung-O

    2015-12-01

    Velopharyngeal dysfunction in cleft palate patients following primary palate repair may result in nasal air emission, hypernasality, articulation disorder and poor intelligibility of speech. Among conservative treatment methods, a speech aid prosthesis combined with speech therapy is widely used. However, because treatment typically takes more than a year and its outcome is difficult to predict, some clinicians prefer surgical intervention. The purpose of this report is therefore to draw attention to the effectiveness of the speech aid prosthesis by presenting a case that was treated successfully. In this clinical report, a speech bulb reduction program with intensive speech therapy was applied to a patient with velopharyngeal dysfunction, and treatment was completed within 5 months, an unusually short period for speech aid therapy. The advantages of pre-operative speech aid therapy are also discussed.

  6. Speech Intelligibility Evaluation for Mobile Phones

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Cubick, Jens; Dau, Torsten

    2015-01-01

    In the development process of modern telecommunication systems, such as mobile phones, it is common practice to use computer models to objectively evaluate the transmission quality of the system, instead of time-consuming perceptual listening tests. Such models have typically focused on the quality...... of the transmitted speech, while little or no attention has been paid to speech intelligibility. The present study investigated to what extent three state-of-the-art speech intelligibility models could predict the intelligibility of noisy speech transmitted through mobile phones. Sentences from the Danish...... Dantale II speech material were mixed with three different kinds of background noise, transmitted through three different mobile phones, and recorded at the receiver via a local network simulator. The speech intelligibility of the transmitted sentences was assessed by six normal-hearing listeners...

  7. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study.

    Directory of Open Access Journals (Sweden)

    Alena Galilee

    Full Text Available Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age.

  8. Atypical speech versus non-speech detection and discrimination in 4- to 6- yr old children with autism spectrum disorder: An ERP study.

    Science.gov (United States)

    Galilee, Alena; Stefanidou, Chrysi; McCleery, Joseph P

    2017-01-01

    Previous event-related potential (ERP) research utilizing oddball stimulus paradigms suggests diminished processing of speech versus non-speech sounds in children with an Autism Spectrum Disorder (ASD). However, brain mechanisms underlying these speech processing abnormalities, and to what extent they are related to poor language abilities in this population remain unknown. In the current study, we utilized a novel paired repetition paradigm in order to investigate ERP responses associated with the detection and discrimination of speech and non-speech sounds in 4- to 6-year old children with ASD, compared with gender and verbal age matched controls. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited N330 match/mismatch responses measured from temporal electrodes, reflecting speech versus non-speech detection, bilaterally, whereas children with ASD exhibited this effect only over temporal electrodes in the left hemisphere. Furthermore, while the control groups exhibited match/mismatch effects at approximately 600 ms (central N600, temporal P600) when a non-speech sound was followed by a speech sound, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Together, these results demonstrate the presence of atypical speech versus non-speech processing in children with ASD when compared with typically developing children matched on verbal age.

  9. Radiological evaluation of esophageal speech on total laryngectomee

    International Nuclear Information System (INIS)

    Chung, Tae Sub; Suh, Jung Ho; Kim, Dong Ik; Kim, Gwi Eon; Hong, Won Phy; Lee, Won Sang

    1988-01-01

    A total laryngectomee requires some form of alaryngeal speech for communication. Generally, esophageal speech is regarded as the most available and comfortable technique for alaryngeal speech. But esophageal speech is difficult to train, so many patients are unable to attain it for communication. To understand the mechanism of esophageal speech in total laryngectomees, evaluation of anatomical change at the pharyngoesophageal segment is very important. We used video fluoroscopy to evaluate the pharyngoesophageal segment during esophageal speech. Eighteen total laryngectomees were evaluated with video fluoroscopy from Dec. 1986 to May 1987 at Y.U.M.C. Our results were as follows: 1. The pseudoglottis is the most important factor for esophageal speech; it was visualized in 7 of the 8 cases in the excellent esophageal speech group. 2. The two cases with a longer A-P diameter at the pseudoglottis had better quality esophageal speech than the others. 3. Two cases with mucosal vibration at the pharyngoesophageal segment could produce excellent esophageal speech. 4. The causes of failed esophageal speech were poor aerophagia in 6 cases, absence of a pseudoglottis in 4 cases and poor air ejection in 3 cases. 5. Aerophagia synchronized with diaphragmatic motion in 8 cases of excellent esophageal speech.

  10. Automatic Speech Recognition Systems for the Evaluation of Voice and Speech Disorders in Head and Neck Cancer

    Directory of Open Access Journals (Sweden)

    Andreas Maier

    2010-01-01

    Full Text Available In patients suffering from head and neck cancer, speech intelligibility is often restricted. For assessment and outcome measurements, automatic speech recognition systems have previously been shown to be appropriate for objective and quick evaluation of intelligibility. In this study we investigate the applicability of the method to speech disorders caused by head and neck cancer. Intelligibility was quantified by speech recognition on recordings of a standard text read by 41 German laryngectomized patients with cancer of the larynx or hypopharynx and 49 German patients who had suffered from oral cancer. The speech recognizer provides the percentage of correctly recognized words in a sequence, that is, the word recognition rate. Automatic evaluation was compared to perceptual ratings by a panel of experts and to an age-matched control group. Both patient groups showed significantly lower word recognition rates than the control group. Automatic speech recognition yielded word recognition rates that agreed with the experts' evaluation of intelligibility at a significant level. Automatic speech recognition therefore provides a quick, low-effort means of objectifying and quantifying the most important aspect of pathological speech: intelligibility. The system was successfully applied to voice and speech disorders.
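
    The word recognition rate used here as an intelligibility proxy can be computed from a word-level alignment between the reference text and the recognizer output. Below is a minimal sketch, using a plain dynamic-programming edit distance and treating the rate as 100 minus the resulting error rate; the example transcripts are invented, and the study's scoring tool may count errors slightly differently.

        def word_error_counts(reference, hypothesis):
            """Return (edit distance in words, number of reference words)."""
            ref, hyp = reference.split(), hypothesis.split()
            # Dynamic-programming edit distance over words (substitutions, insertions, deletions).
            d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
            for i in range(len(ref) + 1):
                d[i][0] = i
            for j in range(len(hyp) + 1):
                d[0][j] = j
            for i in range(1, len(ref) + 1):
                for j in range(1, len(hyp) + 1):
                    cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                    d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
            return d[len(ref)][len(hyp)], len(ref)

        # Invented reference and recognizer output for illustration.
        errors, n_ref = word_error_counts("der nordwind und die sonne", "der nord wind und sonne")
        word_recognition_rate = 100.0 * (1 - errors / n_ref)
        print("WR = %.1f%%" % word_recognition_rate)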

  11. On speech recognition during anaesthesia

    DEFF Research Database (Denmark)

    Alapetite, Alexandre

    2007-01-01

    This PhD thesis in human-computer interfaces (informatics) studies the case of the anaesthesia record used during medical operations and the possibility to supplement it with speech recognition facilities. Problems and limitations have been identified with the traditional paper-based anaesthesia...... and inaccuracies in the anaesthesia record. Supplementing the electronic anaesthesia record interface with speech input facilities is proposed as one possible solution to a part of the problem. The testing of the various hypotheses has involved the development of a prototype of an electronic anaesthesia record...... interface with speech input facilities in Danish. The evaluation of the new interface was carried out in a full-scale anaesthesia simulator. This has been complemented by laboratory experiments on several aspects of speech recognition for this type of use, e.g. the effects of noise on speech recognition...

  12. 'If they're helping me then how can I be independent?' The perceptions and experience of users of home-care re-ablement services.

    Science.gov (United States)

    Wilde, Alison; Glendinning, Caroline

    2012-11-01

    Home-care re-ablement is a short-term, intensive service that helps people to (re-) establish their capacity and confidence in performing basic personal care and domestic tasks at home, thereby reducing needs for longer term help. Home-care re-ablement is an increasingly common feature of English adult social care services; there are similar service developments in Australia and New Zealand. This paper presents evidence from semi-structured interviews conducted in early 2010 with 34 service users and 10 carers from five established re-ablement services in England. The interviews formed part of a larger, mixed-methods study into the immediate and longer term impacts and cost-effectiveness of home-care re-ablement services. There was clear evidence that interviewees felt that they had benefitted from re-ablement services; most service users and their families valued the intervention. However, the interviews also identified potential barriers to optimal independence for some service users, particularly those with progressive conditions, sensory impairments, specific cultural needs, or who lived alone. The beneficial impacts of re-ablement could also be reduced if users failed to understand the aims of the service, or if the service failed to provide support with activities or outcomes that were particularly important to the service user or carer. Putting the lived experiences of people receiving re-ablement at the centre of analysis, this paper concludes that re-ablement services have the potential for enhanced effectiveness, particularly if there is more understanding of users' own priorities and concepts of independence. © 2012 Blackwell Publishing Ltd.

  13. Re-establishing an ecological discourse in the policy debate over how to value ecosystems and biodiversity.

    Science.gov (United States)

    Spash, Clive L; Aslaksen, Iulie

    2015-08-15

    In this paper we explore the discourses of ecology, environmental economics, new environmental pragmatism and social ecological economics as they relate to the value of ecosystems and biodiversity. Conceptualizing biodiversity and ecosystems as goods and services that can be represented by monetary values in policy processes is an economic discourse increasingly championed by ecologists and conservation biologists. The latter promote internationally a new environmental pragmatism that seeks to hardwire biodiversity and ecosystem services into finance. The approach adopts a narrow instrumentalism, denies value pluralism and incommensurability, and downplays the role of scientific knowledge. Re-establishing an ecological discourse in biodiversity policy implies a crucial role for biophysical indicators as independent policy targets, exemplified in this paper by the Nature Index for Norway. Yet there is a recognisable need to go beyond a traditional ecological approach to one recognising the interconnections of social, ecological and economic problems. This requires reviving and relating to a range of alternative ecologically informed discourses, including an ecofeminist perspective, in order to transform the increasingly dominant and destructive relationship of humans separated from and domineering over Nature. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. 78 FR 63152 - Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and...

    Science.gov (United States)

    2013-10-23

    ...] Telecommunications Relay Services and Speech-to-Speech Services for Individuals With Hearing and Speech Disabilities... for telecommunications relay services (TRS) by eliminating standards for Internet-based relay services... comments, identified by CG Docket No. 03-123, by any of the following methods: Electronic Filers: Comments...

  15. Speech Alarms Pilot Study

    Science.gov (United States)

    Sandor, A.; Moses, H. R.

    2016-01-01

    Currently on the International Space Station (ISS) and other space vehicles, Caution & Warning (C&W) alerts are represented with various auditory tones that correspond to the type of event. This system relies on the crew's ability to remember what each tone represents in a high-stress, high-workload environment when responding to the alert. Furthermore, crew receive training on these alerts a year or more in advance of the mission, which makes remembering the semantic meaning of the alerts more difficult. The current system works for missions conducted close to Earth, where ground operators can assist as needed. On long-duration missions, however, crews will need to handle off-nominal events autonomously. There is evidence that speech alarms may be easier and faster to recognize, especially during an off-nominal event. The Information Presentation Directed Research Project (FY07-FY09), funded by the Human Research Program, included several studies investigating C&W alerts. The studies evaluated tone alerts currently in use with NASA flight deck displays along with candidate speech alerts. A follow-on study used four types of speech alerts to investigate how quickly various types of auditory alerts, with and without a speech component either at the beginning or at the end of the tone, can be identified. Even though crew were familiar with the tone alerts from training or direct mission experience, alerts starting with a speech component were identified faster than alerts starting with a tone. The current study replicated the results from the previous study in a more rigorous experimental design to determine whether the candidate speech alarms are ready for transition to operations or whether more research is needed. Four types of alarms (caution, warning, fire, and depressurization) were presented to participants in both tone and speech formats in laboratory settings and later in the Human Exploration Research Analog (HERA). In the laboratory study, the alerts were presented by software and participants were

  16. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    Science.gov (United States)

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  17. Temporal predictive mechanisms modulate motor reaction time during initiation and inhibition of speech and hand movement.

    Science.gov (United States)

    Johari, Karim; Behroozmand, Roozbeh

    2017-08-01

    Skilled movement is mediated by motor commands executed with extremely fine temporal precision. The question of how the brain incorporates temporal information to perform motor actions has remained unanswered. This study investigated the effect of stimulus temporal predictability on response timing of speech and hand movement. Subjects performed a randomized vowel vocalization or button press task in two counterbalanced blocks in response to temporally-predictable and unpredictable visual cues. Results indicated that speech and hand reaction time was decreased for predictable compared with unpredictable stimuli. This finding suggests that a temporal predictive code is established to capture temporal dynamics of sensory cues in order to produce faster movements in responses to predictable stimuli. In addition, results revealed a main effect of modality, indicating faster hand movement compared with speech. We suggest that this effect is accounted for by the inherent complexity of speech production compared with hand movement. Lastly, we found that movement inhibition was faster than initiation for both hand and speech, suggesting that movement initiation requires a longer processing time to coordinate activities across multiple regions in the brain. These findings provide new insights into the mechanisms of temporal information processing during initiation and inhibition of speech and hand movement. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. A Clinician Survey of Speech and Non-Speech Characteristics of Neurogenic Stuttering

    Science.gov (United States)

    Theys, Catherine; van Wieringen, Astrid; De Nil, Luc F.

    2008-01-01

    This study presents survey data on 58 Dutch-speaking patients with neurogenic stuttering following various neurological injuries. Stroke was the most prevalent cause of stuttering in our patients, followed by traumatic brain injury, neurodegenerative diseases, and other causes. Speech and non-speech characteristics were analyzed separately for…

  19. Automatic Speech Signal Analysis for Clinical Diagnosis and Assessment of Speech Disorders

    CERN Document Server

    Baghai-Ravary, Ladan

    2013-01-01

    Automatic Speech Signal Analysis for Clinical Diagnosis and Assessment of Speech Disorders provides a survey of methods designed to aid clinicians in the diagnosis and monitoring of speech disorders such as dysarthria and dyspraxia, with an emphasis on the signal processing techniques, statistical validity of the results presented in the literature, and the appropriateness of methods that do not require specialized equipment, rigorously controlled recording procedures or highly skilled personnel to interpret results. Such techniques offer the promise of a simple and cost-effective, yet objective, assessment of a range of medical conditions, which would be of great value to clinicians. The ideal scenario would begin with the collection of examples of the clients’ speech, either over the phone or using portable recording devices operated by non-specialist nursing staff. The recordings could then be analyzed initially to aid diagnosis of conditions, and subsequently to monitor the clients’ progress and res...

  20. Temporal modulations in speech and music.

    Science.gov (United States)

    Ding, Nai; Patel, Aniruddh D; Chen, Lin; Butler, Henry; Luo, Cheng; Poeppel, David

    2017-10-01

    Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms, the slow (0.25-32 Hz) temporal modulations in sound intensity, and compare the modulation properties of speech and music. We analyze these modulations using over 25 h of speech and over 39 h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages (including languages with typologically different rhythmic characteristics). A different, but similarly consistent, modulation spectrum is observed for music, including classical music played by single instruments of different types, symphonic, jazz, and rock. The temporal modulations of speech and music show broad but well-separated peaks around 5 and 2 Hz, respectively. These acoustically dominant time scales may be intrinsic features of speech and music, a possibility which should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and its neural processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
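
    A common recipe for the modulation analysis described above is to extract the broadband intensity envelope of a recording and estimate the spectrum of that envelope in the 0.25-32 Hz range. The sketch below uses a Hilbert envelope and a Welch spectrum on a synthetic amplitude-modulated noise; the paper's filterbank-based procedure differs in detail, so this is only an illustration of the general idea.

        import numpy as np
        from scipy.signal import hilbert, resample_poly, welch

        # Synthetic stand-in: 10 s of noise modulated at 4 Hz (speech peaks near 5 Hz, music near 2 Hz).
        sr = 16000
        t = np.arange(10 * sr) / sr
        rng = np.random.default_rng(0)
        y = (1.0 + 0.8 * np.sin(2 * np.pi * 4.0 * t)) * rng.normal(size=t.size)

        envelope = np.abs(hilbert(y))                        # broadband intensity envelope
        env_sr = 100                                         # Hz; keep only slow modulations
        envelope = resample_poly(envelope, up=1, down=sr // env_sr)

        freqs, power = welch(envelope, fs=env_sr, nperseg=8 * env_sr)
        band = (freqs >= 0.25) & (freqs <= 32)
        print("dominant modulation rate: %.2f Hz" % freqs[band][np.argmax(power[band])])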

  1. Re-interventions on the thoracic and thoracoabdominal aorta in patients with Marfan syndrome.

    Science.gov (United States)

    Schoenhoff, Florian S; Carrel, Thierry P

    2017-11-01

    The advent of multi-gene panel genetic testing and the discovery of new syndromic and non-syndromic forms of connective tissue disorders have established thoracic aortic aneurysms as a genetically mediated disease. Surgical results in patients with Marfan syndrome (MFS) provide an important benchmark for this patient population. Prophylactic aortic root surgery prevents acute dissection and has contributed to the improved survival of MFS patients. In the majority of patients, re-interventions are driven by a history of dissection. Patients undergoing elective root repair have a low risk for re-interventions on the root itself. Experienced centers have results after valve-sparing procedures at 10 years comparable with those seen after a modified Bentall procedure. In patients where only the ascending aorta was replaced during the initial surgery, re-intervention rates are high as the root continues to dilate. The fate of the aortic arch in MFS patients presenting with dissection is strongly correlated with the extent of the initial surgery. Not replacing the entire ascending aorta and proximal aortic arch results in a high rate of re-interventions. Nevertheless, the additional burden of replacing the entire aortic arch during emergent proximal repair is not very well defined and makes comparisons with patients undergoing elective arch replacement difficult. Interestingly, replacing the entire aortic arch during initial surgery for acute dissection does not protect from re-interventions on downstream aortic segments. MFS patients suffering from type B dissection have a high risk for re-interventions ultimately leading up to replacement of the entire thoracoabdominal aorta even if the dissection was deemed uncomplicated by conventional criteria. While current guidelines do not recommend the implantation of stent grafts in MFS patients, implantation of a frozen-elephant-trunk to create a stable proximal landing zone for future endovascular or open procedures has

  2. Song and speech: examining the link between singing talent and speech imitation ability

    Science.gov (United States)

    Christiner, Markus; Reiterer, Susanne M.

    2013-01-01

    In previous research on speech imitation, musicality, and an ability to sing were isolated as the strongest indicators of good pronunciation skills in foreign languages. We, therefore, wanted to take a closer look at the nature of the ability to sing, which shares a common ground with the ability to imitate speech. This study focuses on whether good singing performance predicts good speech imitation. Forty-one singers of different levels of proficiency were selected for the study and their ability to sing, to imitate speech, their musical talent and working memory were tested. Results indicated that singing performance is a better indicator of the ability to imitate speech than the playing of a musical instrument. A multiple regression revealed that 64% of the speech imitation score variance could be explained by working memory together with educational background and singing performance. A second multiple regression showed that 66% of the speech imitation variance of completely unintelligible and unfamiliar language stimuli (Hindi) could be explained by working memory together with a singer's sense of rhythm and quality of voice. This supports the idea that both vocal behaviors have a common grounding in terms of vocal and motor flexibility, ontogenetic and phylogenetic development, neural orchestration and auditory memory with singing fitting better into the category of “speech” on the productive level and “music” on the acoustic level. As a result, good singers benefit from vocal and motor flexibility, productively and cognitively, in three ways. (1) Motor flexibility and the ability to sing improve language and musical function. (2) Good singers retain a certain plasticity and are open to new and unusual sound combinations during adulthood both perceptually and productively. (3) The ability to sing improves the memory span of the auditory working memory. PMID:24319438

  3. Dysfluencies in the speech of adults with intellectual disabilities and reported speech difficulties.

    Science.gov (United States)

    Coppens-Hofman, Marjolein C; Terband, Hayo R; Maassen, Ben A M; van Schrojenstein Lantman-De Valk, Henny M J; van Zaalen-op't Hof, Yvonne; Snik, Ad F M

    2013-01-01

    In individuals with an intellectual disability, speech dysfluencies are more common than in the general population. In clinical practice, these fluency disorders are generally diagnosed and treated as stuttering rather than cluttering. The aim was to characterise the type of dysfluencies in adults with intellectual disabilities and reported speech difficulties, with an emphasis on manifestations of stuttering and cluttering, a distinction that should help optimise treatment aimed at improving fluency and intelligibility. The dysfluencies in the spontaneous speech of 28 adults (18-40 years; 16 men) with mild and moderate intellectual disabilities (IQs 40-70), who were characterised as poorly intelligible by their caregivers, were analysed using the speech norms for typically developing adults and children. The speakers were subsequently assigned to different diagnostic categories by relating their resulting dysfluency profiles to mean articulatory rate and articulatory rate variability. Twenty-two (75%) of the participants showed clinically significant dysfluencies, of which 21% were classified as cluttering, 29% as cluttering-stuttering and 25% as clear cluttering at normal articulatory rate. The characteristic pattern of stuttering did not occur. The dysfluencies in the speech of adults with intellectual disabilities and poor intelligibility show patterns that are specific to this population. Together, the results suggest that in this specific group of dysfluent speakers interventions should be aimed at cluttering rather than stuttering. The reader will be able to (1) describe patterns of dysfluencies in the speech of adults with intellectual disabilities that are specific to this group of people, (2) explain that a high rate of dysfluencies in speech is potentially a major determinant of poor intelligibility in adults with ID and (3) describe suggestions for intervention focusing on cluttering rather than stuttering in dysfluent speakers with ID. Copyright © 2013 Elsevier Inc.

  4. The impact of language co-activation on L1 and L2 speech fluency.

    Science.gov (United States)

    Bergmann, Christopher; Sprenger, Simone A; Schmid, Monika S

    2015-10-01

    Fluent speech depends on the availability of well-established linguistic knowledge and routines for speech planning and articulation. A lack of speech fluency in late second-language (L2) learners may point to a deficiency of these representations due to incomplete acquisition. Experiments on bilingual language processing have shown, however, that there are strong reasons to believe that multilingual speakers experience co-activation of the languages they speak. We studied to what degree language co-activation affects fluency in the speech of bilinguals, comparing a monolingual German control group with two bilingual groups: 1) first-language (L1) attriters, who had fully acquired German before emigrating to an L2 English environment, and 2) immersed L2 learners of German (L1: English). We analysed the temporal fluency and the incidence of disfluency markers (pauses, repetitions and self-corrections) in spontaneous film retellings. Our findings show that learners speak more slowly than controls and attriters. Also, on each count, the speech of at least one of the bilingual groups contains more disfluency markers than the retellings of the control group. Generally speaking, both bilingual groups, learners and attriters, are equally (dis)fluent and significantly more disfluent than the monolingual speakers. Given that the L1 attriters are unaffected by incomplete acquisition, we interpret these findings as evidence for language competition during speech production. Copyright © 2015. Published by Elsevier B.V.

  5. A Diagnostic Marker to Discriminate Childhood Apraxia of Speech from Speech Delay: III. Theoretical Coherence of the Pause Marker with Speech Processing Deficits in Childhood Apraxia of Speech

    Science.gov (United States)

    Shriberg, Lawrence D.; Strand, Edythe A.; Fourakis, Marios; Jakielski, Kathy J.; Hall, Sheryl D.; Karlsson, Heather B.; Mabie, Heather L.; McSweeny, Jane L.; Tilkens, Christie M.; Wilson, David L.

    2017-01-01

    Purpose: Previous articles in this supplement described rationale for and development of the pause marker (PM), a diagnostic marker of childhood apraxia of speech (CAS), and studies supporting its validity and reliability. The present article assesses the theoretical coherence of the PM with speech processing deficits in CAS. Method: PM and other…

  6. Atomic mobility in liquid and fcc Al-Si-Mg-RE (RE = Ce, Sc) alloys and its application to the simulation of solidification processes in RE-containing A357 alloys

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Zhao; Zhang, Lijun [Central South Univ., Changsha (China). State Key Lab of Powder Metallurgy; Tang, Ying [Thermo-Calc Software AB, Solna (Sweden)

    2017-06-15

    This paper first provides a critical review of experimental and theoretically-predicted diffusivities in both liquid and fcc Al-Si-Mg-RE (RE = Ce, Sc) alloys as-reported by previous researchers. The modified Sutherland equation is then employed to predict self- and impurity diffusivities in Al-Si-Mg-RE melts. The self-diffusivity of metastable fcc Sc is evaluated via the first-principles computed activation energy and semi-empirical relations. Based on the critically-reviewed and presently evaluated diffusivity information, atomic mobility descriptions for liquid and fcc phases in the Al-Si-Mg-RE systems are established by means of the Diffusion-Controlled TRAnsformation (DICTRA) software package. Comprehensive comparisons show that most of the measured and theoretically-predicted diffusivities can be reasonably reproduced by the present atomic mobility descriptions. The atomic mobility descriptions for liquid and fcc Al-Si-Mg-RE alloys are further validated by comparing the model-predicted differential scanning calorimetry curves for RE-containing A357 alloys during solidification against experimental data. Detailed analysis of the curves and microstructures in RE-free and RE-containing A357 alloys indicates that both Ce and Sc can serve as the grain refiner for A357 alloys, and that the grain refinement efficiency of Sc is much higher.
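
    For orientation, each end-member parameter in an atomic-mobility description of this kind ultimately yields an Arrhenius-type tracer diffusivity, D* = M0 * exp(-Q / (R T)). The sketch below only shows that conversion with placeholder parameters; the assessed Al-Si-Mg-RE values, the composition-dependent interaction terms and the modified-Sutherland treatment of the liquid phase are not reproduced here.

        import numpy as np

        R = 8.314  # gas constant, J/(mol K)

        def tracer_diffusivity(M0, Q, T):
            """Arrhenius form implied by an end-member mobility parameter (result in m^2/s)."""
            return M0 * np.exp(-Q / (R * T))

        # Hypothetical frequency factor and activation energy for an impurity in fcc Al (placeholders).
        M0, Q = 1.0e-4, 150e3            # m^2/s, J/mol
        for T in (700.0, 800.0, 900.0):  # K
            print("T = %.0f K, D* = %.2e m^2/s" % (T, tracer_diffusivity(M0, Q, T)))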

  7. Atomic mobility in liquid and fcc Al-Si-Mg-RE (RE = Ce, Sc) alloys and its application to the simulation of solidification processes in RE-containing A357 alloys

    International Nuclear Information System (INIS)

    Lu, Zhao; Zhang, Lijun

    2017-01-01

    This paper first provides a critical review of experimental and theoretically-predicted diffusivities in both liquid and fcc Al-Si-Mg-RE (RE = Ce, Sc) alloys as-reported by previous researchers. The modified Sutherland equation is then employed to predict self- and impurity diffusivities in Al-Si-Mg-RE melts. The self-diffusivity of metastable fcc Sc is evaluated via the first-principles computed activation energy and semi-empirical relations. Based on the critically-reviewed and presently evaluated diffusivity information, atomic mobility descriptions for liquid and fcc phases in the Al-Si-Mg-RE systems are established by means of the Diffusion-Controlled TRAnsformation (DICTRA) software package. Comprehensive comparisons show that most of the measured and theoretically-predicted diffusivities can be reasonably reproduced by the present atomic mobility descriptions. The atomic mobility descriptions for liquid and fcc Al-Si-Mg-RE alloys are further validated by comparing the model-predicted differential scanning calorimetry curves for RE-containing A357 alloys during solidification against experimental data. Detailed analysis of the curves and microstructures in RE-free and RE-containing A357 alloys indicates that both Ce and Sc can serve as the grain refiner for A357 alloys, and that the grain refinement efficiency of Sc is much higher.

  8. Speech and language support: How physicians can identify and treat speech and language delays in the office setting.

    Science.gov (United States)

    Moharir, Madhavi; Barnett, Noel; Taras, Jillian; Cole, Martha; Ford-Jones, E Lee; Levin, Leo

    2014-01-01

    Failure to recognize and intervene early in speech and language delays can lead to multifaceted and potentially severe consequences for early child development and later literacy skills. While routine evaluations of speech and language during well-child visits are recommended, there is no standardized (office) approach to facilitate this. Furthermore, extensive wait times for speech and language pathology consultation represent valuable lost time for the child and family. Using speech and language expertise, and paediatric collaboration, key content for an office-based tool was developed. The tool aimed to help physicians achieve three main goals: early and accurate identification of speech and language delays as well as children at risk for literacy challenges; appropriate referral to speech and language services when required; and teaching and, thus, empowering parents to create rich and responsive language environments at home. Using this tool, in combination with the Canadian Paediatric Society's Read, Speak, Sing and Grow Literacy Initiative, physicians will be better positioned to offer practical strategies to caregivers to enhance children's speech and language capabilities. The tool represents a strategy to evaluate speech and language delays. It depicts age-specific linguistic/phonetic milestones and suggests interventions. The tool represents a practical interim treatment while the family is waiting for formal speech and language therapy consultation.

  9. An exploratory study on the driving method of speech synthesis based on the human eye reading imaging data

    Science.gov (United States)

    Gao, Pei-pei; Liu, Feng

    2016-10-01

    With the development of information technology and artificial intelligence, speech synthesis plays a significant role in the field of Human-Computer Interaction. However, the main problem of current speech synthesis techniques is a lack of naturalness and expressiveness, so that the output is not yet close to the standard of natural language. Another problem is that human-computer interaction based on speech synthesis is too monotonous to realize a mechanism of subjective user drive. This thesis introduces the historical development of speech synthesis and summarizes the general process of the technique, pointing out that the prosody generation module is an important part of the speech synthesis process. On the basis of further research, the use of eye-activity patterns during reading to control and drive prosody generation is introduced as a new human-computer interaction method that enriches the synthetic form. The present situation of speech synthesis technology is reviewed in detail. On the premise of eye-gaze data extraction, a speech synthesis method driven in real time by the eye-movement signal is proposed, one that can express the real speech rhythm of the speaker. That is, while the reader reads the corpus silently, the system captures reading information such as the eye-gaze duration per prosodic unit and establishes a hierarchical prosodic duration model to determine the duration parameters of the synthesized speech. Finally, the feasibility of the method is verified through analysis.
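
    A minimal sketch of the core idea, assuming a simple proportional scaling rule: per-unit gaze dwell times recorded during silent reading are mapped to duration targets for the synthesized speech. The scaling rule and the example dwell times are assumptions for illustration, not the authors' actual duration model.

```python
# Sketch of the idea only: derive per-unit duration parameters for synthesis
# from eye-gaze dwell times recorded during silent reading. The proportional
# scaling rule and the example dwell times are assumptions for illustration.
from typing import Dict

def duration_targets(gaze_ms: Dict[str, float],
                     total_speech_ms: float) -> Dict[str, float]:
    """Distribute the total target speech duration across prosodic units
    in proportion to their relative gaze dwell times."""
    total_gaze = sum(gaze_ms.values())
    return {unit: total_speech_ms * dwell / total_gaze
            for unit, dwell in gaze_ms.items()}

if __name__ == "__main__":
    # Hypothetical dwell times (ms) per prosodic unit from an eye tracker.
    gaze = {"in the morning": 420.0, "she walked": 310.0, "to the station": 530.0}
    print(duration_targets(gaze, total_speech_ms=2500.0))
```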

  10. Abortion and compelled physician speech.

    Science.gov (United States)

    Orentlicher, David

    2015-01-01

    Informed consent mandates for abortion providers may infringe the First Amendment's freedom of speech. On the other hand, they may reinforce the physician's duty to obtain informed consent. Courts can promote both doctrines by ensuring that compelled physician speech pertains to medical facts about abortion rather than abortion ideology and that compelled speech is truthful and not misleading. © 2015 American Society of Law, Medicine & Ethics, Inc.

  11. Speech enhancement

    CERN Document Server

    Benesty, Jacob; Chen, Jingdong

    2006-01-01

    We live in a noisy world! In all applications (telecommunications, hands-free communications, recording, human-machine interfaces, etc.) that require at least one microphone, the signal of interest is usually contaminated by noise and reverberation. As a result, the microphone signal has to be "cleaned" with digital signal processing tools before it is played out, transmitted, or stored. This book is about speech enhancement. Different well-known and state-of-the-art methods for noise reduction, with one or multiple microphones, are discussed. By speech enhancement, we mean not only noise red

  12. Real-time speech-driven animation of expressive talking faces

    Science.gov (United States)

    Liu, Jia; You, Mingyu; Chen, Chun; Song, Mingli

    2011-05-01

    In this paper, we present a real-time facial animation system in which speech drives mouth movements and facial expressions synchronously. Considering five basic emotions, a hierarchical structure with an upper layer of emotion classification is established. Based on the recognized emotion label, the lower-layer classification at the sub-phonemic level is modelled on the relationship between acoustic features of frames and audio labels in phonemes. Using certain constraints, the predicted emotion labels of speech are adjusted to obtain the facial expression labels, which are combined with sub-phonemic labels. The combinations are mapped into facial action units (FAUs), and audio-visual synchronized animation with mouth movements and facial expressions is generated by morphing between FAUs. The experimental results demonstrate that the two-layer structure succeeds in both emotion and sub-phonemic classifications, and the synthesized facial sequences reach a comparatively convincing quality.
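
    The two-layer idea can be sketched with stand-in classifiers: an upper layer predicts an utterance-level emotion and a lower layer predicts frame-level sub-phonemic labels conditioned on that emotion. The features, label sets and dimensions below are synthetic placeholders, not the authors' model.

```python
# Minimal two-layer classification sketch (not the authors' model): an upper
# layer predicts an utterance-level emotion, and a lower layer predicts
# frame-level sub-phonemic labels. Features and labels are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Upper layer: utterance-level acoustic features -> emotion label (5 classes).
X_utt = rng.normal(size=(200, 24))
y_emotion = rng.integers(0, 5, size=200)
emotion_clf = SVC().fit(X_utt, y_emotion)

# Lower layer: frame-level acoustic features -> sub-phonemic audio label,
# trained separately for each (hypothetical) emotion class.
frame_clfs = {}
for emo in range(5):
    X_frames = rng.normal(size=(300, 13))       # e.g. MFCC-like frame features
    y_subphone = rng.integers(0, 8, size=300)   # 8 placeholder sub-phonemic labels
    frame_clfs[emo] = SVC().fit(X_frames, y_subphone)

# At synthesis time: classify emotion, then label each frame under that emotion.
utt = rng.normal(size=(1, 24))
emo_pred = int(emotion_clf.predict(utt)[0])
frames = rng.normal(size=(50, 13))
subphone_pred = frame_clfs[emo_pred].predict(frames)
print(emo_pred, subphone_pred[:10])
```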

  13. Effect of speech rate variation on acoustic phone stability in Afrikaans speech recognition

    CSIR Research Space (South Africa)

    Badenhorst, JAC

    2007-11-01

    Full Text Available The authors analyse the effect of speech rate variation on Afrikaans phone stability from an acoustic perspective. Specifically they introduce two techniques for the acoustic analysis of speech rate variation, apply these techniques to an Afrikaans...

  14. Speech, "Inner Speech," and the Development of Short-Term Memory: Effects of Picture-Labeling on Recall.

    Science.gov (United States)

    Hitch, Graham J.; And Others

    1991-01-01

    Reports on experiments to determine effects of overt speech on children's use of inner speech in short-term memory. Word length and phonemic similarity had greater effects on older children and when pictures were labeled at presentation. Suggests that speaking or listening to speech activates an internal articulatory loop. (Author/GH)

  15. Phonetic recalibration of speech by text

    NARCIS (Netherlands)

    Keetels, M.N.; Schakel, L.; de Bonte, M.; Vroomen, J.

    2016-01-01

    Listeners adjust their phonetic categories to cope with variations in the speech signal (phonetic recalibration). Previous studies have shown that lipread speech (and word knowledge) can adjust the perception of ambiguous speech and can induce phonetic adjustments (Bertelson, Vroomen, & de Gelder in

  16. Epoch-based analysis of speech signals

    Indian Academy of Sciences (India)

    on speech production characteristics, but also helps in accurate analysis of speech. … include time delay estimation, speech enhancement from single and multi- … $\log\left(\frac{E[k]}{\sum_{l=0}^{K-1} E[l]}\right)$, (7) where K is the number of samples in the …

  17. Automatic Speech Recognition Systems for the Evaluation of Voice and Speech Disorders in Head and Neck Cancer

    OpenAIRE

    Andreas Maier; Tino Haderlein; Florian Stelzle; Elmar Nöth; Emeka Nkenke; Frank Rosanowski; Anne Schützenberger; Maria Schuster

    2010-01-01

    In patients suffering from head and neck cancer, speech intelligibility is often restricted. For assessment and outcome measurements, automatic speech recognition systems have previously been shown to be appropriate for objective and quick evaluation of intelligibility. In this study we investigate the applicability of the method to speech disorders caused by head and neck cancer. Intelligibility was quantified by speech recognition on recordings of a standard text read by 41 German laryngect...

  18. Nobel peace speech

    Directory of Open Access Journals (Sweden)

    Joshua FRYE

    2017-07-01

    Full Text Available The Nobel Peace Prize has long been considered the premier peace prize in the world. According to Geir Lundestad, Secretary of the Nobel Committee, of the 300-some peace prizes awarded worldwide, “none is in any way as well known and as highly respected as the Nobel Peace Prize” (Lundestad, 2001). Nobel peace speech is a unique and significant international site of public discourse committed to articulating the universal grammar of peace. Spanning over 100 years of sociopolitical history on the world stage, Nobel Peace Laureates richly represent an important cross-section of domestic and international issues increasingly germane to many publics. Communication scholars’ interest in this rhetorical genre has increased in the past decade. Yet, the norm has been to analyze a single speech artifact from a prestigious or controversial winner rather than examine the collection of speeches for generic commonalities of import. In this essay, we analyze the discourse of Nobel peace speech inductively and argue that the organizing principle of the Nobel peace speech genre is the repetitive form of normative liberal principles and values that function as rhetorical topoi. These topoi include freedom and justice and appeal to the inviolable, inborn right of human beings to exercise certain political and civil liberties and the expectation of equality of protection from totalitarian and tyrannical abuses. The significance of this essay to contemporary communication theory is to expand our theoretical understanding of rhetoric’s role in the maintenance and development of an international and cross-cultural vocabulary for the grammar of peace.

  19. Instability in newly-established wetlands? Trajectories of floristic change in the re-flooded Hula peatland, northern Israel

    Directory of Open Access Journals (Sweden)

    D. Kaplan

    2012-01-01

    Full Text Available Drainage of the 6,000 ha Hula Lake and peatland in northern Israel in the late 1950s caused the loss of a very diverse and rare ecosystem and an important phytogeographic meeting zone for Holarctic and Palaeotropical species. Draining the Hula peatland was only partially successful in creating a large fertile area for cultivation, and in 1994 this led the authorities to re-flood 100 ha of the valley, the Agamon (Agmon), with the aim of rehabilitating the diverse wetland landscape, promoting ecotourism and creating a clear-water body that would contribute to the purification of Lake Kinneret. The vegetation of the restored wetland was monitored for ten years (1997–2006), recording the establishment and abundance of vascular plant species. More than 20 emergent, submerged and riparian species became established. Like a number of other shallow-water wetlands, the Agamon is characterised by considerable ecological fluctuations. This has been expressed in prominent floristic changes in the Agamon since it was created. An increased abundance of Ceratophyllum demersum and Najas minor and a decline in Potamogeton spp., Najas delilei and filamentous algae have been observed. A long-term decline in water level and sediment accumulation has brought about a significant rise in the incidence of Phragmites australis, Typha domingensis and Ludwigia stolonifera in the south-eastern area. A GIS analysis of changes in species dominance shows fluctuations over the years, with only a partial trend of succession towards a P. australis, T. domingensis and L. stolonifera community.

  20. Effectiveness of prescribed fire to re-establish sagebrush vegetation and ecohydrologic function on woodland-encroached sagebrush steppe, Great Basin, USA

    Science.gov (United States)

    Williams, C. J.; Pierson, F. B.; Kormos, P.; Al-Hamdan, O. Z.; Nouwakpo, S.; Weltz, M.; Vega, S.; Lindsay, K.

    2017-12-01

    Range expansion of pinyon (Pinus spp.) and juniper (Juniperus spp.) conifers into sagebrush steppe (Artemisia spp.) communities has imperiled a vast domain in the western US. Encroachment of sagebrush ecosystems by pinyon and juniper conifers has negative ramifications to ecosystem structure and function and delivery of goods and services. Scientists, land management agencies, and private land owners throughout the western US are challenged with selecting from a suite of options to reduce pinyon and juniper woody fuels and re-establish sagebrush steppe structure and function. This study evaluated the effectiveness of prescribed fire to re-establish sagebrush vegetation and ecohydrologic function over a 9 yr period. Nine years post-fire hydrologic and erosion responses reflect the combination of pre-fire site conditions, perennial grass recruitment, delayed litter cover, and inherent site characteristics. Burning initially increased bare ground, runoff, and erosion for well-vegetated areas underneath tree and shrub canopies, but had minimal impact on hydrology and erosion for degraded interspaces between plants. The degraded interspaces were primarily bare ground and exhibited high runoff and erosion rates prior to burning. Initial fire effects persisted for two years, but increased productivity of grasses improved hydrologic function of interspaces over the full 9 yr period. At the hillslope scale, grass recruitment in the intercanopy between trees reduced runoff from rainsplash, sheetflow, and concentrated overland flow at one site, but did not reduce the high levels of runoff and erosion from a more degraded site. In areas formerly occupied by trees (tree zones), burning increased invasive annual grass cover due to fire removal of limited native perennial plants and competition for resources. The invasive annual grass cover had no net effect on runoff and erosion from tree zones however. Runoff and erosion increased in tree zones at the more degraded site due to

  1. Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training.

    Science.gov (United States)

    Bernstein, Lynne E; Auer, Edward T; Eberhardt, Silvio P; Jiang, Jintao

    2013-01-01

    Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.

  2. Re-Operationalizing Established Groups in Brainstorming: Validating Osborn's Claims

    Science.gov (United States)

    Levine, Kenneth J.; Heuett, Kyle B.; Reno, Katie M.

    2017-01-01

    Since the introduction of brainstorming as an idea-generation technique to address organizational problems, researchers have struggled to replicate some of the claims around the technique. One major concern has been the differences in the number of ideas generated between established groups as found in industry versus the non-established groups…

  3. Speech-Language Dissociations, Distractibility, and Childhood Stuttering

    Science.gov (United States)

    Conture, Edward G.; Walden, Tedra A.; Lambert, Warren E.

    2015-01-01

    Purpose: This study investigated the relation among speech-language dissociations, attentional distractibility, and childhood stuttering. Method: Participants were 82 preschool-age children who stutter (CWS) and 120 who do not stutter (CWNS). Correlation-based statistics (Bates, Appelbaum, Salcedo, Saygin, & Pizzamiglio, 2003) identified dissociations across 5 norm-based speech-language subtests. The Behavioral Style Questionnaire Distractibility subscale measured attentional distractibility. Analyses addressed (a) between-groups differences in the number of children exhibiting speech-language dissociations; (b) between-groups distractibility differences; (c) the relation between distractibility and speech-language dissociations; and (d) whether interactions between distractibility and dissociations predicted the frequency of total, stuttered, and nonstuttered disfluencies. Results: More preschool-age CWS exhibited speech-language dissociations compared with CWNS, and more boys exhibited dissociations compared with girls. In addition, male CWS were less distractible than female CWS and female CWNS. For CWS, but not CWNS, less distractibility (i.e., greater attention) was associated with more speech-language dissociations. Last, interactions between distractibility and dissociations did not predict speech disfluencies in CWS or CWNS. Conclusions: The present findings suggest that for preschool-age CWS, attentional processes are associated with speech-language dissociations. Future investigations are warranted to better understand the directionality of effect of this association (e.g., inefficient attentional processes → speech-language dissociations vs. inefficient attentional processes ← speech-language dissociations). PMID:26126203

  4. International aspirations for speech-language pathologists' practice with multilingual children with speech sound disorders: development of a position paper.

    Science.gov (United States)

    McLeod, Sharynne; Verdon, Sarah; Bowen, Caroline

    2013-01-01

    A major challenge for the speech-language pathology profession in many cultures is to address the mismatch between the "linguistic homogeneity of the speech-language pathology profession and the linguistic diversity of its clientele" (Caesar & Kohler, 2007, p. 198). This paper outlines the development of the Multilingual Children with Speech Sound Disorders: Position Paper created to guide speech-language pathologists' (SLPs') facilitation of multilingual children's speech. An international expert panel was assembled comprising 57 researchers (SLPs, linguists, phoneticians, and speech scientists) with knowledge about multilingual children's speech, or children with speech sound disorders. Combined, they had worked in 33 countries and used 26 languages in professional practice. Fourteen panel members met for a one-day workshop to identify key points for inclusion in the position paper. Subsequently, 42 additional panel members participated online to contribute to drafts of the position paper. A thematic analysis was undertaken of the major areas of discussion using two data sources: (a) face-to-face workshop transcript (133 pages) and (b) online discussion artifacts (104 pages). Finally, a moderator with international expertise in working with children with speech sound disorders facilitated the incorporation of the panel's recommendations. The following themes were identified: definitions, scope, framework, evidence, challenges, practices, and consideration of a multilingual audience. The resulting position paper contains guidelines for providing services to multilingual children with speech sound disorders (http://www.csu.edu.au/research/multilingual-speech/position-paper). The paper is structured using the International Classification of Functioning, Disability and Health: Children and Youth Version (World Health Organization, 2007) and incorporates recommendations for (a) children and families, (b) SLPs' assessment and intervention, (c) SLPs' professional

  5. Free Speech. No. 38.

    Science.gov (United States)

    Kane, Peter E., Ed.

    This issue of "Free Speech" contains the following articles: "Daniel Schoor Relieved of Reporting Duties" by Laurence Stern, "The Sellout at CBS" by Michael Harrington, "Defending Dan Schorr" by Tome Wicker, "Speech to the Washington Press Club, February 25, 1976" by Daniel Schorr, "Funds…

  6. APPRECIATING SPEECH THROUGH GAMING

    Directory of Open Access Journals (Sweden)

    Mario T Carreon

    2014-06-01

    Full Text Available This paper discusses the Speech and Phoneme Recognition as an Educational Aid for the Deaf and Hearing Impaired (SPREAD) application and the ongoing research on its deployment as a tool for motivating deaf and hearing impaired students to learn and appreciate speech. This application uses the Sphinx-4 voice recognition system to analyze the vocalization of the student and provide prompt feedback on their pronunciation. The packaging of the application as an interactive game aims to provide additional, visual motivation for deaf and hearing impaired students to learn and appreciate speech.

  7. Global Freedom of Speech

    DEFF Research Database (Denmark)

    Binderup, Lars Grassme

    2007-01-01

    , as opposed to a legal norm, that curbs exercises of the right to free speech that offend the feelings or beliefs of members from other cultural groups. The paper rejects the suggestion that acceptance of such a norm is in line with liberal egalitarian thinking. Following a review of the classical liberal...... egalitarian reasons for free speech - reasons from overall welfare, from autonomy and from respect for the equality of citizens - it is argued that these reasons outweigh the proposed reasons for curbing culturally offensive speech. Currently controversial cases such as that of the Danish Cartoon Controversy...

  8. Extensions to the Speech Disorders Classification System (SDCS)

    Science.gov (United States)

    Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.

    2010-01-01

    This report describes three extensions to a classification system for paediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). Part I describes a classification extension to the SDCS to differentiate motor speech disorders from speech delay and to differentiate among three sub-types of motor speech disorders.…

  9. Frontal and temporal contributions to understanding the iconic co-speech gestures that accompany speech.

    Science.gov (United States)

    Dick, Anthony Steven; Mok, Eva H; Raja Beharelle, Anjali; Goldin-Meadow, Susan; Small, Steven L

    2014-03-01

    In everyday conversation, listeners often rely on a speaker's gestures to clarify any ambiguities in the verbal message. Using fMRI during naturalistic story comprehension, we examined which brain regions in the listener are sensitive to speakers' iconic gestures. We focused on iconic gestures that contribute information not found in the speaker's talk, compared with those that convey information redundant with the speaker's talk. We found that three regions-left inferior frontal gyrus triangular (IFGTr) and opercular (IFGOp) portions, and left posterior middle temporal gyrus (MTGp)--responded more strongly when gestures added information to nonspecific language, compared with when they conveyed the same information in more specific language; in other words, when gesture disambiguated speech as opposed to reinforced it. An increased BOLD response was not found in these regions when the nonspecific language was produced without gesture, suggesting that IFGTr, IFGOp, and MTGp are involved in integrating semantic information across gesture and speech. In addition, we found that activity in the posterior superior temporal sulcus (STSp), previously thought to be involved in gesture-speech integration, was not sensitive to the gesture-speech relation. Together, these findings clarify the neurobiology of gesture-speech integration and contribute to an emerging picture of how listeners glean meaning from gestures that accompany speech. Copyright © 2012 Wiley Periodicals, Inc.

  10. Freedom of racist speech: Ego and expressive threats.

    Science.gov (United States)

    White, Mark H; Crandall, Christian S

    2017-09-01

    Do claims of "free speech" provide cover for prejudice? We investigate whether this defense of racist or hate speech serves as a justification for prejudice. In a series of 8 studies (N = 1,624), we found that explicit racial prejudice is a reliable predictor of the "free speech defense" of racist expression. Participants endorsed free speech values for singing racists songs or posting racist comments on social media; people high in prejudice endorsed free speech more than people low in prejudice (meta-analytic r = .43). This endorsement was not principled-high levels of prejudice did not predict endorsement of free speech values when identical speech was directed at coworkers or the police. Participants low in explicit racial prejudice actively avoided endorsing free speech values in racialized conditions compared to nonracial conditions, but participants high in racial prejudice increased their endorsement of free speech values in racialized conditions. Three experiments failed to find evidence that defense of racist speech by the highly prejudiced was based in self-relevant or self-protective motives. Two experiments found evidence that the free speech argument protected participants' own freedom to express their attitudes; the defense of other's racist speech seems motivated more by threats to autonomy than threats to self-regard. These studies serve as an elaboration of the Justification-Suppression Model (Crandall & Eshleman, 2003) of prejudice expression. The justification of racist speech by endorsing fundamental political values can serve to buffer racial and hate speech from normative disapproval. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  11. Rehabilitating a patient with bruxism-associated tooth tissue loss: a literature review and case report.

    Science.gov (United States)

    Yip, Kevin Hak-Kong; Chow, Tak W; Chu, Frederick C S

    2003-01-01

    Tooth tissue loss from bruxism has been demonstrated to be associated with various dental problems such as tooth sensitivity, excessive reduction of clinical crown height, and possible changes of occlusal relationship. A literature search revealed a number of treatment modalities, with an emphasis on prevention and rehabilitation with adhesive techniques. Rehabilitating a patient with bruxism-associated tooth tissue loss to an acceptable standard of oral health is clinically demanding and requires careful diagnosis and proper treatment planning. This article describes the management of excessive tooth tissue loss in a 43-year-old woman with a history of bruxism. The occlusal vertical dimension of the patient was re-established with the use of an acrylic maxillary occlusal splint, followed by resin composite build-up. Full-mouth oral rehabilitation ultimately involved constructing multiple porcelain veneers, adhesive gold onlays, ceramo-metal crowns, and fixed partial dentures.

  12. Shared acoustic codes underlie emotional communication in music and speech-Evidence from deep transfer learning.

    Science.gov (United States)

    Coutinho, Eduardo; Schuller, Björn

    2017-01-01

    Music and speech exhibit striking similarities in the communication of emotions in the acoustic domain, in such a way that the communication of specific emotions is achieved, at least to a certain extent, by means of shared acoustic patterns. From an Affective Sciences point of view, determining the degree of overlap between both domains is fundamental to understanding the shared mechanisms underlying this phenomenon. From a Machine Learning perspective, the overlap between acoustic codes for emotional expression in music and speech opens new possibilities to enlarge the amount of data available to develop music and speech emotion recognition systems. In this article, we investigate time-continuous predictions of emotion (Arousal and Valence) in music and speech, and the Transfer Learning between these domains. We establish a comparative framework including intra- (i.e., models trained and tested on the same modality, either music or speech) and cross-domain experiments (i.e., models trained in one modality and tested on the other). In the cross-domain context, we evaluated two strategies: the direct transfer between domains, and the contribution of Transfer Learning techniques (feature-representation transfer based on Denoising Auto Encoders) for reducing the gap in the feature space distributions. Our results demonstrate an excellent cross-domain generalisation performance with and without feature representation transfer in both directions. In the case of music, cross-domain approaches outperformed intra-domain models for Valence estimation, whereas for speech intra-domain models achieved the best performance. This is the first demonstration of shared acoustic codes for emotional expression in music and speech in the time-continuous domain.
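
    The feature-representation-transfer idea can be illustrated with a stand-in: a small network trained to reconstruct clean acoustic features from noise-corrupted ones (a denoising auto-encoder in spirit), whose hidden layer then serves as a shared representation for both domains. The architecture, sizes and data below are assumptions for illustration, not the authors' system.

```python
# Stand-in sketch of feature-representation transfer with a denoising
# auto-encoder: an MLP is trained to reconstruct clean acoustic features from
# noise-corrupted ones, and its hidden layer is then used as a shared
# representation for both domains. Architecture, sizes, and data are
# assumptions for illustration, not the authors' actual system.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def corrupt(x, noise=0.3):
    return x + noise * rng.normal(size=x.shape)

# Synthetic "music" and "speech" acoustic feature frames (e.g. 40-dim).
music = rng.normal(size=(500, 40))
speech = rng.normal(size=(500, 40))

# Train the denoising auto-encoder on the pooled data.
pooled = np.vstack([music, speech])
dae = MLPRegressor(hidden_layer_sizes=(16,), activation="relu",
                   max_iter=500, random_state=0)
dae.fit(corrupt(pooled), pooled)

def encode(x):
    """Hidden-layer code of the trained network (shared representation)."""
    h = x @ dae.coefs_[0] + dae.intercepts_[0]
    return np.maximum(h, 0.0)  # ReLU

music_code = encode(music)
speech_code = encode(speech)
print(music_code.shape, speech_code.shape)  # (500, 16) (500, 16)
```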

  13. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    International Nuclear Information System (INIS)

    Holzrichter, J.F.; Ng, L.C.

    1998-01-01

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well-defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs
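
    A minimal sketch of the deconvolution step described above: a per-frame transfer function is estimated from an excitation estimate and the acoustic output by regularized spectral division. The signals here are synthetic; this illustrates the operation only, not the authors' actual coding system.

```python
# Sketch of the frequency-domain deconvolution step: estimate a per-frame
# vocal-tract transfer function H(f) from an excitation estimate e[n] and the
# acoustic output s[n] via regularized spectral division. Signals here are
# synthetic; this illustrates the operation only, not the patented method.
import numpy as np

def transfer_function(excitation: np.ndarray, output: np.ndarray,
                      eps: float = 1e-6) -> np.ndarray:
    """H(f) = S(f) / E(f), with a small regularizer to avoid division by ~0."""
    E = np.fft.rfft(excitation)
    S = np.fft.rfft(output)
    return S * np.conj(E) / (np.abs(E) ** 2 + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n = 256
    e = rng.normal(size=n)                      # excitation estimate (one frame)
    h_true = np.array([1.0, 0.6, 0.3, 0.1])     # toy vocal-tract impulse response
    s = np.convolve(e, h_true)                  # acoustic output (full convolution)
    # Zero-pad the excitation to the output length so the division is exact.
    e_pad = np.concatenate([e, np.zeros(len(s) - len(e))])
    H = transfer_function(e_pad, s)
    h_est = np.fft.irfft(H, n=len(s))[:4]
    print(np.round(h_est, 2))                   # approximately [1.0, 0.6, 0.3, 0.1]
```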

  14. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    Science.gov (United States)

    Holzrichter, John F.; Ng, Lawrence C.

    1998-01-01

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well-defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching.

  15. Speech and language support: How physicians can identify and treat speech and language delays in the office setting

    Science.gov (United States)

    Moharir, Madhavi; Barnett, Noel; Taras, Jillian; Cole, Martha; Ford-Jones, E Lee; Levin, Leo

    2014-01-01

    Failure to recognize and intervene early in speech and language delays can lead to multifaceted and potentially severe consequences for early child development and later literacy skills. While routine evaluations of speech and language during well-child visits are recommended, there is no standardized (office) approach to facilitate this. Furthermore, extensive wait times for speech and language pathology consultation represent valuable lost time for the child and family. Using speech and language expertise, and paediatric collaboration, key content for an office-based tool was developed. The tool aimed to help physicians achieve three main goals: early and accurate identification of speech and language delays as well as children at risk for literacy challenges; appropriate referral to speech and language services when required; and teaching and, thus, empowering parents to create rich and responsive language environments at home. Using this tool, in combination with the Canadian Paediatric Society’s Read, Speak, Sing and Grow Literacy Initiative, physicians will be better positioned to offer practical strategies to caregivers to enhance children’s speech and language capabilities. The tool represents a strategy to evaluate speech and language delays. It depicts age-specific linguistic/phonetic milestones and suggests interventions. The tool represents a practical interim treatment while the family is waiting for formal speech and language therapy consultation. PMID:24627648

  16. Application of wavelets in speech processing

    CERN Document Server

    Farouk, Mohamed Hesham

    2014-01-01

    This book provides a survey on wide-spread of employing wavelets analysis  in different applications of speech processing. The author examines development and research in different application of speech processing. The book also summarizes the state of the art research on wavelet in speech processing.

  17. Recent advances in nonlinear speech processing

    CERN Document Server

    Faundez-Zanuy, Marcos; Esposito, Antonietta; Cordasco, Gennaro; Drugman, Thomas; Solé-Casals, Jordi; Morabito, Francesco

    2016-01-01

    This book presents recent advances in nonlinear speech processing beyond nonlinear techniques. It shows that it exploits heuristic and psychological models of human interaction in order to succeed in the implementations of socially believable VUIs and applications for human health and psychological support. The book takes into account the multifunctional role of speech and what is “outside of the box” (see Björn Schuller’s foreword). To this aim, the book is organized in 6 sections, each collecting a small number of short chapters reporting advances “inside” and “outside” themes related to nonlinear speech research. The themes emphasize theoretical and practical issues for modelling socially believable speech interfaces, ranging from efforts to capture the nature of sound changes in linguistic contexts and the timing nature of speech; labors to identify and detect speech features that help in the diagnosis of psychological and neuronal disease, attempts to improve the effectiveness and performa...

  18. Speech and non-speech processing in children with phonological disorders: an electrophysiological study

    Directory of Open Access Journals (Sweden)

    Isabela Crivellaro Gonçalves

    2011-01-01

    Full Text Available OBJECTIVE: To determine whether neurophysiological auditory brainstem responses to clicks and repeated speech stimuli differ between typically developing children and children with phonological disorders. INTRODUCTION: Phonological disorders are language impairments resulting from inadequate use of adult phonological language rules and are among the most common speech and language disorders in children (prevalence: 8 - 9%. Our hypothesis is that children with phonological disorders have basic differences in the way that their brains encode acoustic signals at brainstem level when compared to normal counterparts. METHODS: We recorded click and speech evoked auditory brainstem responses in 18 typically developing children (control group and in 18 children who were clinically diagnosed with phonological disorders (research group. The age range of the children was from 7-11 years. RESULTS: The research group exhibited significantly longer latency responses to click stimuli (waves I, III and V and speech stimuli (waves V and A when compared to the control group. DISCUSSION: These results suggest that the abnormal encoding of speech sounds may be a biological marker of phonological disorders. However, these results cannot define the biological origins of phonological problems. We also observed that speech-evoked auditory brainstem responses had a higher specificity/sensitivity for identifying phonological disorders than click-evoked auditory brainstem responses. CONCLUSIONS: Early stages of the auditory pathway processing of an acoustic stimulus are not similar in typically developing children and those with phonological disorders. These findings suggest that there are brainstem auditory pathway abnormalities in children with phonological disorders.

  19. Conflict monitoring in speech processing : An fMRI study of error detection in speech production and perception

    NARCIS (Netherlands)

    Gauvin, Hanna; De Baene, W.; Brass, Marcel; Hartsuiker, Robert

    2016-01-01

    To minimize the number of errors in speech, and thereby facilitate communication, speech is monitored before articulation. It is, however, unclear at which level during speech production monitoring takes place, and what mechanisms are used to detect and correct errors. The present study investigated

  20. Religious Speech in the Military: Freedoms and Limitations

    Science.gov (United States)

    2011-01-01

    abridging the freedom of speech.” Speech is construed broadly and includes both oral and written speech, as well as expressive conduct and displays when...intended to convey a message that is likely to be understood. Religious speech is certainly included. As a bedrock constitutional right, freedom of speech has...to good order and discipline or of a nature to bring discredit upon the armed forces)—the First Amendment’s freedom of speech will not provide them

  1. Perceived Speech Quality Estimation Using DTW Algorithm

    Directory of Open Access Journals (Sweden)

    S. Arsenovski

    2009-06-01

    Full Text Available In this paper, a method for speech quality estimation is evaluated by simulating the transfer of speech over packet-switched and mobile networks. The proposed system uses the Dynamic Time Warping algorithm to compare the test and received speech. Several tests have been made on a test speech sample of a single speaker with simulated packet (frame) loss effects on the perceived speech. The achieved results have been compared with measured PESQ values on the used transmission channel and their correlation has been observed.
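
    A compact dynamic time warping sketch, as a stand-in for the comparison of reference and received speech described above. Frame-level feature extraction (e.g. MFCCs) is omitted and the arrays are synthetic.

```python
# Basic dynamic time warping (DTW) distance between two feature sequences,
# as a stand-in for comparing the reference and received speech. Feature
# extraction (e.g. MFCCs per frame) is omitted; the arrays are synthetic.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between sequences a (n x d) and b (m x d)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    reference = rng.normal(size=(120, 13))            # e.g. MFCC frames
    degraded = reference[::2] + 0.1 * rng.normal(size=(60, 13))  # simulated loss
    print("DTW distance:", round(dtw_distance(reference, degraded), 2))
```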

  2. Audiovisual Asynchrony Detection in Human Speech

    Science.gov (United States)

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  3. Detection of target phonemes in spontaneous and read speech.

    Science.gov (United States)

    Mehta, G; Cutler, A

    1988-01-01

    Although spontaneous speech occurs more frequently in most listeners' experience than read speech, laboratory studies of human speech recognition typically use carefully controlled materials read from a script. The phonological and prosodic characteristics of spontaneous and read speech differ considerably, however, which suggests that laboratory results may not generalise to the recognition of spontaneous speech. In the present study listeners were presented with both spontaneous and read speech materials, and their response time to detect word-initial target phonemes was measured. Responses were, overall, equally fast in each speech mode. However, analysis of effects previously reported in phoneme detection studies revealed significant differences between speech modes. In read speech but not in spontaneous speech, later targets were detected more rapidly than targets preceded by short words. In contrast, in spontaneous speech but not in read speech, targets were detected more rapidly in accented than in unaccented words and in strong than in weak syllables. An explanation for this pattern is offered in terms of characteristic prosodic differences between spontaneous and read speech. The results support claims from previous work that listeners pay great attention to prosodic information in the process of recognising speech.

  4. Voice Activity Detection. Fundamentals and Speech Recognition System Robustness

    OpenAIRE

    Ramirez, J.; Gorriz, J. M.; Segura, J. C.

    2007-01-01

    This chapter has presented an overview of the main challenges in robust speech detection and a review of the state of the art and applications. VADs are frequently used in a number of applications including speech coding, speech enhancement and speech recognition. A precise VAD extracts a set of discriminative speech features from the noisy speech and formulates the decision in terms of a well-defined rule. The chapter has summarized three robust VAD methods that yield high speech/non-speech discri...

  5. Religion, hate speech, and non-domination

    OpenAIRE

    Bonotti, Matteo

    2017-01-01

    In this paper I argue that one way of explaining what is wrong with hate speech is by critically assessing what kind of freedom free speech involves and, relatedly, what kind of freedom hate speech undermines. More specifically, I argue that the main arguments for freedom of speech (e.g. from truth, from autonomy, and from democracy) rely on a “positive” conception of freedom intended as autonomy and self-mastery (Berlin, 2006), and can only partially help us to understand what is wrong with ...

  6. Modelling speech intelligibility in adverse conditions

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2013-01-01

    Jørgensen and Dau (J Acoust Soc Am 130:1475-1487, 2011) proposed the speech-based envelope power spectrum model (sEPSM) in an attempt to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII) in conditions with nonlinearly processed speech … subjected to phase jitter, a condition in which the spectral structure of the speech signal is strongly affected, while the broadband temporal envelope is kept largely intact. In contrast, the effects of this distortion can be predicted successfully by the spectro-temporal modulation … suggest that the SNRenv might reflect a powerful decision metric, while some explicit across-frequency analysis seems crucial in some conditions. How such across-frequency analysis is "realized" in the auditory system remains unresolved…
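
    A highly simplified illustration of the SNRenv idea, assuming the crudest possible form: compare the normalized envelope power of speech and noise obtained from Hilbert envelopes in a single band. The model's peripheral and modulation filterbanks are omitted and the signals are synthetic, so this is not the published model.

```python
# Highly simplified illustration of the SNRenv idea behind the sEPSM: compare
# the AC envelope power of speech and noise obtained from Hilbert envelopes.
# This omits the model's peripheral filterbank and modulation filterbank and
# uses synthetic signals, so it is not the published model.
import numpy as np
from scipy.signal import hilbert

def envelope_power(x: np.ndarray) -> float:
    """AC power of the Hilbert envelope, normalized by its squared mean."""
    env = np.abs(hilbert(x))
    return np.var(env) / (np.mean(env) ** 2)

def snr_env_db(speech: np.ndarray, noise: np.ndarray) -> float:
    return 10.0 * np.log10(envelope_power(speech) / envelope_power(noise))

if __name__ == "__main__":
    fs = 16000
    t = np.arange(fs) / fs
    rng = np.random.default_rng(4)
    # Toy "speech": noise carrier with a strong 4 Hz envelope modulation.
    speech = (1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)) * rng.normal(size=fs)
    noise = rng.normal(size=fs)                      # unmodulated masker
    print(f"SNRenv = {snr_env_db(speech, noise):.1f} dB")
```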

  7. Speech and audio processing for coding, enhancement and recognition

    CERN Document Server

    Togneri, Roberto; Narasimha, Madihally

    2015-01-01

    This book describes the basic principles underlying the generation, coding, transmission and enhancement of speech and audio signals, including advanced statistical and machine learning techniques for speech and speaker recognition, with an overview of the key innovations in these areas. Key research undertaken in speech coding, speech enhancement, speech recognition, emotion recognition and speaker diarization is also presented, along with recent advances and new paradigms in these areas. · Offers readers a single-source reference on the significant applications of speech and audio processing to speech coding, speech enhancement and speech/speaker recognition; · Enables readers involved in algorithm development and implementation issues for speech coding to understand the historical development and future challenges in speech coding research; · Discusses speech coding methods yielding bit-streams that are multi-rate and scalable for Voice-over-IP (VoIP) networks; · …

  8. Microscopic prediction of speech intelligibility in spatially distributed speech-shaped noise for normal-hearing listeners.

    Science.gov (United States)

    Geravanchizadeh, Masoud; Fallah, Ali

    2015-12-01

    A binaural and psychoacoustically motivated intelligibility model, based on a well-known monaural microscopic model is proposed. This model simulates a phoneme recognition task in the presence of spatially distributed speech-shaped noise in anechoic scenarios. In the proposed model, binaural advantage effects are considered by generating a feature vector for a dynamic-time-warping speech recognizer. This vector consists of three subvectors incorporating two monaural subvectors to model the better-ear hearing, and a binaural subvector to simulate the binaural unmasking effect. The binaural unit of the model is based on equalization-cancellation theory. This model operates blindly, which means separate recordings of speech and noise are not required for the predictions. Speech intelligibility tests were conducted with 12 normal hearing listeners by collecting speech reception thresholds (SRTs) in the presence of single and multiple sources of speech-shaped noise. The comparison of the model predictions with the measured binaural SRTs, and with the predictions of a macroscopic binaural model called extended equalization-cancellation, shows that this approach predicts the intelligibility in anechoic scenarios with good precision. The square of the correlation coefficient (r(2)) and the mean-absolute error between the model predictions and the measurements are 0.98 and 0.62 dB, respectively.
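
    The better-ear component of such a model can be sketched in isolation: per-frequency-band SNRs at the two ears are compared and the larger one is kept in each band. The equalization-cancellation stage and the dynamic-time-warping recognizer are omitted here, and the band SNRs are made-up numbers.

```python
# Sketch of the better-ear component only: per-frequency-band SNRs at the two
# ears are compared and the larger one is kept in each band. The model's
# equalization-cancellation stage and DTW recognizer are omitted, and the
# band SNRs below are made-up numbers.
import numpy as np

def better_ear_snr(snr_left_db: np.ndarray, snr_right_db: np.ndarray) -> np.ndarray:
    """Band-wise maximum of the left- and right-ear SNRs (in dB)."""
    return np.maximum(snr_left_db, snr_right_db)

if __name__ == "__main__":
    bands_hz = np.array([250, 500, 1000, 2000, 4000])
    snr_left = np.array([-6.0, -3.0, 0.0, 2.0, -1.0])    # hypothetical values
    snr_right = np.array([-2.0, -4.0, 3.0, -1.0, 1.0])
    for f, s in zip(bands_hz, better_ear_snr(snr_left, snr_right)):
        print(f"{f:>5} Hz: better-ear SNR = {s:+.1f} dB")
```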

  9. Prediction and constraint in audiovisual speech perception

    Science.gov (United States)

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported

  10. Prediction and constraint in audiovisual speech perception.

    Science.gov (United States)

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration

  11. Regulation of speech in multicultural societies

    NARCIS (Netherlands)

    Maussen, M.; Grillo, R.

    2015-01-01

    This book focuses on the way in which public debate and legal practice intersect when it comes to the value of free speech and the need to regulate "offensive", "blasphemous" or "hate" speech, especially, though not exclusively where such speech is thought to be offensive to members of ethnic and

  12. ACOUSTIC SPEECH RECOGNITION FOR MARATHI LANGUAGE USING SPHINX

    Directory of Open Access Journals (Sweden)

    Aman Ankit

    2016-09-01

    Full Text Available Speech recognition, or speech-to-text processing, is the process of recognizing human speech by computer and converting it into text. In speech recognition, transcripts are created by taking recordings of speech as audio and their text transcriptions. Speech-based applications which include Natural Language Processing (NLP) techniques are popular and an active area of research. Input to such applications is in natural language and output is obtained in natural language. Speech recognition mostly revolves around three approaches, namely the acoustic-phonetic approach, the pattern recognition approach and the artificial intelligence approach. Creation of an acoustic model requires a large database of speech and training algorithms. The output of an ASR system is recognition and translation of spoken language into text by computers and computerized devices. ASR today finds enormous application in tasks that require human-machine interfaces, such as voice dialing. Our key contribution in this paper is to create corpora for the Marathi language and explore the use of the Sphinx engine for automatic speech recognition.
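
    For orientation, decoding with CMU Sphinx through the pocketsphinx Python bindings might look roughly like the sketch below (pocketsphinx-python style API; argument names can differ across versions). The Marathi model, dictionary and audio paths are hypothetical placeholders: training those resources is the paper's actual contribution.

```python
# Sketch of decoding with CMU Sphinx through the pocketsphinx Python bindings
# (pocketsphinx-python style API; argument names can differ across versions).
# The Marathi acoustic model, language model, dictionary, and audio paths are
# hypothetical placeholders -- training such models is the paper's actual work.
from pocketsphinx.pocketsphinx import Decoder

config = Decoder.default_config()
config.set_string('-hmm', 'models/marathi_acoustic')   # trained acoustic model
config.set_string('-lm', 'models/marathi.lm')          # statistical language model
config.set_string('-dict', 'models/marathi.dict')      # pronunciation dictionary
decoder = Decoder(config)

# Decode a 16 kHz, 16-bit mono PCM recording.
decoder.start_utt()
with open('audio/utterance.raw', 'rb') as f:
    while True:
        buf = f.read(1024)
        if not buf:
            break
        decoder.process_raw(buf, False, False)
decoder.end_utt()

hypothesis = decoder.hyp()
print(hypothesis.hypstr if hypothesis else "<no hypothesis>")
```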

  13. Data-driven analysis of functional brain interactions during free listening to music and speech.

    Science.gov (United States)

    Fang, Jun; Hu, Xintao; Han, Junwei; Jiang, Xi; Zhu, Dajiang; Guo, Lei; Liu, Tianming

    2015-06-01

    Natural stimulus functional magnetic resonance imaging (N-fMRI) such as fMRI acquired when participants were watching video streams or listening to audio streams has been increasingly used to investigate functional mechanisms of the human brain in recent years. One of the fundamental challenges in functional brain mapping based on N-fMRI is to model the brain's functional responses to continuous, naturalistic and dynamic natural stimuli. To address this challenge, in this paper we present a data-driven approach to exploring functional interactions in the human brain during free listening to music and speech streams. Specifically, we model the brain responses using N-fMRI by measuring the functional interactions on large-scale brain networks with intrinsically established structural correspondence, and perform music and speech classification tasks to guide the systematic identification of consistent and discriminative functional interactions when multiple subjects were listening to music and speech in multiple categories. The underlying premise is that the functional interactions derived from N-fMRI data of multiple subjects should exhibit both consistency and discriminability. Our experimental results show that a variety of brain systems including attention, memory, auditory/language, emotion, and action networks are among the most relevant brain systems involved in classic music, pop music and speech differentiation. Our study provides an alternative approach to investigating the human brain's mechanism in comprehension of complex natural music and speech.
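
    The general analysis pattern can be sketched as follows: represent each listening session by its functional-interaction features (the upper triangle of an ROI-to-ROI correlation matrix) and classify music versus speech. The ROI time series below are synthetic and the classifier is a generic linear SVM, not the networks or pipeline used in the paper.

```python
# Sketch only: represent each listening session by its functional-interaction
# features (upper triangle of an ROI-to-ROI correlation matrix) and classify
# music vs. speech with a linear SVM. The ROI time series are synthetic, not
# the large-scale networks used in the paper.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_subjects, n_rois, n_timepoints = 40, 30, 200

def interaction_features(ts: np.ndarray) -> np.ndarray:
    """Vectorize the upper triangle of the ROI correlation matrix."""
    corr = np.corrcoef(ts)                     # ts: (n_rois, n_timepoints)
    iu = np.triu_indices(corr.shape[0], k=1)
    return corr[iu]

X, y = [], []
for label in (0, 1):                           # 0 = music, 1 = speech (synthetic)
    for _ in range(n_subjects):
        ts = rng.normal(size=(n_rois, n_timepoints))
        X.append(interaction_features(ts))
        y.append(label)

scores = cross_val_score(LinearSVC(max_iter=5000), np.array(X), np.array(y), cv=5)
print("CV accuracy:", scores.mean().round(2))   # ~chance on random data
```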

  14. Is Birdsong More Like Speech or Music?

    Science.gov (United States)

    Shannon, Robert V

    2016-04-01

    Music and speech share many acoustic cues, but not all are equally important. For example, harmonic pitch is essential for music but not for speech. When birds communicate, is their song more like speech or music? A new study contrasting pitch and spectral patterns shows that birds perceive their song more like humans perceive speech. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Effect of "developmental speech and language training through music" on speech production in children with autism spectrum disorders.

    Science.gov (United States)

    Lim, Hayoung A

    2010-01-01

    The study compared the effect of music training, speech training and no training on the verbal production of children with Autism Spectrum Disorders (ASD). Participants were 50 children with ASD, age range 3 to 5 years, who had previously been evaluated on standard tests of language and level of functioning. They were randomly assigned to one of three 3-day conditions. Participants in music training (n = 18) watched a music video containing 6 songs and pictures of the 36 target words; those in speech training (n = 18) watched a speech video containing 6 stories and pictures, and those in the control condition (n = 14) received no treatment. Participants' verbal production, including semantics, phonology, pragmatics, and prosody, was measured by an experimenter-designed verbal production evaluation scale. Results showed that participants in both music and speech training significantly increased their verbal production from pretest to posttest. Results also indicated that both high- and low-functioning participants improved their speech production after receiving either music or speech training; however, low-functioning participants showed a greater improvement after the music training than the speech training. Children with ASD perceive important linguistic information embedded in music stimuli organized by principles of pattern perception, and produce functional speech.

  16. Current trends in outcome studies for children with hearing loss and the need to establish a comprehensive framework of measuring outcomes in children with hearing loss in China

    Directory of Open Access Journals (Sweden)

    Xueman Liu

    2016-06-01

    Full Text Available Since the 1970s, outcome studies for children with hearing loss have expanded from assessing auditory awareness and speech perception skills to evaluating language and speech development. Since the early 2000s, multi-center, large-scale research has systematically studied outcomes in the areas of auditory awareness, speech perception, language development, speech development, educational achievement, cognitive development, and psychosocial development. These studies advocated the establishment of baseline and regular follow-up evaluations within a comprehensive framework centered on language development. Recent research interests also include understanding the vast differences in outcomes for children with hearing loss, understanding the relationships between neurocognitive development and language acquisition in children with hearing loss, and using outcome studies to guide evidence-based clinical practice. After the establishment of standardized Mandarin language assessments, outcomes research in Mainland China has the potential to expand beyond auditory awareness and speech perception studies.

  17. Atypical speech lateralization in adults with developmental coordination disorder demonstrated using functional transcranial Doppler ultrasound.

    Science.gov (United States)

    Hodgson, Jessica C; Hudson, John M

    2017-03-01

    Research using clinical populations to explore the relationship between hemispheric speech lateralization and handedness has focused on individuals with speech and language disorders, such as dyslexia or specific language impairment (SLI). Such work reveals atypical patterns of cerebral lateralization and handedness in these groups compared to controls. There are few studies that examine this relationship in people with motor coordination impairments but without speech or reading deficits, which is a surprising omission given the prevalence of theories suggesting a common neural network underlying both functions. We use an emerging imaging technique in cognitive neuroscience, functional transcranial Doppler (fTCD) ultrasound, to assess whether individuals with developmental coordination disorder (DCD) display reduced left-hemisphere lateralization for speech production compared to control participants. Twelve adult control participants and 12 adults with DCD, but no other developmental/cognitive impairments, performed a word-generation task whilst undergoing fTCD imaging to establish a hemispheric lateralization index for speech production. All participants also completed an electronic peg-moving task to determine hand skill. As predicted, the DCD group showed a significantly reduced left-lateralization pattern for the speech production task compared to controls. Performance on the motor skill task showed a clear preference for the dominant hand across both groups; however, the DCD group's mean movement times were significantly higher for the non-dominant hand. This is the first study of its kind to assess hand skill and speech lateralization in DCD. The results reveal a reduced leftwards asymmetry for speech and slower motor performance. This fits alongside previous work showing atypical cerebral lateralization in DCD for other cognitive processes (e.g., executive function and short-term memory) and thus speaks to debates on theories of the links between motor

  18. Speech Synthesis Applied to Language Teaching.

    Science.gov (United States)

    Sherwood, Bruce

    1981-01-01

    The experimental addition of speech output to computer-based Esperanto lessons using speech synthesized from text is described. Because of Esperanto's phonetic spelling and simple rhythm, it is particularly easy to describe the mechanisms of Esperanto synthesis. Attention is directed to how the text-to-speech conversion is performed and the ways…

  19. Private Speech in Ballet

    Science.gov (United States)

    Johnston, Dale

    2006-01-01

    Authoritarian teaching practices in ballet inhibit the use of private speech. This paper highlights the critical importance of private speech in the cognitive development of young ballet students, within what is largely a non-verbal art form. It draws upon research by Russian psychologist Lev Vygotsky and contemporary socioculturalists, to…

  20. Music training and speech perception: a gene-environment interaction.

    Science.gov (United States)

    Schellenberg, E Glenn

    2015-03-01

    Claims of beneficial side effects of music training are made for many different abilities, including verbal and visuospatial abilities, executive functions, working memory, IQ, and speech perception in particular. Such claims assume that music training causes the associations even though children who take music lessons are likely to differ from other children in music aptitude, which is associated with many aspects of speech perception. Music training in childhood is also associated with cognitive, personality, and demographic variables, and it is well established that IQ and personality are determined largely by genetics. Recent evidence also indicates that the role of genetics in music aptitude and music achievement is much larger than previously thought. In short, music training is an ideal model for the study of gene-environment interactions but far less appropriate as a model for the study of plasticity. Children seek out environments, including those with music lessons, that are consistent with their predispositions; such environments exaggerate preexisting individual differences. © 2015 New York Academy of Sciences.

  1. Pulse frequency in pulsed brachytherapy based on tissue repair kinetics

    International Nuclear Information System (INIS)

    Sminia, Peter; Schneider, Christoph J.; Koedooder, Kees; Tienhoven, Geertjan van; Blank, Leo E.C.M.; Gonzalez Gonzalez, Dionisio

    1998-01-01

    Purpose: Investigation of normal tissue sparing in pulsed brachytherapy (PB) relative to continuous low-dose rate irradiation (CLDR) by adjusting pulse frequency based on tissue repair characteristics. Method: Using the linear quadratic model, the relative effectiveness (RE) of a 20 Gy boost was calculated for tissue with an α/β ratio ranging from 2 to 10 Gy and a half-time of sublethal damage repair between 0.1 and 3 h. The boost dose was considered to be delivered either in a number of pulses varying from 2 to 25, or continuously at a dose rate of 0.50, 0.80, or 1.20 Gy/h. Results: The RE of 20 Gy was found to be identical for PB delivered in 25 hourly pulses of 0.80 Gy and for CLDR delivered at 0.80 Gy/h, for any α/β value and for repair half-times > 0.75 h. When normal tissue repair half-times are assumed to be longer than tumor repair half-times, normal tissue sparing can be obtained, within the restriction of a fixed overall treatment time, with a higher dose per pulse and a longer period time (the time elapsed between the start of pulse n and the start of pulse n + 1). An optimum relative normal tissue sparing larger than 10% was found with 4 pulses of 5 Gy every 8 h. Hence, a therapeutic gain might be obtained when changing from CLDR to PB by adjusting the physical dose in such a way that the biological dose to the tumor is maintained. The normal tissue-sparing phenomenon can be explained by an increase in RE with longer period time for tissue with a high α/β ratio and a fast or intermediate repair half-time, while the RE for tissue with a low α/β ratio and a long repair half-time remains almost constant. Conclusion: Within the framework of the LQ model, an advantage in normal tissue sparing is expected when matching the pulse frequency to the repair kinetics of the normal tissue exposed. A period time longer than 1 h may lead to a reduction of late normal tissue complications. This theoretical advantage emphasizes the need for better knowledge of human tissue-repair kinetics.
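    For reference, one widely used incomplete-repair LQ expression for the relative effectiveness of continuous irradiation at dose rate R over time T (a standard textbook form, stated here as an assumption rather than quoted from this record) is

        \mathrm{RE} = 1 + \frac{2R}{\mu\,(\alpha/\beta)} \left[ 1 - \frac{1 - e^{-\mu T}}{\mu T} \right],
        \qquad \mu = \frac{\ln 2}{T_{1/2}},
        \qquad \mathrm{BED} = \mathrm{RE} \cdot R\,T .

    Pulsed delivery is handled by the analogous expression with incomplete-repair terms accumulated between successive pulses, which is why the RE depends on the period time and on the repair half-time.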

  2. Speech versus singing: Infants choose happier sounds

    Directory of Open Access Journals (Sweden)

    Marieve Corbeil

    2013-06-01

    Full Text Available Infants prefer speech to non-vocal sounds and to non-human vocalizations, and they prefer happy-sounding speech to neutral speech. They also exhibit an interest in singing, but there is little knowledge of their relative interest in speech and singing. The present study explored infants' attention to unfamiliar audio samples of speech and singing. In Experiment 1, infants 4-13 months of age were exposed to happy-sounding infant-directed speech versus hummed lullabies by the same woman. They listened significantly longer to the speech, which had considerably greater acoustic variability and expressiveness, than to the lullabies. In Experiment 2, infants of comparable age who heard the lyrics of a Turkish children's song spoken versus sung in a joyful/happy manner did not exhibit differential listening. Infants in Experiment 3 heard the happily sung lyrics of the Turkish children's song versus a version that was spoken in an adult-directed or affectively neutral manner. They listened significantly longer to the sung version. Overall, happy voice quality rather than vocal mode (speech or singing) was the principal contributor to infant attention, regardless of age.

  3. Digitized Ethnic Hate Speech: Understanding Effects of Digital Media Hate Speech on Citizen Journalism in Kenya

    Directory of Open Access Journals (Sweden)

    Stephen Gichuhi Kimotho

    2016-06-01

    Full Text Available Ethnicity in Kenya permeates all spheres of life. However, it is in politics that ethnicity is most visible. Election time in Kenya often leads to ethnic competition and hatred, often expressed through various media. Ethnic hate speech characterized the 2007 general elections in party rallies and through text messages, emails, posters and leaflets. This resulted in widespread skirmishes that left over 1200 people dead and many displaced (KNHRC, 2008). In 2013, however, the new battle zone was the war of words on social media platforms. More than at any other time in Kenyan history, Kenyans poured vitriolic ethnic hate speech through digital media like Facebook, Twitter and blogs. Although scholars have studied the role and effects of mainstream media like television and radio in proliferating ethnic hate speech in Kenya (Michael Chege, 2008; Goldstein & Rotich, 2008a; Ismail & Deane, 2008; Jacqueline Klopp & Prisca Kamungi, 2007), little has been done in regard to social media. This paper investigated the nature of digitized hate speech by describing the forms of ethnic hate speech on social media in Kenya; the effects of ethnic hate speech on Kenyans' perception of ethnic entities; ethnic conflict; and the ethics of citizen journalism. This study adopted a descriptive interpretive design and utilized Austin's Speech Act Theory, which explains the use of language to achieve desired purposes and direct behaviour (Tarhom & Miracle, 2013). Content published between January and April 2013 from six purposefully identified blogs was analysed. Questionnaires were used to collect data from university students, as they form a good sample of the Kenyan population, are most active on social media and are drawn from all parts of the country. Qualitative data were analysed using NVIVO 10 software, while responses from the questionnaire were analysed using IBM SPSS version 21. The findings indicated that Facebook and Twitter were the main platforms used to

  4. Speech and nonspeech: What are we talking about?

    Science.gov (United States)

    Maas, Edwin

    2017-08-01

    Understanding of the behavioural, cognitive and neural underpinnings of speech production is of interest theoretically, and is important for understanding disorders of speech production and how to assess and treat such disorders in the clinic. This paper addresses two claims about the neuromotor control of speech production: (1) speech is subserved by a distinct, specialised motor control system and (2) speech is holistic and cannot be decomposed into smaller primitives. Both claims have gained traction in recent literature, and are central to a task-dependent model of speech motor control. The purpose of this paper is to stimulate thinking about speech production, its disorders and the clinical implications of these claims. The paper poses several conceptual and empirical challenges for these claims - including the critical importance of defining speech. The emerging conclusion is that a task-dependent model is called into question as its two central claims are founded on ill-defined and inconsistently applied concepts. The paper concludes with discussion of methodological and clinical implications, including the potential utility of diadochokinetic (DDK) tasks in assessment of motor speech disorders and the contraindication of nonspeech oral motor exercises to improve speech function.

  5. Noise-robust speech triage.

    Science.gov (United States)

    Bartos, Anthony L; Cipr, Tomas; Nelson, Douglas J; Schwarz, Petr; Banowetz, John; Jerabek, Ladislav

    2018-04-01

    A method is presented in which conventional speech algorithms are applied, with no modifications, to improve their performance in extremely noisy environments. It has been demonstrated that, for eigen-channel algorithms, pre-training multiple speaker identification (SID) models at a lattice of signal-to-noise-ratio (SNR) levels and then performing SID using the appropriate SNR dependent model was successful in mitigating noise at all SNR levels. In those tests, it was found that SID performance was optimized when the SNR of the testing and training data were close or identical. In this current effort multiple i-vector algorithms were used, greatly improving both processing throughput and equal error rate classification accuracy. Using identical approaches in the same noisy environment, performance of SID, language identification, gender identification, and diarization were significantly improved. A critical factor in this improvement is speech activity detection (SAD) that performs reliably in extremely noisy environments, where the speech itself is barely audible. To optimize SAD operation at all SNR levels, two algorithms were employed. The first maximized detection probability at low levels (-10 dB ≤ SNR < +10 dB) using just the voiced speech envelope, and the second exploited features extracted from the original speech to improve overall accuracy at higher quality levels (SNR ≥ +10 dB).
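    The routing step at the heart of this approach (pick the pre-trained model whose training SNR best matches the estimated SNR of the test segment) can be written in a few lines; the SNR grid, the crude power-based SNR estimator and the model interface below are illustrative assumptions, not the system described in the paper.

        # Sketch: route a test segment to the model trained at the nearest SNR level.
        import numpy as np

        SNR_GRID_DB = [-10, -5, 0, 5, 10, 20]          # lattice of training SNRs
        models = {snr: None for snr in SNR_GRID_DB}    # placeholders for pre-trained models

        def estimate_snr_db(segment, noise_floor):
            """Crude SNR estimate from average powers (illustrative only)."""
            p_noise = np.mean(noise_floor ** 2) + 1e-12
            p_signal = max(np.mean(segment ** 2) - p_noise, 1e-12)
            return 10.0 * np.log10(p_signal / p_noise)

        def select_model(segment, noise_floor):
            snr = estimate_snr_db(segment, noise_floor)
            nearest = min(SNR_GRID_DB, key=lambda s: abs(s - snr))
            return models[nearest], nearest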

  6. Specialization in audiovisual speech perception: a replication study

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Andersen, Tobias

    Speech perception is audiovisual, as evidenced by bimodal integration in the McGurk effect. This integration effect may be specific to speech or may apply to all stimuli in general. To investigate this, Tuomainen et al. (2005) used sine-wave speech, which naïve observers may perceive as non-speech but hear as speech once informed of the linguistic origin of the signal. Combinations of sine-wave speech and incongruent video of the talker elicited a McGurk effect only for informed observers. This indicates that the audiovisual integration effect is specific to speech perception. However, observers ... that observers did look near the mouth. We conclude that eye movements did not influence the results of Tuomainen et al. and that their results thus can be taken as evidence of a speech-specific mode of audiovisual integration underlying the McGurk illusion.

  7. Speech and Debate as Civic Education

    Science.gov (United States)

    Hogan, J. Michael; Kurr, Jeffrey A.; Johnson, Jeremy D.; Bergmaier, Michael J.

    2016-01-01

    In light of the U.S. Senate's designation of March 15, 2016 as "National Speech and Debate Education Day" (S. Res. 398, 2016), it only seems fitting that "Communication Education" devote a special section to the role of speech and debate in civic education. Speech and debate have been at the heart of the communication…

  8. Tuning Neural Phase Entrainment to Speech.

    Science.gov (United States)

    Falk, Simone; Lanzilotti, Cosima; Schön, Daniele

    2017-08-01

    Musical rhythm positively impacts on subsequent speech processing. However, the neural mechanisms underlying this phenomenon are so far unclear. We investigated whether carryover effects from a preceding musical cue to a speech stimulus result from a continuation of neural phase entrainment to periodicities that are present in both music and speech. Participants listened and memorized French metrical sentences that contained (quasi-)periodic recurrences of accents and syllables. Speech stimuli were preceded by a rhythmically regular or irregular musical cue. Our results show that the presence of a regular cue modulates neural response as estimated by EEG power spectral density, intertrial coherence, and source analyses at critical frequencies during speech processing compared with the irregular condition. Importantly, intertrial coherences for regular cues were indicative of the participants' success in memorizing the subsequent speech stimuli. These findings underscore the highly adaptive nature of neural phase entrainment across fundamentally different auditory stimuli. They also support current models of neural phase entrainment as a tool of predictive timing and attentional selection across cognitive domains.
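    Intertrial coherence, one of the measures named above, is simply the length of the mean resultant vector of the single-trial phases at the frequency of interest. A minimal sketch of that computation follows (the band, sampling rate, filter order and synthetic data are assumptions, not the study's pipeline).

        # Sketch: intertrial phase coherence (ITC) in a narrow band around f0.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def itc(trials, fs, f0, half_bw=0.5):
            """trials: (n_trials, n_samples) epochs -> ITC time course in [0, 1]."""
            b, a = butter(4, [(f0 - half_bw) / (fs / 2), (f0 + half_bw) / (fs / 2)], btype='band')
            phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
            return np.abs(np.mean(np.exp(1j * phases), axis=0))  # 1 = perfectly phase-locked

        # Synthetic example: 30 trials, 2 s at 250 Hz, partially phase-locked 4 Hz component.
        fs, f0 = 250, 4.0
        t = np.arange(0, 2, 1 / fs)
        rng = np.random.default_rng(1)
        trials = np.array([np.sin(2 * np.pi * f0 * t + 0.3 * rng.standard_normal())
                           + 0.5 * rng.standard_normal(t.size) for _ in range(30)])
        print(itc(trials, fs, f0).mean())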

  9. Speech perception as an active cognitive process

    Directory of Open Access Journals (Sweden)

    Shannon Heald

    2014-03-01

    Full Text Available One view of speech perception is that acoustic signals are transformed into representations for pattern matching to determine linguistic structure. This process can be taken as a statistical pattern-matching problem, assuming relatively stable linguistic categories are characterized by neural representations related to auditory properties of speech that can be compared to speech input. This kind of pattern matching can be termed a passive process, which implies rigidity of processing with few demands on cognitive processing. An alternative view is that speech recognition, even in early stages, is an active process in which speech analysis is attentionally guided. Note that this does not mean consciously guided, but that information-contingent changes in early auditory encoding can occur as a function of context and experience. Active processing assumes that attention, plasticity, and listening goals are important in considering how listeners cope with adverse circumstances that impair hearing, whether through masking noise in the environment or hearing loss. Although theories of speech perception have begun to incorporate some active processing, they seldom treat early speech encoding as plastic and attentionally guided. Recent research has suggested that speech perception is the product of both feedforward and feedback interactions between a number of brain regions that include descending projections perhaps as far downstream as the cochlea. It is important to understand how the ambiguity of the speech signal and constraints of context dynamically determine the cognitive resources recruited during perception, including focused attention, learning, and working memory. Theories of speech perception need to go beyond the current corticocentric approach in order to account for the intrinsic dynamics of the auditory encoding of speech. In doing so, this may provide new insights into ways in which hearing disorders and loss may be treated, either through augmentation or

  10. Audiovisual Speech Synchrony Measure: Application to Biometrics

    Directory of Open Access Journals (Sweden)

    Gérard Chollet

    2007-01-01

    Full Text Available Speech is a means of communication which is intrinsically bimodal: the audio signal originates from the dynamics of the articulators. This paper reviews recent works in the field of audiovisual speech, and more specifically techniques developed to measure the level of correspondence between audio and visual speech. It overviews the most common audio and visual speech front-end processing, transformations performed on audio, visual, or joint audiovisual feature spaces, and the actual measure of correspondence between audio and visual speech. Finally, the use of a synchrony measure for biometric identity verification based on talking faces is evaluated on the BANCA database.
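    In the spirit of the correspondence measures this review surveys (but not any specific technique from it), one very simple synchrony score is the peak lagged correlation between the audio energy envelope and a per-frame mouth-opening feature, both sampled at the video frame rate:

        # Sketch: audio-visual synchrony as the best lagged correlation between
        # an audio energy envelope and a mouth-region feature (illustrative only).
        import numpy as np

        def synchrony(audio_envelope, mouth_feature, max_lag=10):
            """Returns (best_correlation, best_lag_in_frames)."""
            assert len(audio_envelope) == len(mouth_feature) > max_lag
            a = (audio_envelope - audio_envelope.mean()) / (audio_envelope.std() + 1e-12)
            v = (mouth_feature - mouth_feature.mean()) / (mouth_feature.std() + 1e-12)
            best = (-np.inf, 0)
            for lag in range(-max_lag, max_lag + 1):
                if lag >= 0:
                    c = np.mean(a[lag:] * v[:len(v) - lag])
                else:
                    c = np.mean(a[:lag] * v[-lag:])
                best = max(best, (c, lag))
            return best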

  11. The motor theory of speech perception revisited.

    Science.gov (United States)

    Massaro, Dominic W; Chen, Trevor H

    2008-04-01

    Galantucci, Fowler, and Turvey (2006) have claimed that perceiving speech is perceiving gestures and that the motor system is recruited for perceiving speech. We make the counterargument that perceiving speech is not perceiving gestures, that the motor system is not recruited for perceiving speech, and that speech perception can be adequately described by a prototypical pattern recognition model, the fuzzy logical model of perception (FLMP). Empirical evidence taken as support for gesture and motor theory is reconsidered in more detail and in the framework of the FLMP. Additional theoretical and logical arguments are made to challenge gesture and motor theory.

  12. Commercial speech in crisis: Crisis Pregnancy Center regulations and definitions of commercial speech.

    Science.gov (United States)

    Gilbert, Kathryn E

    2013-02-01

    Recent attempts to regulate Crisis Pregnancy Centers, pseudoclinics that surreptitiously aim to dissuade pregnant women from choosing abortion, have confronted the thorny problem of how to define commercial speech. The Supreme Court has offered three potential answers to this definitional quandary. This Note uses the Crisis Pregnancy Center cases to demonstrate that courts should use one of these solutions, the factor-based approach of Bolger v. Youngs Drugs Products Corp., to define commercial speech in the Crisis Pregnancy Center cases and elsewhere. In principle and in application, the Bolger factor-based approach succeeds in structuring commercial speech analysis at the margins of the doctrine.

  13. Auditory Masking Effects on Speech Fluency in Apraxia of Speech and Aphasia: Comparison to Altered Auditory Feedback

    Science.gov (United States)

    Jacks, Adam; Haley, Katarina L.

    2015-01-01

    Purpose: To study the effects of masked auditory feedback (MAF) on speech fluency in adults with aphasia and/or apraxia of speech (APH/AOS). We hypothesized that adults with AOS would increase speech fluency when speaking with noise. Altered auditory feedback (AAF; i.e., delayed/frequency-shifted feedback) was included as a control condition not…

  14. LIBERDADE DE EXPRESSÃO E DISCURSO DO ÓDIO NO BRASIL / FREE SPEECH AND HATE SPEECH IN BRAZIL

    Directory of Open Access Journals (Sweden)

    Nevita Maria Pessoa de Aquino Franca Luna

    2014-12-01

    Full Text Available The purpose of this article is to analyze the restriction of free speech when it comes close to hate speech. In this perspective, the aim of this study is to answer the question: what is the understanding adopted by the Brazilian Supreme Court in cases involving the conflict between free speech and hate speech? The methodology combines a bibliographic review of the theoretical assumptions of the research (the concepts of free speech and hate speech, and the understanding of the rights of defense of traditionally discriminated minorities) with empirical research (documentary and jurisprudential analysis of cases judged by the American, German and Brazilian courts). Firstly, free speech is discussed, defining its meaning, content and purpose. Then, hate speech is identified as an inhibitor of free speech because it offends members of traditionally discriminated minorities, who are outnumbered or in a situation of cultural, socioeconomic or political subordination. Subsequently, some aspects of the American (negative freedom) and German (positive freedom) models are discussed, to demonstrate that different cultures adopt different legal solutions. In the end, it is concluded that the Brazilian understanding approximates the German doctrine, based on the analysis of landmark cases such as those of the publisher Siegfried Ellwanger (2003) and the Samba School Unidos do Viradouro (2008). This understanding, in Brazil, a multicultural country made up of different ethnicities, leads to a new process of defending minorities which, despite involving the collision of fundamental rights (dignity, equality and freedom), is still restrained by barriers incompatible with a contemporary pluralistic democracy.

  15. Speech production in amplitude-modulated noise

    DEFF Research Database (Denmark)

    Macdonald, Ewen N; Raufer, Stefan

    2013-01-01

    The Lombard effect refers to the phenomenon whereby talkers automatically increase their level of speech in a noisy environment. While many studies have characterized how the Lombard effect influences different measures of speech production (e.g., F0, spectral tilt, etc.), few have investigated ... the consequences of temporally fluctuating noise. In the present study, 20 talkers produced speech in a variety of noise conditions, including both steady-state and amplitude-modulated white noise. While listening to noise over headphones, talkers produced randomly generated five-word sentences. Similar ... of noisy environments and will alter their speech accordingly.

  16. Shared acoustic codes underlie emotional communication in music and speech-Evidence from deep transfer learning.

    Directory of Open Access Journals (Sweden)

    Eduardo Coutinho

    Full Text Available Music and speech exhibit striking similarities in the communication of emotions in the acoustic domain, in such a way that the communication of specific emotions is achieved, at least to a certain extent, by means of shared acoustic patterns. From an Affective Sciences point of view, determining the degree of overlap between both domains is fundamental to understanding the shared mechanisms underlying this phenomenon. From a Machine Learning perspective, the overlap between acoustic codes for emotional expression in music and speech opens new possibilities to enlarge the amount of data available for developing music and speech emotion recognition systems. In this article, we investigate time-continuous predictions of emotion (Arousal and Valence) in music and speech, and the Transfer Learning between these domains. We establish a comparative framework including intra-domain (i.e., models trained and tested on the same modality, either music or speech) and cross-domain experiments (i.e., models trained in one modality and tested on the other). In the cross-domain context, we evaluated two strategies: the direct transfer between domains, and the contribution of Transfer Learning techniques (feature-representation transfer based on Denoising Auto Encoders) for reducing the gap in the feature space distributions. Our results demonstrate excellent cross-domain generalisation performance with and without feature representation transfer in both directions. In the case of music, cross-domain approaches outperformed intra-domain models for Valence estimation, whereas for speech, intra-domain models achieved the best performance. This is the first demonstration of shared acoustic codes for emotional expression in music and speech in the time-continuous domain.
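    A minimal sketch of the feature-representation-transfer idea: train a denoising autoencoder on acoustic feature frames pooled from both domains and reuse its encoder as a shared representation for the downstream emotion models. Layer sizes, the noise level, the random stand-in data and the use of PyTorch are illustrative assumptions.

        # Sketch: denoising autoencoder for a shared music/speech feature representation.
        import torch
        import torch.nn as nn

        class DenoisingAE(nn.Module):
            def __init__(self, n_features=88, n_hidden=32, noise_std=0.1):
                super().__init__()
                self.noise_std = noise_std
                self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Tanh())
                self.decoder = nn.Linear(n_hidden, n_features)

            def forward(self, x):
                corrupted = x + self.noise_std * torch.randn_like(x)  # additive Gaussian corruption
                return self.decoder(self.encoder(corrupted))

        features = torch.randn(2048, 88)          # stand-in for frames pooled from music and speech
        model, loss_fn = DenoisingAE(), nn.MSELoss()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(50):
            opt.zero_grad()
            loss = loss_fn(model(features), features)  # reconstruct the clean input
            loss.backward()
            opt.step()
        shared_representation = model.encoder(features)  # reused by downstream emotion regressors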

  17. Free Speech Yearbook 1980.

    Science.gov (United States)

    Kane, Peter E., Ed.

    The 11 articles in this collection deal with theoretical and practical freedom of speech issues. The topics covered are (1) the United States Supreme Court and communication theory; (2) truth, knowledge, and a democratic respect for diversity; (3) denial of freedom of speech in Jock Yablonski's campaign for the presidency of the United Mine…

  18. Facial Speech Gestures: The Relation between Visual Speech Processing, Phonological Awareness, and Developmental Dyslexia in 10-Year-Olds

    Science.gov (United States)

    Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Friederici, Angela D.

    2016-01-01

    Successful communication in everyday life crucially involves the processing of auditory and visual components of speech. Viewing our interlocutor and processing visual components of speech facilitates speech processing by triggering auditory processing. Auditory phoneme processing, analyzed by event-related brain potentials (ERP), has been shown…

  19. Speech enhancement on smartphone voice recording

    International Nuclear Information System (INIS)

    Atmaja, Bagus Tris; Farid, Mifta Nur; Arifianto, Dhany

    2016-01-01

    Speech enhancement is a challenging task in audio signal processing: the aim is to enhance the quality of a targeted speech signal while suppressing other noise. Speech enhancement algorithms have developed rapidly, from spectral subtraction and Wiener filtering to the spectral amplitude MMSE estimator and Non-negative Matrix Factorization (NMF). The smartphone, as a revolutionary device, is now used in all aspects of life, including journalism, both personally and professionally. Although many smartphones have two microphones (main and rear), only the main microphone is widely used for voice recording, which is why the NMF algorithm is widely used for this kind of speech enhancement. This paper evaluates speech enhancement of smartphone voice recordings using the algorithms mentioned previously. We also extend the NMF algorithm to Kullback-Leibler NMF with supervised separation. The last algorithm shows improved results compared to the others in spectrogram and PESQ score evaluations. (paper)
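    A compact sketch of supervised Kullback-Leibler NMF separation in the spirit described above: speech and noise basis matrices are assumed to have been learned offline (random placeholders here), only the activations are updated on the mixture spectrogram, and the speech estimate is obtained with a Wiener-like mask. Matrix sizes and iteration counts are illustrative assumptions.

        # Sketch: supervised KL-NMF separation on a magnitude spectrogram V (freq x frames).
        import numpy as np

        def kl_nmf_activations(V, W, n_iter=100, eps=1e-12):
            """Fix the basis W (freq x components); update activations H with KL multiplicative rules."""
            H = np.abs(np.random.default_rng(0).standard_normal((W.shape[1], V.shape[1])))
            for _ in range(n_iter):
                WH = W @ H + eps
                H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)
            return H

        # Pre-trained bases (placeholders): 20 speech and 10 noise components, 257 frequency bins.
        W_speech = np.abs(np.random.default_rng(1).standard_normal((257, 20)))
        W_noise = np.abs(np.random.default_rng(2).standard_normal((257, 10)))
        W = np.hstack([W_speech, W_noise])

        V = np.abs(np.random.default_rng(3).standard_normal((257, 400)))  # mixture magnitudes
        H = kl_nmf_activations(V, W)
        mask = (W_speech @ H[:20]) / (W @ H + 1e-12)    # Wiener-like soft mask from the speech model
        speech_estimate = mask * V                      # masked mixture magnitudes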

  20. Hearing speech in music.

    Science.gov (United States)

    Ekström, Seth-Reino; Borg, Erik

    2011-01-01

    The masking effect of a piano composition, played at different speeds and in different octaves, on speech-perception thresholds was investigated in 15 normal-hearing and 14 moderately hearing-impaired subjects. Running speech (just follow conversation, JFC) testing and use of hearing aids increased the everyday validity of the findings. A comparison was made with standard audiometric noises [International Collegium of Rehabilitative Audiology (ICRA) noise and speech spectrum-filtered noise (SPN)]. All masking sounds, music or noise, were presented at the same equivalent sound level (50 dBA). The results showed a significant effect of piano performance speed and octave (P ...): low octave and fast tempo had the largest effect, and high octave and slow tempo the smallest. Music had a lower masking effect than did ICRA noise with two or six speakers at normal vocal effort (P ...). Music offers an interesting opportunity for studying masking under realistic conditions, where spectral and temporal features can be varied independently. The results have implications for composing music with vocal parts, designing acoustic environments and creating a balance between speech perception and privacy in social settings.

  1. Relationship Between Mandarin Speech Reception Thresholds and Pure-tone Thresholds in the Geriatric Population

    Directory of Open Access Journals (Sweden)

    Chih-Hung Chien

    2006-01-01

    Conclusion: This study established the agreement between Mandarin SRTs and PTTs in the low-tone area of the speech frequencies in the geriatric population. In clinical settings, the SRT test can be performed rapidly and easily and is relatively inexpensive. It is a vital indicator of the accuracy of PTT measurement.
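    A routine cross-check behind this kind of agreement study compares the SRT with the pure-tone average of the speech frequencies; the 0.5/1/2 kHz average and the ±10 dB criterion below are common clinical conventions, not values reported in this record.

        # Sketch: compare a speech reception threshold (SRT) with the pure-tone average (PTA).
        def pure_tone_average(thresholds_db):
            """thresholds_db: dict of pure-tone thresholds in dB HL keyed by frequency in Hz."""
            return sum(thresholds_db[f] for f in (500, 1000, 2000)) / 3.0

        def srt_agrees_with_pta(srt_db, thresholds_db, tolerance_db=10.0):
            return abs(srt_db - pure_tone_average(thresholds_db)) <= tolerance_db

        print(srt_agrees_with_pta(32, {500: 30, 1000: 35, 2000: 45}))  # True: |32 - 36.7| <= 10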

  2. Establishment of Human Neural Progenitor Cells from Human Induced Pluripotent Stem Cells with Diverse Tissue Origins

    Directory of Open Access Journals (Sweden)

    Hayato Fukusumi

    2016-01-01

    Full Text Available Human neural progenitor cells (hNPCs) have previously been generated from limited numbers of human induced pluripotent stem cell (hiPSC) clones. Here, 21 hiPSC clones derived from human dermal fibroblasts, cord blood cells, and peripheral blood mononuclear cells were differentiated using two neural induction methods: an embryoid body (EB) formation-based method and an EB formation method using dual SMAD inhibitors (dSMADi). Our results showed that expandable hNPCs could be generated from hiPSC clones with diverse somatic tissue origins. The established hNPCs exhibited a mid/hindbrain-type neural identity and uniform expression of neural progenitor genes.

  3. Segmental intelligibility of synthetic speech produced by rule.

    Science.gov (United States)

    Logan, J S; Greene, B G; Pisoni, D B

    1989-08-01

    This paper reports the results of an investigation that employed the modified rhyme test (MRT) to measure the segmental intelligibility of synthetic speech generated automatically by rule. Synthetic speech produced by ten text-to-speech systems was studied and compared to natural speech. A variation of the standard MRT was also used to study the effects of response set size on perceptual confusions. Results indicated that the segmental intelligibility scores formed a continuum. Several systems displayed very high levels of performance that were close to or equal to scores obtained with natural speech; other systems displayed substantially worse performance compared to natural speech. The overall performance of the best system, DECtalk--Paul, was equivalent to the data obtained with natural speech for consonants in syllable-initial position. The findings from this study are discussed in terms of the use of a set of standardized procedures for measuring intelligibility of synthetic speech under controlled laboratory conditions. Recent work investigating the perception of synthetic speech under more severe conditions in which greater demands are made on the listener's processing resources is also considered. The wide range of intelligibility scores obtained in the present study demonstrates important differences in perception and suggests that not all synthetic speech is perceptually equivalent to the listener.
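    Because the MRT is a closed-set task (six response alternatives per item), raw scores are usually adjusted for guessing before systems are compared. A small sketch of the standard n-alternative correction (the example numbers are made up):

        # Sketch: chance-corrected percent correct for a closed-set test such as the MRT.
        def corrected_percent_correct(n_right, n_items, n_alternatives=6):
            """Adjust for guessing: right minus wrong/(n-1), as a percentage of all items."""
            n_wrong = n_items - n_right
            adjusted = n_right - n_wrong / (n_alternatives - 1)
            return 100.0 * adjusted / n_items

        print(corrected_percent_correct(250, 300))  # a 300-item run with 250 correct -> 80.0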

  4. Segmental intelligibility of synthetic speech produced by rule

    Science.gov (United States)

    Logan, John S.; Greene, Beth G.; Pisoni, David B.

    2012-01-01

    This paper reports the results of an investigation that employed the modified rhyme test (MRT) to measure the segmental intelligibility of synthetic speech generated automatically by rule. Synthetic speech produced by ten text-to-speech systems was studied and compared to natural speech. A variation of the standard MRT was also used to study the effects of response set size on perceptual confusions. Results indicated that the segmental intelligibility scores formed a continuum. Several systems displayed very high levels of performance that were close to or equal to scores obtained with natural speech; other systems displayed substantially worse performance compared to natural speech. The overall performance of the best system, DECtalk—Paul, was equivalent to the data obtained with natural speech for consonants in syllable-initial position. The findings from this study are discussed in terms of the use of a set of standardized procedures for measuring intelligibility of synthetic speech under controlled laboratory conditions. Recent work investigating the perception of synthetic speech under more severe conditions in which greater demands are made on the listener’s processing resources is also considered. The wide range of intelligibility scores obtained in the present study demonstrates important differences in perception and suggests that not all synthetic speech is perceptually equivalent to the listener. PMID:2527884

  5. Empathy, Ways of Knowing, and Interdependence as Mediators of Gender Differences in Attitudes toward Hate Speech and Freedom of Speech

    Science.gov (United States)

    Cowan, Gloria; Khatchadourian, Desiree

    2003-01-01

    Women are more intolerant of hate speech than men. This study examined relationality measures as mediators of gender differences in the perception of the harm of hate speech and the importance of freedom of speech. Participants were 107 male and 123 female college students. Questionnaires assessed the perceived harm of hate speech, the importance…

  6. Speech enhancement theory and practice

    CERN Document Server

    Loizou, Philipos C

    2013-01-01

    With the proliferation of mobile devices and hearing devices, including hearing aids and cochlear implants, there is a growing and pressing need to design algorithms that can improve speech intelligibility without sacrificing quality. Responding to this need, Speech Enhancement: Theory and Practice, Second Edition introduces readers to the basic problems of speech enhancement and the various algorithms proposed to solve these problems. Updated and expanded, this second edition of the bestselling textbook broadens its scope to include evaluation measures and enhancement algorithms aimed at impr

  7. Recognizing emotional speech in Persian: a validated database of Persian emotional speech (Persian ESD).

    Science.gov (United States)

    Keshtiari, Niloofar; Kuhlmann, Michael; Eslami, Moharram; Klann-Delius, Gisela

    2015-03-01

    Research on emotional speech often requires valid stimuli for assessing perceived emotion through prosody and lexical content. To date, no comprehensive emotional speech database for Persian is officially available. The present article reports the process of designing, compiling, and evaluating a comprehensive emotional speech database for colloquial Persian. The database contains a set of 90 validated novel Persian sentences classified in five basic emotional categories (anger, disgust, fear, happiness, and sadness), as well as a neutral category. These sentences were validated in two experiments by a group of 1,126 native Persian speakers. The sentences were articulated by two native Persian speakers (one male, one female) in three conditions: (1) congruent (emotional lexical content articulated in a congruent emotional voice), (2) incongruent (neutral sentences articulated in an emotional voice), and (3) baseline (all emotional and neutral sentences articulated in neutral voice). The speech materials comprise about 470 sentences. The validity of the database was evaluated by a group of 34 native speakers in a perception test. Utterances recognized better than five times chance performance (71.4 %) were regarded as valid portrayals of the target emotions. Acoustic analysis of the valid emotional utterances revealed differences in pitch, intensity, and duration, attributes that may help listeners to correctly classify the intended emotion. The database is designed to be used as a reliable material source (for both text and speech) in future cross-cultural or cross-linguistic studies of emotional speech, and it is available for academic research purposes free of charge. To access the database, please contact the first author.

  8. Imitation and speech: commonalities within Broca's area.

    Science.gov (United States)

    Kühn, Simone; Brass, Marcel; Gallinat, Jürgen

    2013-11-01

    The so-called embodiment of communication has attracted considerable interest. Recently a growing number of studies have proposed a link between Broca's area's involvement in action processing and its involvement in speech. The present quantitative meta-analysis set out to test whether neuroimaging studies on imitation and overt speech show overlap within the inferior frontal gyrus. By means of activation likelihood estimation (ALE), we investigated the concurrence of brain regions activated by object-free hand imitation studies as well as overt speech studies including simple syllable and more complex word production. We found direct overlap between imitation and speech in bilateral pars opercularis (BA 44) within Broca's area. Subtraction analyses revealed no unique localization for either speech or imitation. To verify the potential of ALE subtraction analysis to detect unique involvement within Broca's area, we contrasted the results of a meta-analysis on motor inhibition and imitation and found separable regions involved for imitation. This is the first meta-analysis to compare the neural correlates of imitation and overt speech. The results are in line with the proposed evolutionary roots of speech in imitation.

  9. Design and realisation of an audiovisual speech activity detector

    NARCIS (Netherlands)

    Van Bree, K.C.

    2006-01-01

    For many speech telecommunication technologies a robust speech activity detector is important. An audio-only speech detector will give false positives when the interfering signal is speech or has speech characteristics. The video modality is suitable for solving this problem. In this report the approach

  10. Utility of TMS to understand the neurobiology of speech

    Directory of Open Access Journals (Sweden)

    Takenobu Murakami

    2013-07-01

    Full Text Available According to a traditional view, speech perception and production are processed largely separately in sensory and motor brain areas. Recent psycholinguistic and neuroimaging studies provide novel evidence that the sensory and motor systems dynamically interact in speech processing, by demonstrating that speech perception and imitation share regional brain activations. However, the exact nature and mechanisms of these sensorimotor interactions are not completely understood yet. Transcranial magnetic stimulation (TMS) has often been used in the cognitive neurosciences, including speech research, as a complementary technique to behavioral and neuroimaging studies. Here we provide an up-to-date review focusing on TMS studies that explored speech perception and imitation. Single-pulse TMS of the primary motor cortex (M1) demonstrated a speech-specific and somatotopically specific increase of excitability of the M1 lip area during speech perception (listening to speech or lip reading). A paired-coil TMS approach showed increases in effective connectivity from brain regions that are involved in speech processing to the M1 lip area when listening to speech. TMS in virtual lesion mode applied to speech processing areas modulated performance of phonological recognition and imitation of perceived speech. In summary, TMS is an innovative tool to investigate processing of speech perception and imitation. TMS studies have provided strong evidence that the sensory system is critically involved in mapping sensory input onto motor output and that the motor system plays an important role in speech perception.

  11. LinguaTag: an Emotional Speech Analysis Application

    OpenAIRE

    Cullen, Charlie; Vaughan, Brian; Kousidis, Spyros

    2008-01-01

    The analysis of speech, particularly for emotional content, is an open area of current research. Ongoing work has developed an emotional speech corpus for analysis, and defined a vowel stress method by which this analysis may be performed. This paper documents the development of LinguaTag, an open source speech analysis software application which implements this vowel stress emotional speech analysis method developed as part of research into the acoustic and linguistic correlates of emotional...

  12. Correlational Analysis of Speech Intelligibility Tests and Metrics for Speech Transmission

    Science.gov (United States)

    2017-12-04

    ... sounds are more prone to masking than the high-energy, wide-spectrum vowels. Such contaminated speech is still audible but not clear. Thus, speech ...

  13. Impairments of speech fluency in Lewy body spectrum disorder.

    Science.gov (United States)

    Ash, Sharon; McMillan, Corey; Gross, Rachel G; Cook, Philip; Gunawardena, Delani; Morgan, Brianna; Boller, Ashley; Siderowf, Andrew; Grossman, Murray

    2012-03-01

    Few studies have examined connected speech in demented and non-demented patients with Parkinson's disease (PD). We assessed the speech production of 35 patients with Lewy body spectrum disorder (LBSD), including non-demented PD patients, patients with PD dementia (PDD), and patients with dementia with Lewy bodies (DLB), in a semi-structured narrative speech sample in order to characterize impairments of speech fluency and to determine the factors contributing to reduced speech fluency in these patients. Both demented and non-demented PD patients exhibited reduced speech fluency, characterized by reduced overall speech rate and long pauses between sentences. Reduced speech rate in LBSD correlated with measures of between-utterance pauses, executive functioning, and grammatical comprehension. Regression analyses related non-fluent speech, grammatical difficulty, and executive difficulty to atrophy in frontal brain regions. These findings indicate that multiple factors contribute to slowed speech in LBSD, and this is mediated in part by disease in frontal brain regions. Copyright © 2011 Elsevier Inc. All rights reserved.

  14. Cognitive functions in Childhood Apraxia of Speech

    NARCIS (Netherlands)

    Nijland, L.; Terband, H.; Maassen, B.

    2015-01-01

    Purpose: Childhood Apraxia of Speech (CAS) is diagnosed on the basis of specific speech characteristics, in the absence of problems in hearing, intelligence, and language comprehension. This does not preclude the possibility that children with this speech disorder might demonstrate additional

  15. Subjective Quality Measurement of Speech Its Evaluation, Estimation and Applications

    CERN Document Server

    Kondo, Kazuhiro

    2012-01-01

    It is becoming crucial to accurately estimate and monitor speech quality in various ambient environments to guarantee high-quality speech communication. This practical hands-on book presents speech intelligibility measurement methods so that readers can start measuring or estimating the speech intelligibility of their own systems. The book also introduces subjective and objective speech quality measures, and describes in detail speech intelligibility measurement methods. It introduces a diagnostic rhyme test which uses rhyming word-pairs, and includes: an investigation into the effect of word familiarity on speech intelligibility; speech intelligibility measurement of localized speech in virtual 3-D acoustic space using the rhyme test; and estimation of speech intelligibility using objective measures, including the ITU-standard PESQ measures and automatic speech recognizers.

  16. Comparison of two speech privacy measurements, articulation index (AI) and speech privacy noise isolation class (NIC'), in open workplaces

    Science.gov (United States)

    Yoon, Heakyung C.; Loftness, Vivian

    2002-05-01

    Lack of speech privacy has been reported to be the main source of dissatisfaction among occupants in open workplaces, according to workplace surveys. Two speech privacy measurements, the Articulation Index (AI), standardized by the American National Standards Institute in 1969, and the Speech Privacy Noise Isolation Class (NIC', Noise Isolation Class Prime), adapted from the Noise Isolation Class (NIC) by the U.S. General Services Administration (GSA) in 1979, have been claimed as objective tools to measure speech privacy in open offices. To evaluate which of them, normal privacy for AI or satisfied privacy for NIC', is a better tool for assessing speech privacy in a dynamic open office environment, measurements were taken in the field. AI and NIC' values for different partition heights and workplace configurations were measured following ASTM E1130 (Standard Test Method for Objective Measurement of Speech Privacy in Open Offices Using Articulation Index) and GSA tests PBS-C.1 (Method for the Direct Measurement of Speech-Privacy Potential (SPP) Based on Subjective Judgments) and PBS-C.2 (Public Building Service Standard Method of Test for the Sufficient Verification of Speech-Privacy Potential (SPP) Based on Objective Measurements Including Methods for the Rating of Functional Interzone Attenuation and NC-Background), respectively.
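    The Articulation Index side of this comparison boils down to weighting each frequency band by its importance and by how far its signal-to-noise ratio sits within a roughly 30 dB useful range. The sketch below illustrates only that general form; the band list, weights and clipping limits are illustrative assumptions, not the ANSI 1969 procedure's values.

        # Sketch: band-importance-weighted articulation index from per-band SNRs.
        def articulation_index(band_snr_db, band_weights, snr_floor=-12.0, snr_range=30.0):
            """band_snr_db and band_weights: equal-length sequences; weights should sum to 1."""
            ai = 0.0
            for snr, w in zip(band_snr_db, band_weights):
                contribution = (snr - snr_floor) / snr_range   # 0 at the floor, 1 at floor + range
                ai += w * min(max(contribution, 0.0), 1.0)
            return ai

        weights = [0.10, 0.20, 0.30, 0.25, 0.15]      # made-up band importance weights
        snrs_db = [5.0, 2.0, -3.0, -8.0, -15.0]       # made-up measured band SNRs
        print(round(articulation_index(snrs_db, weights), 2))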

  17. SPEECH VISUALIZATION SISTEM AS A BASIS FOR SPEECH TRAINING AND COMMUNICATION AIDS

    Directory of Open Access Journals (Sweden)

    Oliana KRSTEVA

    1997-09-01

    Full Text Available One receives much more information through the visual sense than through the tactile sense. However, most visual aids for hearing-impaired persons are not wearable, because it is difficult to make them compact, and it is not desirable for such aids to occupy the user's vision at all times. In general, it is difficult to obtain integrated patterns by a single mathematical transform of the signal, such as a Fourier transform. To obtain an integrated pattern, speech parameters should be carefully extracted by an analysis suited to each parameter, and a visual pattern that can be intuitively understood by anyone must be synthesized from them. Successful integration of speech parameters will not disturb the understanding of individual features, so the system can be used both for speech training and for communication.

  18. SUSTAINABILITY IN THE BOWELS OF SPEECHES

    Directory of Open Access Journals (Sweden)

    Jadir Mauro Galvao

    2012-10-01

    Full Text Available The theme of sustainability has not yet become an integral part of the theoretical repertoire that underlies our most everyday actions, although it occasionally visits our thoughts and permeates many of our speeches. The big event of 2012, the Rio+20 meeting, gathered glances from all corners of the planet around this burning theme, yet we still move forward timidly. Although it is not very clear to us what the term sustainability encompasses, it does not sound entirely strange: we associate it with things like ecology, the planet, the waste emitted by factory smokestacks, deforestation, recycling and global warming. Our goal in this article, however, is less to clarify the term conceptually and more to observe how it appears in the speeches of such a conference. When the competent authorities talk about sustainability, what do they relate it to? We intend to investigate, in the lines and between the lines of these speeches, the assumptions associated with the term. Therefore, we will analyze the speech of the People's Summit, the opening speech of President Dilma, and the emblematic speech of the President of Uruguay, José Pepe Mujica.

  19. Modeling speech intelligibility in adverse conditions

    DEFF Research Database (Denmark)

    Dau, Torsten

    2012-01-01

    Hearing-impaired listeners often have difficulty understanding speech when more than one person is talking, even when reduced audibility has been fully compensated for by a hearing aid. The reasons for these difficulties are not well understood. This presentation highlights recent concepts of the monaural and binaural signal processing strategies employed ... by the normal as well as the impaired auditory system. Jørgensen and Dau [(2011). J. Acoust. Soc. Am. 130, 1475-1487] proposed the speech-based envelope power spectrum model (sEPSM) in an attempt to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII) in conditions with nonlinearly processed speech. Instead of considering the reduction of the temporal modulation energy as the intelligibility metric, as assumed in the STI, the sEPSM applies the signal-to-noise ratio in the envelope domain (SNRenv). This metric was shown to be the key for predicting ...
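    The metric named above can be written compactly. With P_env,S+N and P_env,N denoting the envelope power of the noisy speech and of the noise alone in audio-frequency channel i and modulation filter j, a commonly quoted form (a sketch of the published model, not a quotation from this record) is

        \mathrm{SNR}_{\mathrm{env}}^{(i,j)} = \frac{P_{\mathrm{env},S+N}^{(i,j)} - P_{\mathrm{env},N}^{(i,j)}}{P_{\mathrm{env},N}^{(i,j)}},
        \qquad
        \mathrm{SNR}_{\mathrm{env}} = \Bigg( \sum_{i,j} \Big[ \mathrm{SNR}_{\mathrm{env}}^{(i,j)} \Big]^{2} \Bigg)^{1/2},

    with the overall SNRenv then mapped to percent correct through an ideal-observer stage.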

  20. Parent-child interaction in motor speech therapy.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Jethava, Vibhuti; Pukonen, Margit; Huynh, Anna; Goshulak, Debra; Kroll, Robert; van Lieshout, Pascal

    2018-01-01

    This study measures the reliability and sensitivity of a modified Parent-Child Interaction Observation scale (PCIOs) used to monitor the quality of parent-child interaction. The scale is part of a home-training program employed with direct motor speech intervention for children with speech sound disorders. Eighty-four preschool-age children with speech sound disorders were provided either high-intensity (2×/week/10 weeks) or low-intensity (1×/week/10 weeks) motor speech intervention. Clinicians completed the PCIOs at the beginning, middle, and end of treatment. Inter-rater reliability (Kappa scores) was determined by an independent speech-language pathologist who assessed videotaped sessions at the midpoint of the treatment block. Intervention sensitivity of the scale was evaluated using a Friedman test for each item and then followed up with Wilcoxon pairwise comparisons where appropriate. We obtained fair-to-good inter-rater reliability (Kappa = 0.33-0.64) for the PCIOs using only video-based scoring. Child-related items were more strongly influenced by differences in treatment intensity than parent-related items, where a greater number of sessions positively influenced parent learning of treatment skills and child behaviors. The adapted PCIOs is reliable and sensitive for monitoring the quality of parent-child interactions in a 10-week block of motor speech intervention with adjunct home therapy. Implications for rehabilitation: Parent-centered therapy is considered a cost-effective method of speech and language service delivery. However, parent-centered models may be difficult to implement for treatments such as developmental motor speech interventions that require a high degree of skill and training. For children with speech sound disorders and motor speech difficulties, a translated and adapted version of the parent-child observation scale was found to be sufficiently reliable and sensitive to assess changes in the quality of the parent-child interactions during

  1. Speech-enabled Computer-aided Translation

    DEFF Research Database (Denmark)

    Mesa-Lao, Bartolomé

    2014-01-01

    The present study has surveyed post-editor trainees' views and attitudes before and after the introduction of speech technology as a front end to a computer-aided translation workbench. The aim of the survey was (i) to identify attitudes and perceptions among post-editor trainees before performing ... a post-editing task using automatic speech recognition (ASR); and (ii) to assess the degree to which post-editors' attitudes and expectations to the use of speech technology changed after actually using it. The survey was based on two questionnaires: the first one administered before the participants

  2. Comment on "Monkey vocal tracts are speech-ready".

    Science.gov (United States)

    Lieberman, Philip

    2017-07-01

    Monkey vocal tracts are capable of producing monkey speech, not the full range of articulate human speech. The evolution of human speech entailed both anatomy and brains. Fitch, de Boer, Mathur, and Ghazanfar in Science Advances claim that "monkey vocal tracts are speech-ready," and conclude that "…the evolution of human speech capabilities required neural change rather than modifications of vocal anatomy." Neither premise is consistent either with the data presented and the conclusions reached by de Boer and Fitch themselves in their own published papers on the role of anatomy in the evolution of human speech or with the body of independent studies published since the 1950s.

  3. Ultra low bit-rate speech coding

    CERN Document Server

    Ramasubramanian, V

    2015-01-01

    "Ultra Low Bit-Rate Speech Coding" focuses on the specialized topic of speech coding at very low bit-rates of 1 Kbits/sec and less, particularly at the lower ends of this range, down to 100 bps. The authors set forth the fundamental results and trends that form the basis for such ultra low bit-rates to be viable and provide a comprehensive overview of various techniques and systems in literature to date, with particular attention to their work in the paradigm of unit-selection based segment quantization. The book is for research students, academic faculty and researchers, and industry practitioners in the areas of speech processing and speech coding.

  4. Speech Motor Development in Childhood Apraxia of Speech : Generating Testable Hypotheses by Neurocomputational Modeling

    NARCIS (Netherlands)

    Terband, H.; Maassen, B.

    2010-01-01

    Childhood apraxia of speech (CAS) is a highly controversial clinical entity, with respect to both clinical signs and underlying neuromotor deficit. In the current paper, we advocate a modeling approach in which a computational neural model of speech acquisition and production is utilized in order to

  5. Speech motor development in childhood apraxia of speech: generating testable hypotheses by neurocomputational modeling.

    NARCIS (Netherlands)

    Terband, H.R.; Maassen, B.A.M.

    2010-01-01

    Childhood apraxia of speech (CAS) is a highly controversial clinical entity, with respect to both clinical signs and underlying neuromotor deficit. In the current paper, we advocate a modeling approach in which a computational neural model of speech acquisition and production is utilized in order to

  6. Between-Word Simplification Patterns in the Continuous Speech of Children with Speech Sound Disorders

    Science.gov (United States)

    Klein, Harriet B.; Liu-Shea, May

    2009-01-01

    Purpose: This study was designed to identify and describe between-word simplification patterns in the continuous speech of children with speech sound disorders. It was hypothesized that word combinations would reveal phonological changes that were unobserved with single words, possibly accounting for discrepancies between the intelligibility of…

  7. Effects of Synthetic Speech Output on Requesting and Natural Speech Production in Children with Autism: A Preliminary Study

    Science.gov (United States)

    Schlosser, Ralf W.; Sigafoos, Jeff; Luiselli, James K.; Angermeier, Katie; Harasymowyz, Ulana; Schooley, Katherine; Belfiore, Phil J.

    2007-01-01

    Requesting is often taught as an initial target during augmentative and alternative communication intervention in children with autism. Speech-generating devices are purported to have advantages over non-electronic systems due to their synthetic speech output. On the other hand, it has been argued that speech output, being in the auditory…

  8. Speech auditory brainstem response (speech ABR) characteristics depending on recording conditions, and hearing status: an experimental parametric study.

    Science.gov (United States)

    Akhoun, Idrick; Moulin, Annie; Jeanvoine, Arnaud; Ménard, Mikael; Buret, François; Vollaire, Christian; Scorretti, Riccardo; Veuillet, Evelyne; Berger-Vachon, Christian; Collet, Lionel; Thai-Van, Hung

    2008-11-15

    Speech elicited auditory brainstem responses (Speech ABR) have been shown to be an objective measurement of speech processing in the brainstem. Given the simultaneous stimulation and recording, and the similarities between the recording and the speech stimulus envelope, there is a great risk of artefactual recordings. This study sought to systematically investigate the sources of artefactual contamination in Speech ABR responses. In a first part, we measured the sound level thresholds over which artefactual responses were obtained, for different types of transducers and experimental setup parameters. A watermelon model was used to model the human head's susceptibility to electromagnetic artefact. It was found that impedances between the electrodes had a great effect on electromagnetic susceptibility and that the most prominent artefact is due to the transducer's electromagnetic leakage. The only artefact-free condition was obtained with insert-earphones shielded in a Faraday cage linked to common ground. In a second part of the study, using the previously defined artefact-free condition, we recorded Speech ABR in unilaterally deaf subjects and in bilaterally normal-hearing subjects. In an additional control condition, Speech ABR was recorded with the insert-earphones used to deliver the stimulation unplugged from the ears, so that the subjects did not perceive the stimulus. No responses were obtained from the deaf ear of unilaterally hearing-impaired subjects, nor in the insert-out-of-the-ear condition in any of the subjects, showing that Speech ABR reflects the functioning of the auditory pathways.

  9. The selective role of premotor cortex in speech perception: a contribution to phoneme judgements but not speech comprehension.

    Science.gov (United States)

    Krieger-Redwood, Katya; Gaskell, M Gareth; Lindsay, Shane; Jefferies, Elizabeth

    2013-12-01

    Several accounts of speech perception propose that the areas involved in producing language are also involved in perceiving it. In line with this view, neuroimaging studies show activation of premotor cortex (PMC) during phoneme judgment tasks; however, there is debate about whether speech perception necessarily involves motor processes, across all task contexts, or whether the contribution of PMC is restricted to tasks requiring explicit phoneme awareness. Some aspects of speech processing, such as mapping sounds onto meaning, may proceed without the involvement of motor speech areas if PMC specifically contributes to the manipulation and categorical perception of phonemes. We applied TMS to three sites (PMC, posterior superior temporal gyrus, and occipital pole) and, for the first time within the TMS literature, directly contrasted two speech perception tasks that required explicit phoneme decisions and mapping of speech sounds onto semantic categories, respectively. TMS to PMC disrupted explicit phonological judgments but not access to meaning for the same speech stimuli. TMS to two further sites confirmed that this pattern was site specific and did not reflect a generic difference in the susceptibility of our experimental tasks to TMS: stimulation of pSTG, a site involved in auditory processing, disrupted performance in both language tasks, whereas stimulation of occipital pole had no effect on performance in either task. These findings demonstrate that, although PMC is important for explicit phonological judgments, crucially, PMC is not necessary for mapping speech onto meanings.

  10. Mobile speech and advanced natural language solutions

    CERN Document Server

    Markowitz, Judith

    2013-01-01

    Mobile Speech and Advanced Natural Language Solutions provides a comprehensive and forward-looking treatment of natural speech in the mobile environment. This fourteen-chapter anthology brings together lead scientists from Apple, Google, IBM, AT&T, Yahoo! Research and other companies, along with academicians, technology developers and market analysts.  They analyze the growing markets for mobile speech, new methodological approaches to the study of natural language, empirical research findings on natural language and mobility, and future trends in mobile speech.  Mobile Speech opens with a challenge to the industry to broaden the discussion about speech in mobile environments beyond the smartphone, to consider natural language applications across different domains.   Among the new natural language methods introduced in this book are Sequence Package Analysis, which locates and extracts valuable opinion-related data buried in online postings; microintonation as a way to make TTS truly human-like; and se...

  11. Tissue

    Directory of Open Access Journals (Sweden)

    David Morrissey

    2012-01-01

    Purpose. In vivo gene therapy directed at tissues of mesenchymal origin could potentially augment healing. We aimed to assess the duration and magnitude of transgene expression in vivo in mice and ex vivo in human tissues. Methods. Using bioluminescence imaging, plasmid and adenoviral vector-based transgene expression in murine quadriceps in vivo was examined. Temporal control was assessed using a doxycycline-inducible system. An ex vivo model was developed and optimised using murine tissue, and applied in ex vivo human tissue. Results. In vivo plasmid-based transgene expression was not silenced in murine muscle, unlike in liver. Although maximum luciferase expression was higher in muscle with adenoviral delivery than with plasmid delivery, expression reduced over time. The inducible promoter cassette successfully regulated gene expression, with maximum levels a factor of 11 greater than baseline. Expression was re-induced to a similar level on a temporal basis. Luciferase expression was readily detected ex vivo in human muscle and tendon. Conclusions. Plasmid constructs resulted in long-term in vivo gene expression in skeletal muscle, in a controllable fashion utilising an inducible promoter in combination with oral agents. Successful plasmid gene transfection in human ex vivo mesenchymal tissue was demonstrated for the first time.

  12. Monkey Lipsmacking Develops Like the Human Speech Rhythm

    Science.gov (United States)

    Morrill, Ryan J.; Paukner, Annika; Ferrari, Pier F.; Ghazanfar, Asif A.

    2012-01-01

    Across all languages studied to date, audiovisual speech exhibits a consistent rhythmic structure. This rhythm is critical to speech perception. Some have suggested that the speech rhythm evolved "de novo" in humans. An alternative account--the one we explored here--is that the rhythm of speech evolved through the modification of rhythmic facial…

  13. Understanding the Linguistic Characteristics of the Great Speeches

    OpenAIRE

    Mouritzen, Kristian

    2016-01-01

    This dissertation attempts to find the common traits of great speeches. It does so by closely examining the language of some of the most well-known speeches in the world. These speeches are presented in the book Speeches that Changed the World (2006) by Simon Sebag Montefiore. The dissertation specifically looks at four variables: the beginnings and endings of the speeches, the use of passive voice, the use of personal pronouns and the difficulty of the language. These four variables are based on...

  14. Speech spectrum envelope modeling

    Czech Academy of Sciences Publication Activity Database

    Vích, Robert; Vondra, Martin

    Vol. 4775 (2007), pp. 129-137. ISSN 0302-9743. [COST Action 2102 International Workshop. Vietri sul Mare, 29.03.2007-31.03.2007] R&D Projects: GA AV ČR(CZ) 1ET301710509 Institutional research plan: CEZ:AV0Z20670512 Keywords: speech * speech processing * cepstral analysis Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering Impact factor: 0.302, year: 2005

  15. Speech emotion recognition methods: A literature review

    Science.gov (United States)

    Basharirad, Babak; Moradhaseli, Mohammadreza

    2017-10-01

    Recently, research attention to emotional speech signals has grown in human-machine interfaces due to the availability of high computational capability. Many systems have been proposed in the literature to identify emotional states through speech. Selection of suitable feature sets, design of proper classification methods, and preparation of an appropriate dataset are the main key issues of speech emotion recognition systems. This paper critically analyzes the currently available approaches to speech emotion recognition based on three evaluation parameters (feature set, classification of features, and accuracy). In addition, it evaluates the performance and limitations of available methods, and highlights promising directions for the improvement of speech emotion recognition systems.
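
    As a purely illustrative sketch of the kind of pipeline such reviews survey (feature extraction followed by a classifier), the following Python example uses mean MFCC features and a support vector machine; the file names, labels, and parameters are hypothetical and are not drawn from the paper.

        import numpy as np
        import librosa
        from sklearn.svm import SVC

        def mfcc_features(path, n_mfcc=13):
            # Load an utterance and summarise its MFCCs by their mean over time,
            # a simple fixed-length feature vector often used as a baseline.
            signal, sr = librosa.load(path, sr=16000)
            mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
            return mfcc.mean(axis=1)

        # Hypothetical labelled training clips (replace with a real emotion corpus).
        train_paths = ["clip_001.wav", "clip_002.wav", "clip_003.wav", "clip_004.wav"]
        train_labels = ["angry", "neutral", "happy", "sad"]

        X_train = np.array([mfcc_features(p) for p in train_paths])
        clf = SVC(kernel="rbf").fit(X_train, train_labels)

        # Predict the emotional state of a new utterance.
        print(clf.predict([mfcc_features("clip_new.wav")]))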

  16. Seeing the talker's face supports executive processing of speech in steady state noise.

    Science.gov (United States)

    Mishra, Sushmit; Lunner, Thomas; Stenfelt, Stefan; Rönnberg, Jerker; Rudner, Mary

    2013-01-01

    Listening to speech in noise depletes cognitive resources, affecting speech processing. The present study investigated how remaining resources or cognitive spare capacity (CSC) can be deployed by young adults with normal hearing. We administered a test of CSC (CSCT; Mishra et al., 2013) along with a battery of established cognitive tests to 20 participants with normal hearing. In the CSCT, lists of two-digit numbers were presented with and without visual cues in quiet, as well as in steady-state and speech-like noise at a high intelligibility level. In low load conditions, two numbers were recalled according to instructions inducing executive processing (updating, inhibition) and in high load conditions the participants were additionally instructed to recall one extra number, which was always the first item in the list. In line with previous findings, results showed that CSC was sensitive to memory load and executive function but generally not related to working memory capacity (WMC). Furthermore, CSCT scores in quiet were lowered by visual cues, probably due to distraction. In steady-state noise, the presence of visual cues improved CSCT scores, probably by enabling better encoding. Contrary to our expectation, CSCT performance was disrupted more in steady-state than speech-like noise, although only without visual cues, possibly because selective attention could be used to ignore the speech-like background and provide an enriched representation of target items in working memory similar to that obtained in quiet. This interpretation is supported by a consistent association between CSCT scores and updating skills.

  17. Speech neglect: A strange educational blind spot

    Science.gov (United States)

    Harris, Katherine Safford

    2005-09-01

    Speaking is universally acknowledged as an important human talent, yet as a topic of educated common knowledge, it is peculiarly neglected. Partly, this is a consequence of the relatively recent growth of research on speech perception, production, and development, but also a function of the way that information is sliced up by undergraduate colleges. Although the basic acoustic mechanism of vowel production was known to Helmholtz, the ability to view speech production as a physiological event is evolving even now with such techniques as fMRI. Intensive research on speech perception emerged only in the early 1930s as Fletcher and the engineers at Bell Telephone Laboratories developed the transmission of speech over telephone lines. The study of speech development was revolutionized by the papers of Eimas and his colleagues on speech perception in infants in the 1970s. Dissemination of knowledge in these fields is the responsibility of no single academic discipline. It forms a center for two departments, Linguistics, and Speech and Hearing, but in the former, there is a heavy emphasis on other aspects of language than speech and, in the latter, a focus on clinical practice. For psychologists, it is a rather minor component of a very diverse assembly of topics. I will focus on these three fields in proposing possible remedies.

  18. Automatic Speech Recognition from Neural Signals: A Focused Review

    Directory of Open Access Journals (Sweden)

    Christian Herff

    2016-09-01

    Speech interfaces have become widely accepted and are nowadays integrated in various real-life applications and devices. They have become a part of our daily life. However, speech interfaces presume the ability to produce intelligible speech, which might be impossible due to loud environments, concern about bothering bystanders, or the inability to produce speech (i.e., patients suffering from locked-in syndrome). For these reasons it would be highly desirable not to speak but to simply envision oneself saying words or sentences. Interfaces based on imagined speech would enable fast and natural communication without the need for audible speech and would give a voice to otherwise mute people. This focused review analyzes the potential of different brain imaging techniques to recognize speech from neural signals by applying Automatic Speech Recognition technology. We argue that modalities based on metabolic processes, such as functional Near Infrared Spectroscopy and functional Magnetic Resonance Imaging, are less suited for Automatic Speech Recognition from neural signals due to low temporal resolution but are very useful for the investigation of the underlying neural mechanisms involved in speech processes. In contrast, electrophysiologic activity is fast enough to capture speech processes and is therefore better suited for ASR. Our experimental results indicate the potential of these signals for speech recognition from neural data, with a focus on invasively measured brain activity (electrocorticography). As a first example of Automatic Speech Recognition techniques used on neural signals, we discuss the Brain-to-text system.

  19. An evaluation of speech production in two boys with neurodevelopmental disorders who received communication intervention with a speech-generating device.

    Science.gov (United States)

    Roche, Laura; Sigafoos, Jeff; Lancioni, Giulio E; O'Reilly, Mark F; Schlosser, Ralf W; Stevens, Michelle; van der Meer, Larah; Achmadi, Donna; Kagohara, Debora; James, Ruth; Carnett, Amarie; Hodis, Flaviu; Green, Vanessa A; Sutherland, Dean; Lang, Russell; Rispoli, Mandy; Machalicek, Wendy; Marschik, Peter B

    2014-11-01

    Children with neurodevelopmental disorders often present with little or no speech. Augmentative and alternative communication (AAC) aims to promote functional communication using non-speech modes, but it might also influence natural speech production. To investigate this possibility, we provided AAC intervention to two boys with neurodevelopmental disorders and severe communication impairment. Intervention focused on teaching the boys to use a tablet computer-based speech-generating device (SGD) to request preferred stimuli. During SGD intervention, both boys began to utter relevant single words. In an effort to induce more speech, and investigate the relation between SGD availability and natural speech production, the SGD was removed during some requesting opportunities. With intervention, both participants learned to use the SGD to request preferred stimuli. After learning to use the SGD, both participants began to respond more frequently with natural speech when the SGD was removed. The results suggest that a rehabilitation program involving initial SGD intervention, followed by subsequent withdrawal of the SGD, might increase the frequency of natural speech production in some children with neurodevelopmental disorders. This effect could be an example of response generalization.

  20. Multistage audiovisual integration of speech: dissociating identification and detection.

    Science.gov (United States)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias S

    2011-02-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here, we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multistage account of audiovisual integration of speech in which the many attributes of the audiovisual speech signal are integrated by separate integration processes.

  1. Developmental profile of speech-language and communicative functions in an individual with the preserved speech variant of Rett syndrome.

    Science.gov (United States)

    Marschik, Peter B; Vollmann, Ralf; Bartl-Pokorny, Katrin D; Green, Vanessa A; van der Meer, Larah; Wolin, Thomas; Einspieler, Christa

    2014-08-01

    We assessed various aspects of speech-language and communicative functions of an individual with the preserved speech variant of Rett syndrome (RTT) to describe her developmental profile over a period of 11 years. For this study, we incorporated the following data resources and methods to assess speech-language and communicative functions during pre-, peri- and post-regressional development: retrospective video analyses, medical history data, parental checklists and diaries, standardized tests on vocabulary and grammar, spontaneous speech samples and picture stories to elicit narrative competences. Despite achieving speech-language milestones, atypical behaviours were present at all times. We observed a unique developmental speech-language trajectory (including the RTT typical regression) affecting all linguistic and socio-communicative sub-domains in the receptive as well as the expressive modality. Future research should take into consideration a potentially considerable discordance between formal and functional language use by interpreting communicative acts on a more cautionary note.

  2. Separating Underdetermined Convolutive Speech Mixtures

    DEFF Research Database (Denmark)

    Pedersen, Michael Syskind; Wang, DeLiang; Larsen, Jan

    2006-01-01

    a method for underdetermined blind source separation of convolutive mixtures. The proposed framework is applicable for separation of instantaneous as well as convolutive speech mixtures. It is possible to iteratively extract each speech signal from the mixture by combining blind source separation...

  3. [Prosody, speech input and language acquisition].

    Science.gov (United States)

    Jungheim, M; Miller, S; Kühn, D; Ptok, M

    2014-04-01

    In order to acquire language, children require speech input. The prosody of the speech input plays an important role. In most cultures adults modify their code when communicating with children. Compared to normal speech, this code differs especially with regard to prosody. For this review, a selective literature search in PubMed and Scopus was performed. Prosodic characteristics are a key feature of spoken language. By analysing prosodic features, children gain knowledge about underlying grammatical structures. Child-directed speech (CDS) is modified in a way that meaningful sequences are highlighted acoustically so that important information can be extracted from the continuous speech flow more easily. CDS is said to enhance the representation of linguistic signs. Taking into consideration what has previously been described in the literature regarding the perception of suprasegmentals, CDS seems to be able to support language acquisition due to the correspondence of prosodic and syntactic units. However, no findings have been reported indicating that the linguistically reduced CDS could hinder first language acquisition.

  4. Speech-in-speech perception and executive function involvement.

    Directory of Open Access Journals (Sweden)

    Marcela Perrone-Bertolotti

    The present study investigated the link between speech-in-speech perception capacities and four executive function components: response suppression, inhibitory control, switching and working memory. We constructed a cross-modal semantic priming paradigm using a written target word and a spoken prime word, implemented in one of two concurrent auditory sentences (cocktail party situation). The prime and target were semantically related or unrelated. Participants had to perform a lexical decision task on visual target words and simultaneously listen to only one of two pronounced sentences. The attention of the participant was manipulated: the prime was in the pronounced sentence listened to by the participant or in the ignored one. In addition, we evaluated the executive function abilities of participants (switching cost, inhibitory-control cost and response-suppression cost) and their working memory span. Correlation analyses were performed between the executive and priming measurements. Our results showed a significant interaction effect between attention and semantic priming. We observed a significant priming effect in the attended but not in the ignored condition. Only priming effects obtained in the ignored condition were significantly correlated with some of the executive measurements. However, no correlation between priming effects and working memory capacity was found. Overall, these results confirm, first, the role of attention in the semantic priming effect and, second, the implication of executive functions in speech-in-noise understanding capacities.

  5. Infants' brain responses to speech suggest analysis by synthesis.

    Science.gov (United States)

    Kuhl, Patricia K; Ramírez, Rey R; Bosseler, Alexis; Lin, Jo-Fu Lotus; Imada, Toshiaki

    2014-08-05

    Historic theories of speech perception (Motor Theory and Analysis by Synthesis) invoked listeners' knowledge of speech production to explain speech perception. Neuroimaging data show that adult listeners activate motor brain areas during speech perception. In two experiments using magnetoencephalography (MEG), we investigated motor brain activation, as well as auditory brain activation, during discrimination of native and nonnative syllables in infants at two ages that straddle the developmental transition from language-universal to language-specific speech perception. Adults were also tested in Exp. 1. MEG data revealed that 7-mo-old infants activate auditory (superior temporal) as well as motor brain areas (Broca's area, cerebellum) in response to speech, and equivalently for native and nonnative syllables. However, in 11- and 12-mo-old infants, native speech activates auditory brain areas to a greater degree than nonnative, whereas nonnative speech activates motor brain areas to a greater degree than native speech. This double dissociation in 11- to 12-mo-old infants matches the pattern of results obtained in adult listeners. Our infant data are consistent with Analysis by Synthesis: auditory analysis of speech is coupled with synthesis of the motor plans necessary to produce the speech signal. The findings have implications for: (i) perception-action theories of speech perception, (ii) the impact of "motherese" on early language learning, and (iii) the "social-gating" hypothesis and humans' development of social understanding.

  6. The interpersonal level in English: reported speech

    NARCIS (Netherlands)

    Keizer, E.

    2009-01-01

    The aim of this article is to describe and classify a number of different forms of English reported speech (or thought), and subsequently to analyze and represent them within the theory of FDG. First, the most prototypical forms of reported speech are discussed (direct and indirect speech);

  7. Cognitive Functions in Childhood Apraxia of Speech

    Science.gov (United States)

    Nijland, Lian; Terband, Hayo; Maassen, Ben

    2015-01-01

    Purpose: Childhood apraxia of speech (CAS) is diagnosed on the basis of specific speech characteristics, in the absence of problems in hearing, intelligence, and language comprehension. This does not preclude the possibility that children with this speech disorder might demonstrate additional problems. Method: Cognitive functions were investigated…

  8. Age-Related Differences in Speech Rate Perception Do Not Necessarily Entail Age-Related Differences in Speech Rate Use

    Science.gov (United States)

    Heffner, Christopher C.; Newman, Rochelle S.; Dilley, Laura C.; Idsardi, William J.

    2015-01-01

    Purpose: A new literature has suggested that speech rate can influence the parsing of words quite strongly in speech. The purpose of this study was to investigate differences between younger adults and older adults in the use of context speech rate in word segmentation, given that older adults perceive timing information differently from younger…

  9. Preoperative mapping of speech-eloquent areas with functional magnetic resonance imaging (fMRI): comparison of different task designs

    International Nuclear Information System (INIS)

    Prothmann, S.; Zimmer, C.; Puccini, S.; Dalitz, B.; Kuehn, A.; Kahn, T.; Roedel, L.

    2005-01-01

    Purpose: Functional magnetic resonance imaging (fMRI) is a well-established, non-invasive method for pre-operative mapping of speech-eloquent areas. This investigation tests three simple paradigms to evaluate speech lateralisation and visualisation of speech-eloquent areas. Materials and Methods: 14 healthy volunteers and 16 brain tumour patients were given three tasks: to enumerate the months in the correct order (EM), to generate verbs fitting a given noun (GV) and to generate words fitting a given alphabetic character (GW). We used a blocked design with 80 measurements, consisting of 4 intervals of speech activation alternating with relaxation periods. The data were analysed on the basis of the general linear model using BrainVoyager®. The activated clusters in the inferior frontal (Broca) and the posterior temporal (Wernicke) cortex were analysed and the laterality indices calculated. Results: In both groups the paradigms GV and GW activated Broca's area very robustly. Visualisation of Wernicke's area was best achieved by the paradigm GV. The paradigm EM did not reliably stimulate either the frontal or the temporal cortex. Frontal lateralisation was best determined by GW and GV, temporal lateralisation by GV. Conclusion: The paradigms GV and GW visualise two essential aspects of speech processing: semantic word processing and word production. In a clinical setting with brain tumour patients, both GV and GW can be used to visualise frontal and temporal speech areas, and to determine speech dominance. (orig.)
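
    The abstract does not state how the laterality indices were computed. A commonly used definition, based on the activation measured in homologous left- and right-hemisphere regions of interest (for example, the number of suprathreshold voxels), is the following; the specific thresholding used in this study is an assumption left open here:

        \[
          \mathrm{LI} = \frac{A_{L} - A_{R}}{A_{L} + A_{R}}
        \]

    where A_L and A_R denote left- and right-hemisphere activation, so that LI ranges from -1 (entirely right-lateralised) to +1 (entirely left-lateralised).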

  10. Primary progressive aphasia and apraxia of speech.

    Science.gov (United States)

    Jung, Youngsin; Duffy, Joseph R; Josephs, Keith A

    2013-09-01

    Primary progressive aphasia is a neurodegenerative syndrome characterized by progressive language dysfunction. The majority of primary progressive aphasia cases can be classified into three subtypes: nonfluent/agrammatic, semantic, and logopenic variants. Each variant presents with unique clinical features, and is associated with distinctive underlying pathology and neuroimaging findings. Unlike primary progressive aphasia, apraxia of speech is a disorder that involves inaccurate production of sounds secondary to impaired planning or programming of speech movements. Primary progressive apraxia of speech is a neurodegenerative form of apraxia of speech, and it should be distinguished from primary progressive aphasia given its discrete clinicopathological presentation. Recently, there have been substantial advances in our understanding of these speech and language disorders. The clinical, neuroimaging, and histopathological features of primary progressive aphasia and apraxia of speech are reviewed in this article. The distinctions among these disorders for accurate diagnosis are increasingly important from a prognostic and therapeutic standpoint.

  11. Optimizing acoustical conditions for speech intelligibility in classrooms

    Science.gov (United States)

    Yang, Wonyoung

    High speech intelligibility is imperative in classrooms where verbal communication is critical. However, the optimal acoustical conditions to achieve a high degree of speech intelligibility have previously been investigated with inconsistent results, and practical room-acoustical solutions to optimize the acoustical conditions for speech intelligibility have not been developed. This experimental study validated auralization for speech-intelligibility testing, investigated the optimal reverberation for speech intelligibility for both normal and hearing-impaired listeners using more realistic room-acoustical models, and proposed an optimal sound-control design for speech intelligibility based on the findings. The auralization technique was used to perform subjective speech-intelligibility tests. The validation study, comparing auralization results with those of real classroom speech-intelligibility tests, found that if the room to be auralized is not very absorptive or noisy, speech-intelligibility tests using auralization are valid. The speech-intelligibility tests were done in two different auralized sound fields---approximately diffuse and non-diffuse---using the Modified Rhyme Test and both normal and hearing-impaired listeners. A hybrid room-acoustical prediction program was used throughout the work, and it and a 1/8 scale-model classroom were used to evaluate the effects of ceiling barriers and reflectors. For both subject groups, in approximately diffuse sound fields, when the speech source was closer to the listener than the noise source, the optimal reverberation time was zero. When the noise source was closer to the listener than the speech source, the optimal reverberation time was 0.4 s (with another peak at 0.0 s) with relative output power levels of the speech and noise sources SNS = 5 dB, and 0.8 s with SNS = 0 dB. In non-diffuse sound fields, when the noise source was between the speaker and the listener, the optimal reverberation time was 0.6 s with

  12. Internet Video Telephony Allows Speech Reading by Deaf Individuals and Improves Speech Perception by Cochlear Implant Users

    Science.gov (United States)

    Mantokoudis, Georgios; Dähler, Claudia; Dubach, Patrick; Kompis, Martin; Caversaccio, Marco D.; Senn, Pascal

    2013-01-01

    Objective: To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. Methods: Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280×720, 640×480, 320×240, 160×120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcameras (Logitech Pro9000, C600 and C500) and image/sound delays (0–500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for live Skype™ video connection and live face-to-face communication were assessed. Results: Higher frame rate (>7 fps), higher camera resolution (>640×480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by physical properties of the camera optics or the full screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI-users if visual cues are additionally shown. CI users with poor open set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). Conclusion: Webcameras have the potential to improve telecommunication of hearing-impaired individuals. PMID:23359119

  13. Internet video telephony allows speech reading by deaf individuals and improves speech perception by cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Georgios Mantokoudis

    OBJECTIVE: To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. METHODS: Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcameras (Logitech Pro9000, C600 and C500) and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for live Skype™ video connection and live face-to-face communication were assessed. RESULTS: Higher frame rate (>7 fps), higher camera resolution (>640 × 480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by physical properties of the camera optics or the full screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI-users if visual cues are additionally shown. CI users with poor open set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). CONCLUSION: Webcameras have the potential to improve telecommunication of hearing-impaired individuals.

  14. The benefit obtained from visually displayed text from an automatic speech recognizer during listening to speech presented in noise

    NARCIS (Netherlands)

    Zekveld, A.A.; Kramer, S.E.; Kessens, J.M.; Vlaming, M.S.M.G.; Houtgast, T.

    2008-01-01

    OBJECTIVES: The aim of this study was to evaluate the benefit that listeners obtain from visually presented output from an automatic speech recognition (ASR) system during listening to speech in noise. DESIGN: Auditory-alone and audiovisual speech reception thresholds (SRTs) were measured. The SRT

  15. The Hierarchical Cortical Organization of Human Speech Processing.

    Science.gov (United States)

    de Heer, Wendy A; Huth, Alexander G; Griffiths, Thomas L; Gallant, Jack L; Theunissen, Frédéric E

    2017-07-05

    Speech comprehension requires that the brain extract semantic meaning from the spectral features represented at the cochlea. To investigate this process, we performed an fMRI experiment in which five men and two women passively listened to several hours of natural narrative speech. We then used voxelwise modeling to predict BOLD responses based on three different feature spaces that represent the spectral, articulatory, and semantic properties of speech. The amount of variance explained by each feature space was then assessed using a separate validation dataset. Because some responses might be explained equally well by more than one feature space, we used a variance partitioning analysis to determine the fraction of the variance that was uniquely explained by each feature space. Consistent with previous studies, we found that speech comprehension involves hierarchical representations starting in primary auditory areas and moving laterally on the temporal lobe: spectral features are found in the core of A1, mixtures of spectral and articulatory in STG, mixtures of articulatory and semantic in STS, and semantic in STS and beyond. Our data also show that both hemispheres are equally and actively involved in speech perception and interpretation. Further, responses as early in the auditory hierarchy as in STS are more correlated with semantic than spectral representations. These results illustrate the importance of using natural speech in neurolinguistic research. Our methodology also provides an efficient way to simultaneously test multiple specific hypotheses about the representations of speech without using block designs and segmented or synthetic speech. SIGNIFICANCE STATEMENT To investigate the processing steps performed by the human brain to transform natural speech sound into meaningful language, we used models based on a hierarchical set of speech features to predict BOLD responses of individual voxels recorded in an fMRI experiment while subjects listened to
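
    The variance partitioning described above can be sketched in a few lines: fit an encoding model on each feature space separately and on their concatenation, then obtain unique and shared explained variance by subtraction. The Python sketch below is a generic illustration with ridge regression and made-up array shapes for a single voxel, not the authors' actual voxelwise pipeline.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(0)

        # Hypothetical training/validation data: time points x features for two
        # feature spaces, plus one voxel's BOLD time course.
        X_spec_tr, X_spec_va = rng.standard_normal((500, 20)), rng.standard_normal((100, 20))  # spectral
        X_sem_tr,  X_sem_va  = rng.standard_normal((500, 50)), rng.standard_normal((100, 50))  # semantic
        y_tr, y_va = rng.standard_normal(500), rng.standard_normal(100)

        def val_r2(X_tr, X_va):
            # Fit a ridge encoding model and return validation R^2.
            model = Ridge(alpha=10.0).fit(X_tr, y_tr)
            return r2_score(y_va, model.predict(X_va))

        r2_spec  = val_r2(X_spec_tr, X_spec_va)
        r2_sem   = val_r2(X_sem_tr, X_sem_va)
        r2_joint = val_r2(np.hstack([X_spec_tr, X_sem_tr]), np.hstack([X_spec_va, X_sem_va]))

        # Set-theoretic partition: variance unique to each space and variance shared.
        unique_spec = r2_joint - r2_sem
        unique_sem  = r2_joint - r2_spec
        shared      = r2_spec + r2_sem - r2_joint
        print(unique_spec, unique_sem, shared)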

  16. Inner Speech: Development, Cognitive Functions, Phenomenology, and Neurobiology

    Science.gov (United States)

    2015-01-01

    Inner speech—also known as covert speech or verbal thinking—has been implicated in theories of cognitive development, speech monitoring, executive function, and psychopathology. Despite a growing body of knowledge on its phenomenology, development, and function, approaches to the scientific study of inner speech have remained diffuse and largely unintegrated. This review examines prominent theoretical approaches to inner speech and methodological challenges in its study, before reviewing current evidence on inner speech in children and adults from both typical and atypical populations. We conclude by considering prospects for an integrated cognitive science of inner speech, and present a multicomponent model of the phenomenon informed by developmental, cognitive, and psycholinguistic considerations. Despite its variability among individuals and across the life span, inner speech appears to perform significant functions in human cognition, which in some cases reflect its developmental origins and its sharing of resources with other cognitive processes. PMID:26011789

  17. Multistage audiovisual integration of speech: dissociating identification and detection

    DEFF Research Database (Denmark)

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias

    2011-01-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multi-stage account of audiovisual integration of speech in which the many attributes...

  18. Treating speech subsystems in childhood apraxia of speech with tactual input: the PROMPT approach.

    Science.gov (United States)

    Dale, Philip S; Hayden, Deborah A

    2013-11-01

    Prompts for Restructuring Oral Muscular Phonetic Targets (PROMPT; Hayden, 2004; Hayden, Eigen, Walker, & Olsen, 2010), a treatment approach for the improvement of speech sound disorders in children, uses tactile-kinesthetic-proprioceptive (TKP) cues to support and shape movements of the oral articulators. No research to date has systematically examined the efficacy of PROMPT for children with childhood apraxia of speech (CAS). Four children (ages 3;6 [years;months] to 4;8), all meeting the American Speech-Language-Hearing Association (2007) criteria for CAS, were treated using PROMPT. All children received 8 weeks of 2×-per-week treatment, including at least 4 weeks of full PROMPT treatment that included TKP cues. During the first 4 weeks, 2 of the 4 children received treatment that included all PROMPT components except TKP cues. This design permitted both between-subjects and within-subjects comparisons to evaluate the effect of TKP cues. Gains in treatment were measured by standardized tests and by criterion-referenced measures based on the production of untreated probe words, reflecting change in speech movements and auditory perceptual accuracy. All 4 children made significant gains during treatment, but measures of motor speech control and untreated word probes provided evidence for more gain when TKP cues were included. PROMPT as a whole appears to be effective for treating children with CAS, and the inclusion of TKP cues appears to facilitate greater effect.

  19. Interventions for Speech Sound Disorders in Children

    Science.gov (United States)

    Williams, A. Lynn, Ed.; McLeod, Sharynne, Ed.; McCauley, Rebecca J., Ed.

    2010-01-01

    With detailed discussion and invaluable video footage of 23 treatment interventions for speech sound disorders (SSDs) in children, this textbook and DVD set should be part of every speech-language pathologist's professional preparation. Focusing on children with functional or motor-based speech disorders from early childhood through the early…

  20. Development and Disorders of Speech in Childhood.

    Science.gov (United States)

    Karlin, Isaac W.; and others

    The growth, development, and abnormalities of speech in childhood are described in this text designed for pediatricians, psychologists, educators, medical students, therapists, pathologists, and parents. The normal development of speech and language is discussed, including theories on the origin of speech in man and factors influencing the normal…