WorldWideScience

Sample records for sign language prosodic

  1. Production and Comprehension of Prosodic Markers in Sign Language Imperatives

    Directory of Open Access Journals (Sweden)

    Diane Brentari

    2018-05-01

    In signed and spoken language sentences, imperative mood and the corresponding speech acts, such as command, permission, or advice, can be distinguished by morphosyntactic structures, but also solely by prosodic cues, which are the focus of this paper. These cues can express paralinguistic mental states or grammatical meaning, and we show that in American Sign Language (ASL) they also exhibit the function, scope, and alignment of prosodic, linguistic elements of sign languages. The production and comprehension of prosodic facial expressions and temporal patterns can therefore shed light on how cues are grammaticalized in sign languages. They can also be informative about the formal semantic and pragmatic properties of imperative types, not only in ASL but also more broadly. This paper includes three studies: one of production (Study 1) and two of comprehension (Studies 2 and 3). In Study 1, six prosodic cues are analyzed in production: temporal cues of sign and hold duration, and non-manual cues including tilts of the head, head nods, widening of the eyes, and presence of mouthings. Results of Study 1 show that neutral sentences and commands are well distinguished from each other and from other imperative speech acts via these prosodic cues alone; there is more limited differentiation among explanation, permission, and advice. The comprehension of these five speech acts is investigated in Deaf ASL signers in Study 2, and in three additional groups in Study 3: Deaf signers of German Sign Language (DGS), hearing non-signers from the United States, and hearing non-signers from Germany. Results of Studies 2 and 3 show that the ASL group performs significantly better than the other three groups and that all groups perform above chance for all meaning types in comprehension. Language-specific knowledge therefore has a significant effect on identifying imperatives based on targeted cues. Command has the most cues associated with it and is the…

  2. Topics and topic prominence in two sign languages

    NARCIS (Netherlands)

    Kimmelman, V.

    2015-01-01

    In this paper we describe topic marking in Russian Sign Language (RSL) and Sign Language of the Netherlands (NGT) and discuss whether these languages should be considered topic prominent. The formal markers of topics in RSL are sentence-initial position, a prosodic break following the topic, and…

  3. The emergence of embedded structure: insights from Kafr Qasem Sign Language

    Science.gov (United States)

    Kastner, Itamar; Meir, Irit; Sandler, Wendy; Dachkovsky, Svetlana

    2014-01-01

    This paper introduces data from Kafr Qasem Sign Language (KQSL), an as-yet undescribed sign language, and identifies the earliest indications of embedding in this young language. Using semantic and prosodic criteria, we identify predicates that form a constituent with a noun, functionally modifying it. We analyze these structures as instances of embedded predicates, exhibiting what can be regarded as very early stages in the development of subordinate constructions, and argue that these structures may bear directly on questions about the development of embedding and subordination in language in general. Deutscher (2009) argues persuasively that nominalization of a verb is the first step—and the crucial step—toward syntactic embedding. It has also been suggested that prosodic marking may precede syntactic marking of embedding (Mithun, 2009). However, the relevant data from the stage at which embedding first emerges have not previously been available. KQSL might be the missing piece of the puzzle: a language in which a noun can be modified by an additional predicate, forming a proposition within a proposition, sustained entirely by prosodic means. PMID:24917837

  4. Prosodic Parallelism – comparing spoken and written language

    Directory of Open Access Journals (Sweden)

    Richard Wiese

    2016-10-01

    The Prosodic Parallelism hypothesis claims that adjacent prosodic categories prefer identical branching of internal adjacent constituents. According to Wiese and Speyer (2015), this preference implies that feet contained in the same phonological phrase display either binary or unary branching, but not different types of branching. The seemingly free schwa-zero alternations at the end of some words in German make it possible to test this hypothesis. The hypothesis was successfully tested in a corpus study using large-scale bodies of written German. As some open questions remain, and as it is unclear whether Prosodic Parallelism also holds in the spoken modality, the present study extends this inquiry to spoken German. As in the previous study, the results of a corpus analysis covering a variety of linguistic constructions are presented. The Prosodic Parallelism hypothesis can be demonstrated to be valid for spoken German as well as for written German. The paper thus contributes to the question of whether prosodic preferences are similar between the spoken and written modes of a language. Some consequences of the results for the production of language are discussed.
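
    The branching preference at issue lends itself to a small executable check. The sketch below is a toy illustration under the assumption that a phonological phrase can be represented as a list of feet and each foot as a list of syllables; it is not the corpus method of the study.

```python
# Toy check of the Prosodic Parallelism preference: feet within one
# phonological phrase should share a single branching type.
# Representing phrases as lists of feet (lists of syllables) is an
# assumption made for this illustration.

def branching(foot):
    """Classify a foot by syllable count: one syllable is unary, two binary."""
    return "unary" if len(foot) == 1 else "binary"

def is_parallel(phrase):
    """True if every foot in the phonological phrase branches the same way."""
    return len({branching(foot) for foot in phrase}) <= 1

# Schematic phrases built from placeholder syllables s1, s2, ...
uniform = [["s1", "s2"], ["s3", "s4"]]        # binary + binary
mixed = [["s1", "s2"], ["s3"], ["s4", "s5"]]  # binary + unary + binary

print(is_parallel(uniform))  # True: parallel branching, preferred
print(is_parallel(mixed))    # False: mixed branching, dispreferred
```

    On this view, a schwa-zero alternation would be resolved toward whichever variant keeps the feet of the phrase uniformly unary or uniformly binary.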

  5. Kinematic differentiation of prosodic categories in normal and disordered language development.

    Science.gov (United States)

    Goffman, Lisa

    2004-10-01

    Prosody is complex and hierarchically organized but is realized as rhythmic movement sequences. Thus, observations of the development of rhythmic aspects of movement can provide insight into links between motor and language processes, specifically whether prosodic distinctions (e.g., feet and prosodic words) are instantiated in rhythmic movement output. This experiment examined 4-7-year-old children's (both normally developing and specifically language impaired) and adults' productions of prosodic sequences that were controlled for phonetic content but differed in morphosyntactic structure (i.e., content vs. function words). Primary analyses included kinematic measures of rhythmic structure (i.e., amplitude and duration of movements in weak vs. strong syllables) across content and function contexts. Findings showed that at the level of articulatory movement, adults produced distinct rhythmic categories across content and function word contexts, whereas children did not. Children with specific language impairment differed from normally developing peers only in their ability to produce well-organized and stable rhythmic movements, not in the differentiation of prosodic categories.
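
    The contrast between adult-like and child-like rhythmic differentiation can be illustrated with a simple normalized index over per-syllable movement amplitudes. This is a minimal sketch, not Goffman's actual kinematic analysis; the amplitude values and the index itself are assumptions for the example.

```python
def modulation_index(amplitudes, stresses):
    """Normalized strong/weak contrast in movement amplitude:
    (mean strong - mean weak) / (mean strong + mean weak).
    Values near 0 mean no rhythmic differentiation."""
    strong = [a for a, s in zip(amplitudes, stresses) if s == "S"]
    weak = [a for a, s in zip(amplitudes, stresses) if s == "W"]
    mean_s = sum(strong) / len(strong)
    mean_w = sum(weak) / len(weak)
    return (mean_s - mean_w) / (mean_s + mean_w)

# Hypothetical articulator amplitudes (mm) for a strong-weak-strong-weak string.
adult = modulation_index([12.0, 6.0, 11.0, 5.0], ["S", "W", "S", "W"])
child = modulation_index([9.0, 8.5, 9.5, 9.0], ["S", "W", "S", "W"])
print(round(adult, 2))  # 0.35: strong and weak syllables clearly differentiated
print(round(child, 2))  # 0.03: little rhythmic differentiation
```

    A finding like the one reported would then appear as an index well above zero for adults across content and function contexts, and an index near zero for children.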

  6. The Phonetics of Head and Body Movement in the Realization of American Sign Language Signs.

    Science.gov (United States)

    Tyrone, Martha E; Mauk, Claude E

    2016-01-01

    Because the primary articulators for sign languages are the hands, sign phonology and phonetics have focused mainly on them and treated other articulators as passive targets. However, there is abundant research on the role of nonmanual articulators in sign language grammar and prosody. The current study examines how hand and head/body movements are coordinated to realize phonetic targets. Kinematic data were collected from 5 deaf American Sign Language (ASL) signers to allow the analysis of movements of the hands, head and body during signing. In particular, we examine how the chin, forehead and torso move during the production of ASL signs at those three phonological locations. Our findings suggest that for signs with a lexical movement toward the head, the forehead and chin move to facilitate convergence with the hand. By comparison, the torso does not move to facilitate convergence with the hand for signs located at the torso. These results imply that the nonmanual articulators serve a phonetic as well as a grammatical or prosodic role in sign languages. Future models of sign phonetics and phonology should take into consideration the movements of the nonmanual articulators in the realization of signs. © 2016 S. Karger AG, Basel.

  7. A Joint Prosodic Origin of Language and Music

    Directory of Open Access Journals (Sweden)

    Steven Brown

    2017-10-01

    Vocal theories of the origin of language rarely make a case for the precursor functions that underlay the evolution of speech. The vocal expression of emotion is unquestionably the best candidate for such a precursor, although most evolutionary models of both language and speech ignore emotion and prosody altogether. I present here a model for a joint prosodic precursor of language and music in which ritualized group-level vocalizations served as the ancestral state. This precursor combined not only affective and intonational aspects of prosody, but also holistic and combinatorial mechanisms of phrase generation. From this common stage, there was a bifurcation to form language and music as separate, though homologous, specializations. This separation of language and music was accompanied by their (re)unification in songs with words.

  8. The "Globularization Hypothesis" of the Language-ready Brain as a Developmental Frame for Prosodic Bootstrapping Theories of Language Acquisition.

    Science.gov (United States)

    Irurtzun, Aritz

    2015-01-01

    In recent research, Boeckx and Benítez-Burraco (2014a,b) have advanced the hypothesis that our species-specific language-ready brain should be understood as the outcome of developmental changes that occurred in our species after the split from Neanderthals-Denisovans, which resulted in a more globular braincase configuration in comparison to our closest relatives, who had elongated endocasts. According to these authors, the development of a globular brain is an essential ingredient for the language faculty; in particular, it is the centrality occupied by the thalamus in a globular brain that allows its modulatory or regulatory role, essential for syntactico-semantic computations. Their hypothesis is that the syntactico-semantic capacities arise in humans as a consequence of a process of globularization, which significantly takes place postnatally (cf. Neubauer et al., 2010). In this paper, I show that Boeckx and Benítez-Burraco's hypothesis makes an interesting developmental prediction regarding the path of language acquisition: it teases apart the onset of phonological acquisition and the onset of syntactic acquisition (the latter starting significantly later, after globularization). I argue that this hypothesis provides a developmental rationale for the prosodic bootstrapping hypothesis of language acquisition (cf. i.a. Gleitman and Wanner, 1982; Mehler et al., 1988, et seq.; Gervain and Werker, 2013), which claims that prosodic cues are employed for syntactic parsing. The literature converges in the observation that many such prosodic cues (in particular, rhythmic cues) are already acquired before the completion of the globularization phase, which paves the way for the premises of the prosodic bootstrapping hypothesis, allowing babies to have a rich knowledge of the prosody of their target language before they can start parsing the primary linguistic data syntactically.

  10. EVALUATIVE LANGUAGE IN SPOKEN AND SIGNED STORIES TOLD BY A DEAF CHILD WITH A COCHLEAR IMPLANT: WORDS, SIGNS OR PARALINGUISTIC EXPRESSIONS?

    Directory of Open Access Journals (Sweden)

    Ritva Takkinen

    2011-01-01

    In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation is analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish Sign Language (FinSL) and spoken Finnish. He was born deaf but received a cochlear implant at the age of five. The data consist of a spoken and a signed version of “The Frog Story”. The analysis shows that evaluative devices and expressions differ in the spoken and signed stories told by the child. In his Finnish story he uses mostly lexical devices: comments on a character and the character’s actions, as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.

  11. Word segmentation with universal prosodic cues.

    Science.gov (United States)

    Endress, Ansgar D; Hauser, Marc D

    2010-09-01

    When listening to speech from one's native language, words seem to be well separated from one another, like beads on a string. When listening to a foreign language, in contrast, words seem almost impossible to extract, as if there was only one bead on the same string. This contrast reveals that there are language-specific cues to segmentation. The puzzle, however, is that infants must be endowed with a language-independent mechanism for segmentation, as they ultimately solve the segmentation problem for any native language. Here, we approach the acquisition problem by asking whether there are language-independent cues to segmentation that might be available to even adult learners who have already acquired a native language. We show that adult learners recognize words in connected speech when only prosodic cues to word-boundaries are given from languages unfamiliar to the participants. In both artificial and natural speech, adult English speakers, with no prior exposure to the test languages, readily recognized words in natural languages with critically different prosodic patterns, including French, Turkish and Hungarian. We suggest that, even though languages differ in their sound structures, they carry universal prosodic characteristics. Further, these language-invariant prosodic cues provide a universally accessible mechanism for finding words in connected speech. These cues may enable infants to start acquiring words in any language even before they are fine-tuned to the sound structure of their native language. Copyright © 2010. Published by Elsevier Inc.
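
    One way to make a language-invariant cue such as final lengthening concrete is a threshold rule over syllable durations. The sketch below is a simplification under assumed durations and an assumed threshold; it is not the authors' experimental procedure.

```python
def segment_by_lengthening(durations, ratio=1.2):
    """Place a word boundary after any syllable whose duration exceeds
    `ratio` times the mean syllable duration (final lengthening cue),
    grouping syllable indices into words."""
    mean = sum(durations) / len(durations)
    words, current = [], []
    for i, d in enumerate(durations):
        current.append(i)
        if d > ratio * mean or i == len(durations) - 1:
            words.append(current)
            current = []
    return words

# Hypothetical syllable durations (ms); word-final syllables are lengthened.
durs = [150, 160, 230, 150, 240, 155, 150, 235]
print(segment_by_lengthening(durs))  # [[0, 1, 2], [3, 4], [5, 6, 7]]
```

    Because the rule refers only to relative duration, not to any language-specific sound pattern, it would apply unchanged to French, Turkish, or Hungarian input, which is the sense in which such cues are universal.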

  12. Interactivity in prosodic representations in children.

    Science.gov (United States)

    Goffman, Lisa; Westover, Stefanie

    2013-11-01

    The aim of this study was to determine, using speech error and articulatory analyses, whether the binary distinction between iambs and trochees should be extended to include additional prosodic subcategories. Adults, children who are normally developing, and children with specific language impairment (SLI) participated. Children with SLI were included because they exhibit prosodic and motor deficits. Children, especially those with SLI, showed the expected increase in omission errors in weak initial syllables. Movement patterning analyses revealed that speakers produced differentiated articulatory templates beyond the broad categories of iamb and trochee. Finally, weak-weak prosodic sequences that crossed word boundaries showed increased articulatory variability when compared with strong-weak alternations. The binary distinction between iamb and trochee may be insufficient, with additional systematic prosodic subcategories evident, even in young children with SLI. Findings support increased interactivity in language processing.

  13. Minimal prosodic stems/words in Malawian Tonga: A Morpheme ...

    African Journals Online (AJOL)

    … the level of prosodic stem analysis in this language (see Mtenje, 2006; Mkochi …) … results in the generation of the surface minimal prosodic stems/words in ciTonga … asymmetry. Natural Language and Linguistic Theory (2006) 24, 179–…

  14. Sign language comprehension: the case of Spanish sign language.

    Science.gov (United States)

    Rodríguez Ortiz, I R

    2008-01-01

    This study aims to answer the question of how much deaf individuals really understand of Spanish Sign Language interpreting. The sample included 36 deaf people (deafness ranging from severe to profound, and varying in the age at which they learned sign language) and 36 hearing people with a good knowledge of sign language (most were interpreters). Sign language comprehension was assessed using passages at secondary-school level. After being exposed to the passages, participants had to tell what they had understood, answer a set of related questions, and offer a title for each passage. Sign language comprehension by deaf participants was quite acceptable, but not as good as that of the hearing signers, who, unlike the deaf participants, were not only late learners of sign language as a second language but had also learned it through formal training.

  15. Sign language typology: The contribution of rural sign languages

    NARCIS (Netherlands)

    de Vos, C.; Pfau, R.

    2015-01-01

    Since the 1990s, the field of sign language typology has shown that sign languages exhibit typological variation at all relevant levels of linguistic description. These initial typological comparisons were heavily skewed toward the urban sign languages of developed countries, mostly in the Western…

  16. Prosodic Perception Problems in Spanish Dyslexia

    Science.gov (United States)

    Cuetos, Fernando; Martínez-García, Cristina; Suárez-Coalla, Paz

    2018-01-01

    The aim of this study was to investigate the prosody abilities on top of phonological and visual abilities in children with dyslexia in Spanish that can be considered a syllable-timed language. The performances on prosodic tasks (prosodic perception, rise-time perception), phonological tasks (phonological awareness, rapid naming, verbal working…

  17. Prosodic structure as a parallel to musical structure

    Directory of Open Access Journals (Sweden)

    Christopher Cullen Heffner

    2015-12-01

    What structural properties do language and music share? Although early speculation identified a wide variety of possibilities, the literature has largely focused on the parallels between musical structure and syntactic structure. Here, we argue that parallels between musical structure and prosodic structure deserve more attention. We review the evidence for a link between musical and prosodic structure and find it to be strong. In fact, certain elements of prosodic structure may provide a parsimonious comparison with musical structure without sacrificing empirical findings related to the parallels between language and music. We then develop several predictions related to such a hypothesis.

  18. Malaysian sign language dataset for automatic sign language ...

    African Journals Online (AJOL)

    Journal of Fundamental and Applied Sciences. … SL recognition system based on the Malaysian Sign Language (MSL). Implementation results are described. Keywords: sign language; pattern classification; database.

  19. Name signs in Danish Sign Language

    DEFF Research Database (Denmark)

    Bakken Jepsen, Julie

    2018-01-01

    A name sign is a personal sign assigned to deaf, hearing-impaired and hearing persons who enter the deaf community. The mouth action accompanying the sign reproduces all or part of the formal first name that the person has received by baptism or naming. Name signs can be compared to nicknames in spoken languages, where a person working as a blacksmith might be referred to by his friends as ‘The Blacksmith’ (‘Here comes the Blacksmith!’) instead of by his first name. Name signs are found not only in Danish Sign Language (DSL) but in most, if not all, sign languages studied to date. This article provides examples of the creativity of the users of Danish Sign Language, including some of the processes in the use of metaphors, visual motivation and influence from Danish when name signs are created.

  20. Inuit Sign Language: a contribution to sign language typology

    NARCIS (Netherlands)

    Schuit, J.; Baker, A.; Pfau, R.

    2011-01-01

    Sign language typology is a fairly new research field and typological classifications have yet to be established. For spoken languages, these classifications are generally based on typological parameters; it would thus be desirable to establish these for sign languages. In this paper, different…

  1. Acoustic constituents of prosodic typology

    Science.gov (United States)

    Komatsu, Masahiko

    Different languages sound different, and a considerable part of this derives from typological differences in prosody. Although such differences are often described in terms of lexical accent types (stress accent, pitch accent, and tone; e.g., English, Japanese, and Chinese, respectively) and rhythm types (stress-, syllable-, and mora-timed rhythms; e.g., English, Spanish, and Japanese, respectively), it is unclear whether these types are determined in terms of acoustic properties. The thesis intends to provide a potential basis for the description of prosody in terms of acoustics. It argues, through several experimental-phonetic studies, for the hypothesis that the source component of the source-filter model (acoustic features) approximately corresponds to prosody (linguistic features). The study consists of four parts. (1) Preliminary experiment: perceptual language identification tests were performed using English and Japanese speech samples whose frequency-spectral information (i.e., the non-source component) was heavily reduced. The results indicated that humans can discriminate languages from such signals. (2) Discussion of the linguistic information that the source component contains: this part constitutes the foundation of the argument of the thesis. Perception tests of consonants with the source signal indicated that the source component carries information on broad categories of phonemes that contributes to the creation of rhythm. (3) Acoustic analysis: speech samples of Chinese, English, Japanese, and Spanish, which differ in prosodic type, were analyzed. These languages showed differences in the acoustic characteristics of the source component. (4) Perceptual experiment: a language identification test for the above four languages was performed using the source signal with its acoustic features parameterized. It revealed that humans can discriminate prosodic types solely from the source features and that discrimination becomes easier as acoustic information increases. The…
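
    The source-filter separation underlying this line of work can be approximated by LPC inverse filtering: fit an all-pole filter to the signal, then filter the signal through the inverse polynomial so that roughly only the source (excitation) remains. The sketch below is an illustration under assumed settings (LPC order, a synthetic two-pole "vowel"); it is not the thesis' actual analysis pipeline.

```python
import numpy as np

def lpc(x, order):
    """Autocorrelation-method LPC via the Levinson-Durbin recursion.
    Returns the inverse-filter coefficients A(z), with a[0] == 1."""
    n = len(x)
    r = [float(np.dot(x[:n - k], x[k:])) for k in range(order + 1)]
    a, e = [1.0] + [0.0] * order, r[0]
    for i in range(1, order + 1):
        k = -(r[i] + sum(a[j] * r[i - j] for j in range(1, i))) / e
        a = [1.0] + [a[j] + k * a[i - j] for j in range(1, i)] + [k] \
            + [0.0] * (order - i)
        e *= 1.0 - k * k
    return np.array(a)

def source_estimate(x, order):
    """Inverse-filter x with its own LPC polynomial: the residual is an
    estimate of the source component of the source-filter model."""
    return np.convolve(x, lpc(x, order))[:len(x)]

# Synthetic 'vowel': white-noise excitation through a two-pole resonator.
rng = np.random.default_rng(0)
excitation = rng.standard_normal(2000)
x = np.zeros(2000)
for t in range(2000):
    x[t] = excitation[t]
    if t >= 1:
        x[t] += 1.1168 * x[t - 1]
    if t >= 2:
        x[t] -= 0.9025 * x[t - 2]

residual = source_estimate(x, order=2)
# Inverse filtering strips the resonance, so the residual is close to the
# flat excitation and its variance is far below the signal's.
print(np.var(residual) < 0.5 * np.var(x))  # True
```

    Reducing a signal to this residual is one way to build stimuli in which spectral (filter) information is heavily attenuated while source properties such as pitch and amplitude envelope survive.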

  2. Information structure in Russian Sign Language and Sign Language of the Netherlands

    NARCIS (Netherlands)

    Kimmelman, V.

    2014-01-01

    This dissertation explores Information Structure in two sign languages: Sign Language of the Netherlands and Russian Sign Language. Based on corpus data and elicitation tasks we show how topic and focus are expressed in these languages. In particular, we show that topics can be marked syntactically…

  3. Sign language perception research for improving automatic sign language recognition

    NARCIS (Netherlands)

    Ten Holt, G.A.; Arendsen, J.; De Ridder, H.; Van Doorn, A.J.; Reinders, M.J.T.; Hendriks, E.A.

    2009-01-01

    Current automatic sign language recognition (ASLR) seldom uses perceptual knowledge about the recognition of sign language. Using such knowledge can improve ASLR because it can give an indication which elements or phases of a sign are important for its meaning. Also, the current generation of…

  4. Prosodic cues to word order: what level of representation?

    Directory of Open Access Journals (Sweden)

    Carline eBernard

    2012-10-01

    Within language, systematic correlations exist between syntactic structure and prosody. Prosodic prominence, for instance, falls on the complement and not the head of syntactic phrases, and its realization depends on the phrasal position of the prominent element. Thus, in Japanese, a functor-final language, prominence is phrase-initial and realized as increased pitch (^Tōkyō ni ‘Tokyo to’), whereas in French, English or Italian, functor-initial languages, it manifests itself as phrase-final lengthening (to Rome). Prosody is readily available in the linguistic signal even to the youngest infants. It has, therefore, been proposed that young learners might be able to exploit its correlations with syntax to bootstrap language structure. In this study, we tested this hypothesis, investigating how 8-month-old monolingual French infants processed an artificial grammar manipulating the relative position of prosodic prominence and word frequency. In Condition 1, we created a speech stream in which the two cues, prosody and frequency, were aligned, frequent words being prosodically non-prominent and infrequent ones being prominent, as is the case in natural language (functors are prosodically minimal compared to content words). In Condition 2, the two cues were misaligned, with frequent words carrying prosodic prominence, unlike in natural language. After familiarization with the aligned or the misaligned stream in a headturn preference procedure, we tested infants’ preference for test items having a frequent-word-initial or a frequent-word-final word order. We found that infants familiarized with the aligned stream showed the expected preference for the frequent-word-initial test items, mimicking the functor-initial word order of French. Infants in the misaligned condition showed no preference. These results suggest that infants are able to use word frequency and prosody as early cues to word order and integrate them into a coherent…

  5. Palatalization and Intrinsic Prosodic Vowel Features in Russian

    Science.gov (United States)

    Ordin, Mikhail

    2011-01-01

    The presented study is aimed at investigating the interaction of palatalization and intrinsic prosodic features of the vowel in CVC (consonant+vowel+consonant) syllables in Russian. The universal nature of intrinsic prosodic vowel features was confirmed with the data from the Russian language. It was found that palatalization of the consonants…

  6. Planning Sign Languages: Promoting Hearing Hegemony? Conceptualizing Sign Language Standardization

    Science.gov (United States)

    Eichmann, Hanna

    2009-01-01

    In light of the absence of a codified standard variety in British Sign Language and German Sign Language ("Deutsche Gebardensprache") there have been repeated calls for the standardization of both languages primarily from outside the Deaf community. The paper is based on a recent grounded theory study which explored perspectives on sign…

  7. Signed Language Working Memory Capacity of Signed Language Interpreters and Deaf Signers

    Science.gov (United States)

    Wang, Jihong; Napier, Jemina

    2013-01-01

    This study investigated the effects of hearing status and age of signed language acquisition on signed language working memory capacity. Professional Auslan (Australian sign language)/English interpreters (hearing native signers and hearing nonnative signers) and deaf Auslan signers (deaf native signers and deaf nonnative signers) completed an…

  8. A combined prosodic and linguistic treatment approach for language-communication skills in children with autism spectrum disorders: A proof-of-concept study

    Directory of Open Access Journals (Sweden)

    Silva Kuschke

    2016-07-01

    This study aimed to determine whether the use of prosodically varied speech within a traditional language therapy framework had any effect on the listening skills, pragmatic skills and social interaction behaviour of three children with autism spectrum disorder (ASD). A single-participant multiple-baseline design across behaviours was implemented. Three participants with ASD were selected for this research. The listening skills, pragmatic skills and social interaction behaviour of the participants were compared before treatment, after a 3-week period of treatment and after a 2-week withdrawal period, utilising prosodically varied speech within a traditional language therapy approach. Statistical significance was not calculated for each individual because of the limited data, but visual inspection indicated that all participants showed positive behavioural changes across all areas after 3 weeks of treatment, independent of their pre-treatment performance level. The use of prosodically varied speech within a traditional language therapy framework appears to be a viable form of treatment for children with ASD.

  9. [Prosody, speech input and language acquisition].

    Science.gov (United States)

    Jungheim, M; Miller, S; Kühn, D; Ptok, M

    2014-04-01

    In order to acquire language, children require speech input. The prosody of the speech input plays an important role. In most cultures adults modify their code when communicating with children. Compared to normal speech this code differs especially with regard to prosody. For this review a selective literature search in PubMed and Scopus was performed. Prosodic characteristics are a key feature of spoken language. By analysing prosodic features, children gain knowledge about underlying grammatical structures. Child-directed speech (CDS) is modified in a way that meaningful sequences are highlighted acoustically so that important information can be extracted from the continuous speech flow more easily. CDS is said to enhance the representation of linguistic signs. Taking into consideration what has previously been described in the literature regarding the perception of suprasegmentals, CDS seems to be able to support language acquisition due to the correspondence of prosodic and syntactic units. However, no findings have been reported, stating that the linguistically reduced CDS could hinder first language acquisition.

  10. DIFFERENCES BETWEEN AMERICAN SIGN LANGUAGE (ASL AND BRITISH SIGN LANGUAGE (BSL

    Directory of Open Access Journals (Sweden)

    Zora JACHOVA

    2008-06-01

    In the communication of deaf people among themselves and with hearing people, there are three basic aspects of interaction: gesture, finger signs and writing. The gesture is a conventionally agreed manner of communication with the help of the hands, accompanied by facial and body mimicry. Gestures and movements pre-date speech; their purpose was first to mark something, and later to emphasize the spoken expression. Stokoe was the first linguist to realise that signs are not unanalysable wholes. He analysed signs into meaningless parts that he called "cheremes", which many linguists today call phonemes. He created three main phoneme categories: hand position, location and movement. Sign languages, like spoken languages, have a background in the distant past. They developed in parallel with the development of spoken language and underwent many historical changes. Therefore, today they do not represent a replacement for spoken language, but are languages themselves in the real sense of the word. Although the structure of the English language used in the USA and in Great Britain is the same, the two countries' sign languages, ASL and BSL, are different.

  11. Sign language: an international handbook

    NARCIS (Netherlands)

    Pfau, R.; Steinbach, M.; Woll, B.

    2012-01-01

    Sign language linguists show here that all the questions relevant to the linguistic investigation of spoken languages can be asked about sign languages. Conversely, questions that sign language linguists consider - even if spoken language researchers have not asked them yet - should also be asked of

  12. Early Sign Language Experience Goes along with an Increased Cross-Modal Gain for Affective Prosodic Recognition in Congenitally Deaf CI Users

    Science.gov (United States)

    Fengler, Ineke; Delfau, Pia-Céline; Röder, Brigitte

    2018-01-01

    It is yet unclear whether congenitally deaf cochlear implant (CD CI) users' visual and multisensory emotion perception is influenced by their history in sign language acquisition. We hypothesized that early-signing CD CI users, relative to late-signing CD CI users and hearing, non-signing controls, show better facial expression recognition and…

  13. Zebra finches are sensitive to prosodic features of human speech.

    Science.gov (United States)

    Spierings, Michelle J; ten Cate, Carel

    2014-07-22

    Variation in pitch, amplitude and rhythm adds crucial paralinguistic information to human speech. Such prosodic cues can reveal information about the meaning or emphasis of a sentence or the emotional state of the speaker. To examine the hypothesis that sensitivity to prosodic cues is language independent and not human specific, we tested prosody perception in a controlled experiment with zebra finches. Using a go/no-go procedure, subjects were trained to discriminate between speech syllables arranged in XYXY patterns with prosodic stress on the first syllable and XXYY patterns with prosodic stress on the final syllable. To systematically determine the salience of the various prosodic cues (pitch, duration and amplitude) to the zebra finches, they were subjected to five tests with different combinations of these cues. The zebra finches generalized the prosodic pattern to sequences that consisted of new syllables and used prosodic features over structural ones to discriminate between stimuli. This strong sensitivity to the prosodic pattern was maintained when only a single prosodic cue was available. The change in pitch was treated as more salient than changes in the other prosodic features. These results show that zebra finches are sensitive to the same prosodic cues known to affect human speech perception. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  14. Signed languages and globalization

    NARCIS (Netherlands)

    Hiddinga, A.; Crasborn, O.

    2011-01-01

    Deaf people who form part of a Deaf community communicate using a shared sign language. When meeting people from another language community, they can fall back on a flexible and highly context-dependent form of communication called international sign, in which shared elements from their own sign

  15. Brain response to prosodic boundary cues depends on boundary position

    Directory of Open Access Journals (Sweden)

    Julia Holzgrefe

    2013-07-01

    Prosodic information is crucial for spoken language comprehension and especially for syntactic parsing, because prosodic cues guide the hearer's syntactic analysis. The time course and mechanisms of this interplay of prosody and syntax are not yet well understood. In particular, there is an ongoing debate whether local prosodic cues are taken into account automatically or whether they are processed in relation to the global prosodic context in which they appear. The present study explores whether the perception of a prosodic boundary is affected by its position within an utterance. In an event-related potential (ERP) study we tested whether the brain response evoked by the prosodic boundary differs when the boundary occurs early in a list of three names connected by conjunctions (i.e., after the first name) as compared to later in the utterance (i.e., after the second name). A closure positive shift (CPS), marking the processing of a prosodic phrase boundary, was elicited only for stimuli with a late boundary, but not for stimuli with an early boundary. This result is further evidence for an immediate integration of prosodic information into the parsing of an utterance. In addition, it shows that the processing of prosodic boundary cues depends on the previously processed information from the preceding prosodic context.

  16. Kinematic parameters of signed verbs.

    Science.gov (United States)

    Malaia, Evie; Wilbur, Ronnie B; Milkovic, Marina

    2013-10-01

    Sign language users recruit physical properties of visual motion to convey linguistic information. Research on American Sign Language (ASL) indicates that signers systematically use kinematic features (e.g., velocity, deceleration) of dominant hand motion to distinguish specific semantic properties of verb classes in production (Malaia & Wilbur, 2012a) and process these distinctions as part of the phonological structure of these verb classes in comprehension (Malaia, Ranaweera, Wilbur, & Talavage, 2012). These studies are driven by the event visibility hypothesis of Wilbur (2003), who proposed that such use of kinematic features should be universal to sign languages (SLs), owing to the grammaticalization of physics and geometry for linguistic purposes. In a prior motion capture study, Malaia and Wilbur (2012a) provided support for the event visibility hypothesis in ASL, but quantitative data from other SLs to test the generalization to other languages have been lacking. The authors investigated the kinematic parameters of predicates in Croatian Sign Language (Hrvatskom Znakovnom Jeziku [HZJ]). Kinematic features of verb signs were affected both by the event structure of the predicate (semantics) and by phrase position within the sentence (prosody). The data demonstrate that kinematic features of motion in HZJ verb signs are recruited to convey morphological and prosodic information. This is the first crosslinguistic motion capture confirmation that specific kinematic properties of articulator motion are grammaticalized in other SLs to express linguistic features.

  17. Prosodic Focus Marking in Bai.

    NARCIS (Netherlands)

    Liu, Zenghui; Chen, A.; Van de Velde, Hans

    2014-01-01

    This study investigates prosodic marking of focus in Bai, a Sino-Tibetan language spoken in the Southwest of China, by adopting a semi-spontaneous experimental approach. Our data show that Bai speakers increase the duration of the focused constituent and reduce the duration of the post-focus

  18. Prosodic influences on speech production in children with specific language impairment and speech deficits: kinematic, acoustic, and transcription evidence.

    Science.gov (United States)

    Goffman, L

    1999-12-01

    It is often hypothesized that young children's difficulties with producing weak-strong (iambic) prosodic forms arise from perceptual or linguistically based production factors. A third possible contributor to errors in the iambic form may be biological constraints, or biases, of the motor system. In the present study, 7 children with specific language impairment (SLI) and speech deficits were matched to same-age peers. Multiple levels of analysis, including kinematic (modulation and stability of movement), acoustic, and transcription, were applied to children's productions of iambic (weak-strong) and trochaic (strong-weak) prosodic forms. Findings suggest that a motor bias toward producing unmodulated rhythmic articulatory movements, similar to that observed in canonical babbling, contributes to children's acquisition of metrical forms. Children with SLI and speech deficits show less mature segmental and speech motor systems, as well as decreased modulation of movement in later-developing iambic forms. Further, components of prosodic and segmental acquisition develop independently and at different rates.

  19. Sociolinguistic Typology and Sign Languages.

    Science.gov (United States)

    Schembri, Adam; Fenlon, Jordan; Cormier, Kearsy; Johnston, Trevor

    2018-01-01

    This paper examines the possible relationship between proposed social determinants of morphological 'complexity' and how this contributes to linguistic diversity, specifically via the typological nature of the sign languages of deaf communities. We sketch how the notion of morphological complexity, as defined by Trudgill (2011), applies to sign languages. Using these criteria, sign languages appear to be languages with low to moderate levels of morphological complexity. This may partly reflect the influence of key social characteristics of communities on the typological nature of languages. Although many deaf communities are relatively small and may involve dense social networks (both social characteristics that Trudgill claimed may lend themselves to morphological 'complexification'), the picture is complicated by the highly variable nature of the sign language acquisition for most deaf people, and the ongoing contact between native signers, hearing non-native signers, and those deaf individuals who only acquire sign languages in later childhood and early adulthood. These are all factors that may work against the emergence of morphological complexification. The relationship between linguistic typology and these key social factors may lead to a better understanding of the nature of sign language grammar. This perspective stands in contrast to other work where sign languages are sometimes presented as having complex morphology despite being young languages (e.g., Aronoff et al., 2005); in some descriptions, the social determinants of morphological complexity have not received much attention, nor has the notion of complexity itself been specifically explored.

  2. Standardization of Sign Languages

    Science.gov (United States)

    Adam, Robert

    2015-01-01

    Over the years attempts have been made to standardize sign languages. This form of language planning has been tackled by a variety of agents, most notably teachers of Deaf students, social workers, government agencies, and occasionally groups of Deaf people themselves. Their efforts have most often involved the development of sign language books…

  3. Gesture, sign, and language: The coming of age of sign language and gesture studies.

    Science.gov (United States)

    Goldin-Meadow, Susan; Brentari, Diane

    2017-01-01

    How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.

  4. Prosodic Similarity Effects in Short-Term Memory in Developmental Dyslexia.

    Science.gov (United States)

    Goswami, Usha; Barnes, Lisa; Mead, Natasha; Power, Alan James; Leong, Victoria

    2016-11-01

    Children with developmental dyslexia are characterized by phonological difficulties across languages. Classically, this 'phonological deficit' in dyslexia has been investigated with tasks using single-syllable words. Recently, however, several studies have demonstrated difficulties in prosodic awareness in dyslexia. Potential prosodic effects in short-term memory have not yet been investigated. Here we create a new instrument based on three-syllable words that vary in stress patterns, to investigate whether prosodic similarity (the same prosodic pattern of stressed and unstressed syllables) exerts systematic effects on short-term memory. We study participants with dyslexia and age-matched and younger reading-level-matched typically developing controls. We find that all participants, including dyslexic participants, show prosodic similarity effects in short-term memory. All participants exhibited better retention of words that differed in prosodic structure, although participants with dyslexia recalled fewer words accurately overall compared to age-matched controls. Individual differences in prosodic memory were predicted by earlier vocabulary abilities, by earlier sensitivity to syllable stress and by earlier phonological awareness. To our knowledge, this is the first demonstration of prosodic similarity effects in short-term memory. The implications of a prosodic similarity effect for theories of lexical representation and of dyslexia are discussed. © 2016 The Authors. Dyslexia published by John Wiley & Sons Ltd.

  6. Adapting tests of sign language assessment for other sign languages--a review of linguistic, cultural, and psychometric problems.

    Science.gov (United States)

    Haug, Tobias; Mann, Wolfgang

    2008-01-01

    Given the current lack of appropriate assessment tools for measuring deaf children's sign language skills, many test developers have used existing tests of other sign languages as templates to measure the sign language used by deaf people in their country. This article discusses factors that may influence the adaptation of assessment tests from one natural sign language to another. Two tests which have been adapted for several other sign languages are focused upon: the Test for American Sign Language and the British Sign Language Receptive Skills Test. A brief description is given of each test as well as insights from ongoing adaptations of these tests for other sign languages. The problems reported in these adaptations were found to be grounded in linguistic and cultural differences, which need to be considered for future test adaptations. Other reported shortcomings of test adaptation are related to the question of how well psychometric measures transfer from one instrument to another.

  7. Numeral Incorporation in Japanese Sign Language

    Science.gov (United States)

    Ktejik, Mish

    2013-01-01

    This article explores the morphological process of numeral incorporation in Japanese Sign Language. Numeral incorporation is defined and the available research on numeral incorporation in signed language is discussed. The numeral signs in Japanese Sign Language are then introduced and followed by an explanation of the numeral morphemes which are…

  8. The Legal Recognition of Sign Languages

    Science.gov (United States)

    De Meulder, Maartje

    2015-01-01

    This article provides an analytical overview of the different types of explicit legal recognition of sign languages. Five categories are distinguished: constitutional recognition, recognition by means of general language legislation, recognition by means of a sign language law or act, recognition by means of a sign language law or act including…

  9. Visual cortex entrains to sign language.

    Science.gov (United States)

    Brookshire, Geoffrey; Lu, Jenny; Nusbaum, Howard C; Goldin-Meadow, Susan; Casasanto, Daniel

    2017-06-13

    Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (<8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language; this entrainment is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over the auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to informational peaks in time-varying signals.

  10. Sign Lowering and Phonetic Reduction in American Sign Language.

    Science.gov (United States)

    Tyrone, Martha E; Mauk, Claude E

    2010-04-01

    This study examines sign lowering as a form of phonetic reduction in American Sign Language. Phonetic reduction occurs in the course of normal language production, when instead of producing a carefully articulated form of a word, the language user produces a less clearly articulated form. When signs are produced in context by native signers, they often differ from the citation forms of signs. In some cases, phonetic reduction is manifested as a sign being produced at a lower location than in the citation form. Sign lowering has been documented previously, but this is the first study to examine it in phonetic detail. The data presented here are tokens of the sign WONDER, as produced by six native signers, in two phonetic contexts and at three signing rates, which were captured by optoelectronic motion capture. The results indicate that sign lowering occurred for all signers, according to the factors we manipulated. Sign production was affected by several phonetic factors that also influence speech production, namely, production rate, phonetic context, and position within an utterance. In addition, we have discovered interesting variations in sign production, which could underlie distinctions in signing style, analogous to accent or voice quality in speech.

  11. Language Policy and Planning: The Case of Italian Sign Language

    Science.gov (United States)

    Geraci, Carlo

    2012-01-01

    Italian Sign Language (LIS) is the name of the language used by the Italian Deaf community. The acronym LIS derives from Lingua italiana dei segni ("Italian language of signs"), although nowadays Italians refer to LIS as Lingua dei segni italiana, reflecting the more appropriate phrasing "Italian sign language." Historically,…

  12. What sign language creation teaches us about language.

    Science.gov (United States)

    Brentari, Diane; Coppola, Marie

    2013-03-01

    How do languages emerge? What are the necessary ingredients and circumstances that permit new languages to form? Various researchers within the disciplines of primatology, anthropology, psychology, and linguistics have offered different answers to this question depending on their perspective. Language acquisition, language evolution, primate communication, and the study of spoken varieties of pidgin and creoles address these issues, but in this article we describe a relatively new and important area that contributes to our understanding of language creation and emergence. Three types of communication systems that use the hands and body to communicate will be the focus of this article: gesture, homesign systems, and sign languages. The focus of this article is to explain why mapping the path from gesture to homesign to sign language has become an important research topic for understanding language emergence, not only for the field of sign languages, but also for language in general. WIREs Cogn Sci 2013, 4:201-211. doi: 10.1002/wcs.1212 Copyright © 2012 John Wiley & Sons, Ltd.

  13. Awareness of Deaf Sign Language and Gang Signs.

    Science.gov (United States)

    Smith, Cynthia; Morgan, Robert L.

    There have been increasing incidents of innocent people who use American Sign Language (ASL) or another form of sign language being victimized by gang violence due to misinterpretation of ASL hand formations. ASL is familiar to learners with a variety of disabilities, particularly those in the deaf community. The problem is that gang members have…

  14. Adaptation of a Vocabulary Test from British Sign Language to American Sign Language

    Science.gov (United States)

    Mann, Wolfgang; Roy, Penny; Morgan, Gary

    2016-01-01

    This study describes the adaptation process of a vocabulary knowledge test for British Sign Language (BSL) into American Sign Language (ASL) and presents results from the first round of pilot testing with 20 deaf native ASL signers. The web-based test assesses the strength of deaf children's vocabulary knowledge by means of different mappings of…

  15. Kinship in Mongolian Sign Language

    Science.gov (United States)

    Geer, Leah

    2011-01-01

    Information and research on Mongolian Sign Language is scant. To date, only one dictionary is available in the United States (Badnaa and Boll 1995), and even that dictionary presents only a subset of the signs employed in Mongolia. The present study describes the kinship system used in Mongolian Sign Language (MSL) based on data elicited from…

  16. Automatic sign language recognition inspired by human sign perception

    NARCIS (Netherlands)

    Ten Holt, G.A.

    2010-01-01

    Automatic sign language recognition is a relatively new field of research (since ca. 1990). Its objectives are to automatically analyze sign language utterances. There are several issues within the research area that merit investigation: how to capture the utterances (cameras, magnetic sensors,

  17. Approaching Sign Language Test Construction: Adaptation of the German Sign Language Receptive Skills Test

    Science.gov (United States)

    Haug, Tobias

    2011-01-01

    There is a current need for reliable and valid test instruments in different countries in order to monitor deaf children's sign language acquisition. However, very few tests are commercially available that offer strong evidence for their psychometric properties. A German Sign Language (DGS) test focusing on linguistic structures that are acquired…

  18. The road to language learning is iconic: evidence from British Sign Language.

    Science.gov (United States)

    Thompson, Robin L; Vinson, David P; Woll, Bencie; Vigliocco, Gabriella

    2012-12-01

    An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.

  19. Eye Gaze in Creative Sign Language

    Science.gov (United States)

    Kaneko, Michiko; Mesch, Johanna

    2013-01-01

    This article discusses the role of eye gaze in creative sign language. Because eye gaze conveys various types of linguistic and poetic information, it is an intrinsic part of sign language linguistics in general and of creative signing in particular. We discuss various functions of eye gaze in poetic signing and propose a classification of gaze…

  20. The Danish Sign Language Dictionary

    DEFF Research Database (Denmark)

    Kristoffersen, Jette Hedegaard; Troelsgård, Thomas

    2010-01-01

    The entries of The Danish Sign Language Dictionary have four sections. Entry header: in this section the sign headword is shown as a photo and a gloss; the first occurring location and handshape of the sign are shown as icons. Video window: by default, the base form of the sign headword ... forms of the sign (only for classifier entries). In addition to this, frequent co-occurrences with the sign are shown in this section. The signs in The Danish Sign Language Dictionary can be looked up through: Handshape: particular handshapes for the active and the passive hand can be specified ... to find signs that are not themselves lemmas in the dictionary, but appear in example sentences. Topic: topics can be chosen as search criteria from a list of 70 topics.

  1. Issues in Sign Language Lexicography

    DEFF Research Database (Denmark)

    Zwitserlood, Inge; Kristoffersen, Jette Hedegaard; Troelsgård, Thomas

    2013-01-01

Sign language lexicography has thus far been a relatively obscure area in the world of lexicography. Therefore, this article will contain background information on signed languages and the communities in which they are used, on the lexicography of sign languages, the situation in the Netherlands as well…

  2. The benefits of sign language for deaf learners with language challenges

    Directory of Open Access Journals (Sweden)

    Van Staden, Annalene

    2009-12-01

This article argues the importance of allowing deaf children to acquire sign language from an early age. It demonstrates firstly that the critical/sensitive period hypothesis for language acquisition can be applied to specific language aspects of spoken language as well as sign languages (i.e. phonology, grammatical processing and syntax). This makes early diagnosis and early intervention of crucial importance. Moreover, research findings presented in this article demonstrate the advantage that sign language offers in the early years of a deaf child’s life by comparing the language development milestones of deaf learners exposed to sign language from birth to those of late-signers, orally trained deaf learners and hearing learners exposed to spoken language. The controversy over the best medium of instruction for deaf learners is briefly discussed, with emphasis placed on the possible value of bilingual-bicultural programmes to facilitate the development of deaf learners’ literacy skills. Finally, this paper concludes with a discussion of the implications/recommendations of sign language teaching and Deaf education in South Africa.

  3. Early Sign Language Exposure and Cochlear Implantation Benefits.

    Science.gov (United States)

    Geers, Ann E; Mitchell, Christine M; Warner-Czyz, Andrea; Wang, Nae-Yuh; Eisenberg, Laurie S

    2017-07-01

    Most children with hearing loss who receive cochlear implants (CI) learn spoken language, and parents must choose early on whether to use sign language to accompany speech at home. We address whether parents' use of sign language before and after CI positively influences auditory-only speech recognition, speech intelligibility, spoken language, and reading outcomes. Three groups of children with CIs from a nationwide database who differed in the duration of early sign language exposure provided in their homes were compared in their progress through elementary grades. The groups did not differ in demographic, auditory, or linguistic characteristics before implantation. Children without early sign language exposure achieved better speech recognition skills over the first 3 years postimplant and exhibited a statistically significant advantage in spoken language and reading near the end of elementary grades over children exposed to sign language. Over 70% of children without sign language exposure achieved age-appropriate spoken language compared with only 39% of those exposed for 3 or more years. Early speech perception predicted speech intelligibility in middle elementary grades. Children without sign language exposure produced speech that was more intelligible (mean = 70%) than those exposed to sign language (mean = 51%). This study provides the most compelling support yet available in CI literature for the benefits of spoken language input for promoting verbal development in children implanted by 3 years of age. Contrary to earlier published assertions, there was no advantage to parents' use of sign language either before or after CI. Copyright © 2017 by the American Academy of Pediatrics.

  4. The integration of prosodic speech in high functioning autism: a preliminary FMRI study.

    Directory of Open Access Journals (Sweden)

    Isabelle Hesling

    2010-07-01

Autism is a neurodevelopmental disorder characterized by a specific triad of symptoms: abnormalities in social interaction, abnormalities in communication, and restricted activities and interests. While verbal autistic subjects may present a correct mastery of the formal aspects of speech, they have difficulties in prosody (the music of speech), leading to communication disorders. A few behavioural studies have revealed a prosodic impairment in children with autism, and among the few fMRI studies aiming at assessing the neural network involved in language, none has specifically studied prosodic speech. The aim of the present study was to characterize specific prosodic components such as linguistic prosody (intonation, rhythm and emphasis) and emotional prosody, and to correlate them with the neural network underlying them. We used a behavioural test (Profiling Elements of the Prosodic System, PEPS) and fMRI to characterize prosodic deficits and investigate the neural network underlying prosodic processing. Results revealed the existence of a link between perceptive and productive prosodic deficits for some prosodic components (rhythm, emphasis and affect) in high-functioning autism (HFA), and also revealed that the neural network involved in prosodic speech perception exhibits abnormal activation in the left SMG as compared to controls (activation positively correlated with intonation and emphasis), as well as an absence of deactivation patterns in regions involved in the default mode. These prosodic impairments could result not only from abnormalities in activation patterns but also from an inability to adequately use the strategy of default-network inhibition, both mechanisms that have to be considered as factors decreasing task performance in HFA.

  5. On the System of Person-Denoting Signs in Estonian Sign Language: Estonian Name Signs

    Science.gov (United States)

    Paales, Liina

    2010-01-01

This article discusses Estonian personal name signs. According to the study, there are four categories of personal name signs in Estonian Sign Language: (1) arbitrary name signs; (2) descriptive name signs; (3) initialized-descriptive name signs; and (4) loan/borrowed name signs. Descriptive and borrowed personal name signs are most widely represented among…

  6. Dictionaries of African Sign Languages: An Overview

    Science.gov (United States)

    Schmaling, Constanze H.

    2012-01-01

This article gives an overview of dictionaries of African sign languages that have been published to date, most of which have not been widely distributed. After an introduction to the field of sign language lexicography and a discussion of some of the obstacles that authors of sign language dictionaries face in general, I will show problems…

  7. Repetitions in French Belgian Sign Language (LSFB) and Flemish Sign Language (VGT) narratives and conversations

    OpenAIRE

    Notarrigo, Ingrid; Meurant, Laurence; Van Herreweghe, Mieke; Vermeerbergen, Myriam

    2016-01-01

Repetition was described in the nineties by a limited number of sign linguists: Vermeerbergen & De Vriendt (1994) looked at a small corpus of VGT data, Fischer & Janis (1990) analysed “verb sandwiches” in ASL and Pinsonneault (1994) “verb echos” in Quebec Sign Language. More recently the same phenomenon has been the focus of research in a growing number of signed languages, including American (Nunes and de Quadros 2008), Hong Kong (Sze 2008), Russian (Shamaro 2008), Polish (Filipczak and Most...

  8. Signs of the arctic: Typological aspects of Inuit Sign Language

    NARCIS (Netherlands)

    Schuit, J.M.

    2014-01-01

In this thesis, the native sign language used by deaf Inuit people is described. Inuit Sign Language (IUR) is used by fewer than 40 people as their sole means of communication, and is therefore highly endangered. Apart from the description of IUR as such, an additional goal is to contribute to the

  9. The sign language skills classroom observation: a process for describing sign language proficiency in classroom settings.

    Science.gov (United States)

    Reeves, J B; Newell, W; Holcomb, B R; Stinson, M

    2000-10-01

    In collaboration with teachers and students at the National Technical Institute for the Deaf (NTID), the Sign Language Skills Classroom Observation (SLSCO) was designed to provide feedback to teachers on their sign language communication skills in the classroom. In the present article, the impetus and rationale for development of the SLSCO is discussed. Previous studies related to classroom signing and observation methodology are reviewed. The procedure for developing the SLSCO is then described. This procedure included (a) interviews with faculty and students at NTID, (b) identification of linguistic features of sign language important for conveying content to deaf students, (c) development of forms for recording observations of classroom signing, (d) analysis of use of the forms, (e) development of a protocol for conducting the SLSCO, and (f) piloting of the SLSCO in classrooms. The results of use of the SLSCO with NTID faculty during a trial year are summarized.

  10. Effects of gender and regional dialect on prosodic patterns in American English

    Science.gov (United States)

    Clopper, Cynthia G.; Smiljanic, Rajka

    2011-01-01

    While cross-dialect prosodic variation has been well established for many languages, most variationist research on regional dialects of American English has focused on the vowel system. The current study was designed to explore prosodic variation in read speech in two regional varieties of American English: Southern and Midland. Prosodic dialect variation was analyzed in two domains: speaking rate and the phonetic expression of pitch movements associated with accented and phrase-final syllables. The results revealed significant effects of regional dialect on the distributions of pauses, pitch accents, and phrasal-boundary tone combinations. Significant effects of talker gender were also observed on the distributions of pitch accents and phrasal-boundary tone combinations. The findings from this study demonstrate that regional and gender identity features are encoded in part through prosody, and provide further motivation for the close examination of prosodic patterns across regional and social varieties of American English. PMID:21686317

  11. Development of a prosodic database for standard Arabic

    International Nuclear Information System (INIS)

    Chouireb, F.; Nail, M.; Dimeh, Y.; Guerti, M.

    2007-01-01

The quality of a Text-To-Speech (TTS) synthesis system depends on the naturalness and the intelligibility of the generated speech. For this reason, a high-quality automatic generator of prosody is necessary. Among the most recent developments in this field, there is a growing interest in machine learning techniques, such as neural networks, classification and regression trees, Hidden Markov Models (HMMs) and other stochastic methods. All these techniques are based on the analysis of phonetically and prosodically labeled speech corpora. The objective of our research is to realize a prosodic speech database for Arabic such as those available for other languages (the TIMIT database for English and BDSONS for French, etc.). (author)

  12. Syntactic priming in American Sign Language.

    Science.gov (United States)

    Hall, Matthew L; Ferreira, Victor S; Mayberry, Rachel I

    2015-01-01

    Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.

  13. From gesture to sign language: conventionalization of classifier constructions by adult hearing learners of British Sign Language.

    Science.gov (United States)

    Marshall, Chloë R; Morgan, Gary

    2015-01-01

    There has long been interest in why languages are shaped the way they are, and in the relationship between sign language and gesture. In sign languages, entity classifiers are handshapes that encode how objects move, how they are located relative to one another, and how multiple objects of the same type are distributed in space. Previous studies have shown that hearing adults who are asked to use only manual gestures to describe how objects move in space will use gestures that bear some similarities to classifiers. We investigated how accurately hearing adults, who had been learning British Sign Language (BSL) for 1-3 years, produce and comprehend classifiers in (static) locative and distributive constructions. In a production task, learners of BSL knew that they could use their hands to represent objects, but they had difficulty choosing the same, conventionalized, handshapes as native signers. They were, however, highly accurate at encoding location and orientation information. Learners therefore show the same pattern found in sign-naïve gesturers. In contrast, handshape, orientation, and location were comprehended with equal (high) accuracy, and testing a group of sign-naïve adults showed that they too were able to understand classifiers with higher than chance accuracy. We conclude that adult learners of BSL bring their visuo-spatial knowledge and gestural abilities to the tasks of understanding and producing constructions that contain entity classifiers. We speculate that investigating the time course of adult sign language acquisition might shed light on how gesture became (and, indeed, becomes) conventionalized during the genesis of sign languages. Copyright © 2014 Cognitive Science Society, Inc.

  14. Prosodic Awareness and Punctuation Ability in Adult Readers

    Science.gov (United States)

    Heggie, Lindsay; Wade-Woolley, Lesly

    2018-01-01

    We examined the relationship between two metalinguistic tasks: prosodic awareness and punctuation ability. Specifically, we investigated whether adults' ability to punctuate was related to the degree to which they are aware of and able to manipulate prosody in spoken language. English-speaking adult readers (n = 115) were administered a receptive…

  15. "Hearing" the signs:influence of sign language in an inclusive classroom

    OpenAIRE

    Monney, M. (Mariette)

    2017-01-01

Finding new methods to achieve the goals of Education For All is a constant concern for primary school teachers. Multisensory methods have proved efficient in the past decades. Sign Language, being a visual and kinesthetic language, could become a future educational tool to fulfill the needs of a growing diversity of learners. This ethnographic study describes how Sign Language exposure in inclusive classr...

  16. Prosodic Skills in Children with Down Syndrome and in Typically Developing Children

    Science.gov (United States)

    Zampini, Laura; Fasolo, Mirco; Spinelli, Maria; Zanchi, Paola; Suttora, Chiara; Salerni, Nicoletta

    2016-01-01

    Background: Many studies have analysed language development in children with Down syndrome to understand better the nature of their linguistic delays and the reason why these delays, particularly those in the morphosyntactic area, seem greater than their cognitive impairment. However, the prosodic characteristics of language development in…

  17. Beat gestures and prosodic prominence: impact on learning

    OpenAIRE

    Kushch, Olga

    2018-01-01

Previous research has shown that gestures are beneficial for language learning. This doctoral thesis centers on the effects of beat gestures – i.e., hand and arm gestures that are typically associated with prosodically prominent positions in speech – on such processes. Little is known about how the two central properties of beat gestures, namely how they mark both information focus and rhythmic positions in speech, can be beneficial for learning either a first or a second language. The main go...

  18. Sentence Repetition in Deaf Children with Specific Language Impairment in British Sign Language

    Science.gov (United States)

    Marshall, Chloë; Mason, Kathryn; Rowley, Katherine; Herman, Rosalind; Atkinson, Joanna; Woll, Bencie; Morgan, Gary

    2015-01-01

    Children with specific language impairment (SLI) perform poorly on sentence repetition tasks across different spoken languages, but until now, this methodology has not been investigated in children who have SLI in a signed language. Users of a natural sign language encode different sentence meanings through their choice of signs and by altering…

  19. Sign language for the information society: an ICT roadmap for South African Sign Language

    CSIR Research Space (South Africa)

    Olivrin, G

    2008-11-01

… of work made in SASL. There is currently no collection of the cultural and linguistic heritage of SASL. Public signage and localisation: Provision for SASL-specific sign names of places, people, companies and brands, as well as the localisation… upgrading the aging data and voice infrastructures for visual-grade technologies, new usages of technologies will emerge in public signage and communications, in advertising and for visual languages such as SASL. Research and development in Sign Language…

  20. Syntactic priming in American Sign Language.

    Directory of Open Access Journals (Sweden)

    Matthew L Hall

Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.

  1. New Methods for Prosodic Transcription: Capturing Variability as a Source of Information

    Directory of Open Access Journals (Sweden)

    Jennifer Cole

    2016-06-01

Understanding the role of prosody in encoding linguistic meaning and in shaping phonetic form requires the analysis of prosodically annotated speech drawn from a wide variety of speech materials. Yet obtaining accurate and reliable prosodic annotations for even small datasets is challenging due to the time and expertise required. We discuss several factors that make prosodic annotation difficult and impact its reliability, all of which relate to 'variability': in the patterning of prosodic elements (features and structures) as they relate to the linguistic and discourse context, in the acoustic cues for those prosodic elements, and in the parameter values of the cues. We propose two novel methods for prosodic transcription that capture variability as a source of information relevant to the linguistic analysis of prosody. The first is 'Rapid Prosody Transcription' (RPT), which can be performed by non-experts using a simple set of unary labels to mark prominence and boundaries based on immediate auditory impression. Inter-transcriber variability is used to calculate continuous-valued prosody ‘scores’ that are assigned to each word and represent the perceptual salience of its prosodic features or structure. RPT can be used to model the relative influence of top-down factors and acoustic cues in prosody perception, and to model prosodic variation across many dimensions, including language variety, speech style, or speaker’s affect. The second proposed method is the identification of individual cues to the contrastive prosodic elements of an utterance. Cue specification provides a link between the contrastive symbolic categories of prosodic structures and the continuous-valued parameters in the acoustic signal, and offers a framework for investigating how factors related to the grammatical and situational context influence the phonetic form of spoken words and phrases. While cue specification as a transcription tool has not yet been explored as

  2. Phrase Lengths and the Perceived Informativeness of Prosodic Cues in Turkish.

    Science.gov (United States)

    Dinçtopal Deniz, Nazik; Fodor, Janet Dean

    2017-12-01

    It is known from previous studies that in many cases (though not all) the prosodic properties of a spoken utterance reflect aspects of its syntactic structure, and also that in many cases (though not all) listeners can benefit from these prosodic cues. A novel contribution to this literature is the Rational Speaker Hypothesis (RSH), proposed by Clifton, Carlson and Frazier. The RSH maintains that listeners are sensitive to possible reasons for why a speaker might introduce a prosodic break: "listeners treat a prosodic boundary as more informative about the syntax when it flanks short constituents than when it flanks longer constituents," because in the latter case the speaker might have been motivated solely by consideration of optimal phrase lengths. This would effectively reduce the cue value of an appropriately placed prosodic boundary. We present additional evidence for the RSH from Turkish, a language typologically different from English. In addition, our study shows for the first time that the RSH also applies to a prosodic break which conflicts with the syntactic structure, reducing its perceived cue strength if it might have been motivated by length considerations. In this case, the RSH effect is beneficial. Finally, the Turkish data show that prosody-based explanations for parsing preferences such as the RSH do not take the place of traditional syntax-sensitive parsing strategies such as Late Closure. The two sources of guidance co-exist; both are used when available.

  3. Imitation, Sign Language Skill and the Developmental Ease of Language Understanding (D-ELU) Model.

    Science.gov (United States)

    Holmer, Emil; Heimann, Mikael; Rudner, Mary

    2016-01-01

    Imitation and language processing are closely connected. According to the Ease of Language Understanding (ELU) model (Rönnberg et al., 2013) pre-existing mental representation of lexical items facilitates language understanding. Thus, imitation of manual gestures is likely to be enhanced by experience of sign language. We tested this by eliciting imitation of manual gestures from deaf and hard-of-hearing (DHH) signing and hearing non-signing children at a similar level of language and cognitive development. We predicted that the DHH signing children would be better at imitating gestures lexicalized in their own sign language (Swedish Sign Language, SSL) than unfamiliar British Sign Language (BSL) signs, and that both groups would be better at imitating lexical signs (SSL and BSL) than non-signs. We also predicted that the hearing non-signing children would perform worse than DHH signing children with all types of gestures the first time (T1) we elicited imitation, but that the performance gap between groups would be reduced when imitation was elicited a second time (T2). Finally, we predicted that imitation performance on both occasions would be associated with linguistic skills, especially in the manual modality. A split-plot repeated measures ANOVA demonstrated that DHH signers imitated manual gestures with greater precision than non-signing children when imitation was elicited the second but not the first time. Manual gestures were easier to imitate for both groups when they were lexicalized than when they were not; but there was no difference in performance between familiar and unfamiliar gestures. For both groups, language skills at T1 predicted imitation at T2. Specifically, for DHH children, word reading skills, comprehension and phonological awareness of sign language predicted imitation at T2. For the hearing participants, language comprehension predicted imitation at T2, even after the effects of working memory capacity and motor skills were taken into

  4. Imitation, sign language skill and the Developmental Ease of Language Understanding (D-ELU model

    Directory of Open Access Journals (Sweden)

    Emil eHolmer

    2016-02-01

Imitation and language processing are closely connected. According to the Ease of Language Understanding (ELU) model (Rönnberg et al., 2013), pre-existing mental representation of lexical items facilitates language understanding. Thus, imitation of manual gestures is likely to be enhanced by experience of sign language. We tested this by eliciting imitation of manual gestures from deaf and hard-of-hearing (DHH) signing and hearing non-signing children at a similar level of language and cognitive development. We predicted that the DHH signing children would be better at imitating gestures lexicalized in their own sign language (Swedish Sign Language, SSL) than unfamiliar British Sign Language (BSL) signs, and that both groups would be better at imitating lexical signs (SSL and BSL) than non-signs. We also predicted that the hearing non-signing children would perform worse than DHH signing children with all types of gestures the first time (T1) we elicited imitation, but that the performance gap between groups would be reduced when imitation was elicited a second time (T2). Finally, we predicted that imitation performance on both occasions would be associated with linguistic skills, especially in the manual modality. A split-plot repeated measures ANOVA demonstrated that DHH signers imitated manual gestures with greater precision than non-signing children when imitation was elicited the second but not the first time. Manual gestures were easier to imitate for both groups when they were lexicalized than when they were not; but there was no difference in performance between familiar and unfamiliar gestures. For both groups, language skills at T1 predicted imitation at T2. Specifically, for DHH children, word reading skills, comprehension and phonological awareness of sign language predicted imitation at T2. For the hearing participants, language comprehension predicted imitation at T2, even after the effects of working memory capacity and motor skills

  5. Language policies and sign language translation and interpreting: connections between Brazil and Mozambique

    Directory of Open Access Journals (Sweden)

    Silvana Aguiar dos Santos

    2015-12-01

http://dx.doi.org/10.5007/1984-8420.2015v16n2p101 This paper is the result of an initial attempt to establish a connection between Brazil and Mozambique regarding sign language translation and interpreting. It reviews some important landmarks in language policies aimed at sign languages in these countries and discusses how certain actions directly impact political decisions related to sign language translation and interpreting. In this context, two lines of argument are developed. The first one addresses the role of sign language translation and interpreting in the Portuguese-speaking context, since Portuguese is the official language in both countries; the other offers some reflections about the Deaf movements and the movements of sign language translators and interpreters, the legal recognition of sign languages, the development of undergraduate courses and the contemporary challenges in the work of translation professionals. Finally, it is suggested that sign language translators and interpreters in both Brazil and Mozambique undertake efforts to press government bodies to invest in: (i) area-specific training for translators and interpreters, (ii) qualification of the services provided by such professionals, and (iii) development of human resources at master’s and doctoral levels in order to strengthen research on sign language translation and interpreting in the Community of Portuguese-Speaking Countries.

  6. Research Ethics in Sign Language Communities

    Science.gov (United States)

    Harris, Raychelle; Holmes, Heidi M.; Mertens, Donna M.

    2009-01-01

    Codes of ethics exist for most professional associations whose members do research on, for, or with sign language communities. However, these ethical codes are silent regarding the need to frame research ethics from a cultural standpoint, an issue of particular salience for sign language communities. Scholars who write from the perspective of…

  7. The role of syllables in sign language production.

    Science.gov (United States)

    Baus, Cristina; Gutiérrez, Eva; Carreiras, Manuel

    2014-01-01

The aim of the present study was to investigate the functional role of syllables in sign language and how the different phonological combinations influence sign production. Moreover, the influence of age of acquisition was evaluated. Deaf signers (native and non-native) of Catalan Sign Language (LSC) were asked in a picture-sign interference task to sign picture names while ignoring distractor-signs with which they shared two phonological parameters (out of three of the main sign parameters: Location, Movement, and Handshape). The results revealed a different impact of the three phonological combinations. While no effect was observed for the phonological combination Handshape-Location, the combination Handshape-Movement slowed down signing latencies, but only in the non-native group. A facilitatory effect was observed for both groups when pictures and distractors shared Location-Movement. Importantly, linguistic models have considered this phonological combination to be a privileged unit in the composition of signs, as syllables are in spoken languages. Thus, our results support the functional role of syllable units during phonological articulation in sign language production.

  8. An electronic dictionary of Danish Sign Language

    DEFF Research Database (Denmark)

    Kristoffersen, Jette Hedegaard; Troelsgård, Thomas

    2008-01-01

Compiling sign language dictionaries has in the last 15 years changed from most often being simply collecting and presenting signs for a given gloss in the surrounding vocal language to being a complicated lexicographic task including all parts of linguistic analysis, i.e. phonology, phonetics, morphology, syntax and semantics. In this presentation we will give a short overview of the Danish Sign Language dictionary project. We will further focus on lemma selection and some of the problems connected with lemmatisation…

  9. Opposite cerebral dominance for reading and sign language

    OpenAIRE

    Komakula, Sirisha T.; Burr, Robert B.; Lee, James N.; Anderson, Jeffrey

    2010-01-01

We present a case of right hemispheric dominance for sign language but left hemispheric dominance for reading in a left-handed deaf patient with epilepsy and left mesial temporal sclerosis. Atypical language laterality for ASL was determined by preoperative fMRI and was congruent with ASL-modified Wada testing. We conclude that reading and sign language can show crossed dominance, and that preoperative fMRI evaluation of deaf patients should include both reading and sign language tasks.

  10. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language

    Science.gov (United States)

    Williams, Joshua T.; Newman, Sharlene D.

    2017-01-01

    A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…

  11. Constraints on Negative Prefixation in Polish Sign Language.

    Science.gov (United States)

    Tomaszewski, Piotr

    2015-01-01

    The aim of this article is to describe a negative prefix, NEG-, in Polish Sign Language (PJM) which appears to be indigenous to the language. This is of interest given the relative rarity of prefixes in sign languages. Prefixed PJM signs were analyzed on the basis of both a corpus of texts signed by 15 deaf PJM users who are either native or near-native signers, and material including a specified range of prefixed signs as demonstrated by native signers in dictionary form (i.e. signs produced in isolation, not as part of phrases or sentences). In order to define the morphological rules behind prefixation on both the phonological and morphological levels, native PJM users were consulted for their expertise. The research results can enrich models for describing processes of grammaticalization in the context of the visual-gestural modality that forms the basis for sign language structure.

  12. Phonological Awareness for American Sign Language

    Science.gov (United States)

    Corina, David P.; Hafer, Sarah; Welch, Kearnan

    2014-01-01

    This paper examines the concept of phonological awareness (PA) as it relates to the processing of American Sign Language (ASL). We present data from a recently developed test of PA for ASL and examine whether sign language experience impacts the use of metalinguistic routines necessary for completion of our task. Our data show that deaf signers…

  13. Prosodic differences between declaratives and interrogatives in infant-directed speech.

    Science.gov (United States)

    Geffen, Susan; Mintz, Toben H

    2017-07-01

    In many languages, declaratives and interrogatives differ in word order properties, and in syntactic organization more broadly. Thus, in order to learn the distinct syntactic properties of the two sentence types, learners must first be able to distinguish them using non-syntactic information. Prosodic information is often assumed to be a useful basis for this type of discrimination, although no systematic studies of the prosodic cues available to infants have been reported. Analysis of maternal speech in three Standard American English-speaking mother-infant dyads found that polar interrogatives differed from declaratives on the patterning of pitch and duration on the final two syllables, but wh-questions did not. Thus, while prosody is unlikely to aid discrimination of declaratives from wh-questions, infant-directed speech provides prosodic information that infants could use to distinguish declaratives and polar interrogatives. We discuss how learners could leverage this information to identify all question forms, in the context of syntax acquisition.

  14. Lexical access in sign language: a computational model.

    Science.gov (United States)

    Caselli, Naomi K; Cohen-Goldberg, Ariel M

    2014-01-01

Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.
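The record does not include the model's code, but the core mechanism it names, spreading activation with lateral inhibition among lexical nodes, can be illustrated with a minimal sketch. All node names, weights, the decay rule, and the inhibition values below are invented for the example; they are not taken from Chen and Mirman (2012) or from the authors' model.

```python
# Toy spreading-activation lexicon: a shared sub-lexical node ("loc",
# standing in for a shared Location parameter) feeds two lexical nodes,
# which inhibit each other. Values are illustrative assumptions only.

def spread(activations, links, decay=0.2, steps=5):
    """Iteratively propagate activation along weighted links.

    activations: dict node -> initial activation
    links: dict (src, dst) -> weight (negative weight = lateral inhibition)
    """
    act = dict(activations)
    for _ in range(steps):
        incoming = {n: 0.0 for n in act}
        for (src, dst), w in links.items():
            incoming[dst] += w * act[src]
        # decay pulls activation toward rest (0); floor at 0 keeps it bounded
        act = {n: max(0.0, (1 - decay) * act[n] + incoming[n]) for n in act}
    return act

links = {
    ("loc", "target"): 0.5,       # shared sub-lexical unit supports both words
    ("loc", "neighbor"): 0.5,
    ("target", "neighbor"): -0.3,  # lexical nodes compete via inhibition
    ("neighbor", "target"): -0.3,
}
start = {"loc": 1.0, "target": 0.6, "neighbor": 0.1}
final = spread(start, links)
```

With symmetric inhibition, the node with the stronger initial activation ("target") suppresses its neighbor over time, which is the qualitative behavior such interactive-activation architectures rely on.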

  15. Lexical access in sign language: A computational model

    Directory of Open Access Journals (Sweden)

    Naomi Kenney Caselli

    2014-05-01

Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.

  16. Indonesian Sign Language Number Recognition using SIFT Algorithm

    Science.gov (United States)

    Mahfudi, Isa; Sarosa, Moechammad; Andrie Asmara, Rosa; Azrino Gustalika, M.

    2018-04-01

Indonesian Sign Language (ISL) is used primarily by deaf individuals for everyday communication. It is their primary language and comprises two types of action: signs and fingerspelling. However, because most hearing people do not understand sign language, communication between the two groups is difficult, which contributes to the social isolation of deaf signers. A solution is needed that helps them interact with hearing people. Much research has proposed a variety of image-processing methods for sign language recognition. The SIFT (Scale-Invariant Feature Transform) algorithm is one method that can be used to identify an object; it is claimed to be highly robust to scaling, rotation, illumination changes, and noise. Applying SIFT to Indonesian Sign Language number recognition achieved a recognition rate of 82% on a dataset of 100 sample images, 50 used for training and 50 for testing. Changing the threshold value affects the recognition result: the best threshold value was 0.45, yielding a recognition rate of 94%.
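The abstract does not say where its threshold enters the pipeline; in many SIFT pipelines a distance-ratio test (Lowe's ratio test) plays exactly this role during descriptor matching, so here is a hedged sketch of that step. The toy 2-D "descriptors" are invented (real SIFT descriptors are 128-dimensional and would come from a library such as OpenCV), and the way the threshold is used is an assumption for illustration, not the paper's implementation.

```python
import math

def euclid(a, b):
    """Euclidean distance between two equal-length descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_matches(query_desc, train_desc, threshold=0.45):
    """Keep a query->train match only if (nearest distance / second-nearest
    distance) is below `threshold` (Lowe's ratio test). Lower thresholds
    accept fewer, stricter matches; returns (query_idx, train_idx) pairs."""
    matches = []
    for qi, q in enumerate(query_desc):
        dists = sorted((euclid(q, t), ti) for ti, t in enumerate(train_desc))
        best, second = dists[0], dists[1]
        if second[0] > 0 and best[0] / second[0] < threshold:
            matches.append((qi, best[1]))
    return matches

# Toy data: the first query point has one clearly closest training point;
# the second is ambiguous between two training points and should be rejected
# at a strict threshold.
train = [(0.0, 0.0), (10.0, 10.0), (10.5, 10.0)]
query = [(0.1, 0.0), (10.2, 10.0)]
strict = ratio_test_matches(query, train, threshold=0.45)  # → [(0, 0)]
```

Raising the threshold (e.g. to 0.75) admits the ambiguous second match as well, which mirrors the paper's observation that the threshold value changes the recognition result.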

  17. The Influence of Deaf People's Dual Category Status on Sign Language Planning: The British Sign Language (Scotland) Act (2015)

    Science.gov (United States)

    De Meulder, Maartje

    2017-01-01

    Through the British Sign Language (Scotland) Act, British Sign Language (BSL) was given legal status in Scotland. The main motives for the Act were a desire to put BSL on a similar footing with Gaelic and the fact that in Scotland, BSL signers are the only group whose first language is not English who must rely on disability discrimination…

  18. Sociolinguistic Variation and Change in British Sign Language Number Signs: Evidence of Leveling?

    Science.gov (United States)

    Stamp, Rose; Schembri, Adam; Fenlon, Jordan; Rentelis, Ramas

    2015-01-01

    This article presents findings from the first major study to investigate lexical variation and change in British Sign Language (BSL) number signs. As part of the BSL Corpus Project, number sign variants were elicited from 249 deaf signers from eight sites throughout the UK. Age, school location, and language background were found to be significant…

  19. A Stronger Reason for the Right to Sign Languages

    Science.gov (United States)

    Trovato, Sara

    2013-01-01

    Is the right to sign language only the right to a minority language? Holding a capability (not a disability) approach, and building on the psycholinguistic literature on sign language acquisition, I make the point that this right is of a stronger nature, since only sign languages can guarantee that each deaf child will properly develop the…

  20. Relations between segmental and motor variability in prosodically complex nonword sequences.

    Science.gov (United States)

    Goffman, Lisa; Gerken, Louann; Lucchesi, Julie

    2007-04-01

    To assess how prosodic prominence and hierarchical foot structure influence segmental and articulatory aspects of speech production, specifically segmental accuracy and variability, and oral movement trajectory variability. Thirty individuals participated: 10 young adults, 10 children who are normally developing, and 10 children diagnosed with specific language impairment. Segmental error and segmental variability and movement trajectory variability were compared in low and high prosodic prominence conditions (i.e., strong and weak syllables) and in different prosodic foot structures. Between-participants findings were that both groups of children showed more segmental error and segmental variability and more movement trajectory variability than did adults. A similar within-participant pattern of results was observed for all 3 groups. Prosodic prominence influenced both segmental and motor levels of analysis, with weak syllables produced less accurately and with more lip and jaw movement trajectory variability than strong syllables. However, hierarchical foot structure affected segmental but not motor measures of speech production accuracy and variability. Motor and segmental variables were not consistently aligned. This pattern of results has clinical implications because inferences about motor variability may not directly follow from observations of segmental variability.

  1. Neural Language Processing in Adolescent First-Language Learners: Longitudinal Case Studies in American Sign Language.

    Science.gov (United States)

    Ferjan Ramirez, Naja; Leonard, Matthew K; Davenport, Tristan S; Torres, Christina; Halgren, Eric; Mayberry, Rachel I

    2016-03-01

    One key question in neurolinguistics is the extent to which the neural processing system for language requires linguistic experience during early life to develop fully. We conducted a longitudinal anatomically constrained magnetoencephalography (aMEG) analysis of lexico-semantic processing in 2 deaf adolescents who had no sustained language input until 14 years of age, when they became fully immersed in American Sign Language. After 2 to 3 years of language, the adolescents' neural responses to signed words were highly atypical, localizing mainly to right dorsal frontoparietal regions and often responding more strongly to semantically primed words (Ferjan Ramirez N, Leonard MK, Torres C, Hatrak M, Halgren E, Mayberry RI. 2014. Neural language processing in adolescent first-language learners. Cereb Cortex. 24 (10): 2772-2783). Here, we show that after an additional 15 months of language experience, the adolescents' neural responses remained atypical in terms of polarity. While their responses to less familiar signed words still showed atypical localization patterns, the localization of responses to highly familiar signed words became more concentrated in the left perisylvian language network. Our findings suggest that the timing of language experience affects the organization of neural language processing; however, even in adolescence, language representation in the human brain continues to evolve with experience. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. Monitoring Different Phonological Parameters of Sign Language Engages the Same Cortical Language Network but Distinctive Perceptual Ones.

    Science.gov (United States)

    Cardin, Velia; Orfanidou, Eleni; Kästner, Lena; Rönnberg, Jerker; Woll, Bencie; Capek, Cheryl M; Rudner, Mary

    2016-01-01

    The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine if brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer RTs and stronger activations in an action observation network in all participants and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.

  3. Phonological Similarity in American Sign Language.

    Science.gov (United States)

    Hildebrandt, Ursula; Corina, David

    2002-01-01

    Investigates deaf and hearing subjects' ratings of American Sign Language (ASL) signs to assess whether linguistic experience shapes judgments of sign similarity. Findings are consistent with linguistic theories that posit movement and location as core structural elements of syllable structure in ASL. (Author/VWL)

  4. Examination of Sign Language Education According to the Opinions of Members from a Basic Sign Language Certification Program

    Science.gov (United States)

    Akmese, Pelin Pistav

    2016-01-01

Hearing impairment affects all areas of development, particularly speech, and thereby limits one's ability to communicate. One of the methods the hearing impaired use to communicate is sign language. This descriptive study examines the opinions of individuals who had enrolled in a sign language certification program by using…

  5. Segmentation of British Sign Language (BSL): Mind the gap!

    NARCIS (Netherlands)

    Orfanidou, E.; McQueen, J.M.; Adam, R.; Morgan, G.

    2015-01-01

    This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous

  6. Charisma in business speeches -- A contrastive acoustic-prosodic analysis of Steve Jobs and Mark Zuckerberg

    OpenAIRE

    Niebuhr, Oliver; Brem, Alexander; Novák-Tót, Eszter; Voße, Jana

    2016-01-01

    Charisma is a key component of spoken language interaction; and it is probably for this reason that charismatic speech has been the subject of intensive research for centuries. However, what is still largely missing is a quantitative and objective line of research that, firstly, involves analyses of the acoustic-prosodic signal, secondly, focuses on business speeches like product presentations, and, thirdly, in doing so, advances the still fairly fragmentary evidence on the prosodic correlate...

  7. On the Conventionalization of Mouth Actions in Australian Sign Language.

    Science.gov (United States)

    Johnston, Trevor; van Roekel, Jane; Schembri, Adam

    2016-03-01

This study investigates the conventionalization of mouth actions in Australian Sign Language. Signed languages were once thought of as simply manual languages because the hands produce the signs which, individually and in groups, are the symbolic units most easily equated with the words, phrases and clauses of spoken languages. However, it has long been acknowledged that non-manual activity, such as movements of the body, head and face, plays a very important role. In this context, mouth actions that occur while communicating in signed languages have posed a number of questions for linguists: are the silent mouthings of spoken language words simply borrowings from the respective majority community spoken language(s)? Are those mouth actions that are not silent mouthings of spoken words conventionalized linguistic units proper to each signed language, culturally linked semi-conventional gestural units shared by signers with members of the majority speaking community, or even gestures and expressions common to all humans? We use a corpus-based approach to gather evidence of the extent of the use of mouth actions in naturalistic Australian Sign Language, making comparisons with other signed languages where data is available, and of the form/meaning pairings that these mouth actions instantiate.

  8. Understanding the Contributions of Prosodic Phonology to Morphological Development: Implications for Children with Specific Language Impairment

    Science.gov (United States)

    Demuth, Katherine; Tomas, Ekaterina

    2016-01-01

    A growing body of research with typically developing children has begun to show that the acquisition of grammatical morphemes interacts not only with a developing knowledge of syntax, but also with developing abilities at the interface with prosodic phonology. In particular, a Prosodic Licensing approach to these issues provides a framework for…

  9. On the System of Place Name Signs in Estonian Sign Language

    Directory of Open Access Journals (Sweden)

    Liina Paales

    2011-05-01

A place name sign is a linguistic-cultural marker that includes both memory and landscape. The author regards toponymic signs in Estonian Sign Language as representations of images held by the Estonian Deaf community: they reflect the geographical place, the period, the relationships of the Deaf community with the hearing community, and the common and distinguishing features of the two cultures perceived by the community's members. Name signs represent an element of signlore, which includes various types of creative linguistic play. There are stories hidden behind the place name signs that reveal their etymological origin and reflect the community's memory. The purpose of this article is twofold. Firstly, it introduces Estonian place name signs as Deaf signlore forms, analyses their structure and specifies the main formation methods. Secondly, it interprets place-denoting signs in the light of the foundations of Estonian Sign Language, Estonian Deaf education and education history, the traditions of local Deaf communities, and the cultural and local traditions of the dominant hearing communities. Both perspectives, linguistic and folkloristic, are represented in the current article.

  10. Validity of the American Sign Language Discrimination Test

    Science.gov (United States)

    Bochner, Joseph H.; Samar, Vincent J.; Hauser, Peter C.; Garrison, Wayne M.; Searls, J. Matt; Sanders, Cynthia A.

    2016-01-01

    American Sign Language (ASL) is one of the most commonly taught languages in North America. Yet, few assessment instruments for ASL proficiency have been developed, none of which have adequately demonstrated validity. We propose that the American Sign Language Discrimination Test (ASL-DT), a recently developed measure of learners' ability to…

  11. The Mechanics of Fingerspelling: Analyzing Ethiopian Sign Language

    Science.gov (United States)

    Duarte, Kyle

    2010-01-01

    Ethiopian Sign Language utilizes a fingerspelling system that represents Amharic orthography. Just as each character of the Amharic abugida encodes a consonant-vowel sound pair, each sign in the Ethiopian Sign Language fingerspelling system uses handshape to encode a base consonant, as well as a combination of timing, placement, and orientation to…

  12. Sign Languages of the World

    DEFF Research Database (Denmark)

    This handbook provides information on some 38 sign languages, including basic facts about each of the languages, structural aspects, history and culture of the Deaf communities, and history of research. The papers are all original, and each has been specifically written for the volume by an expert...

  13. Segmentation of British Sign Language (BSL): Mind the gap!

    OpenAIRE

    Orfanidou, E.; McQueen, J.; Adam, R.; Morgan, G.

    2015-01-01

    This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were prec...

  14. Equity in Education: Signed Language and the Courts

    Science.gov (United States)

    Snoddon, Kristin

    2009-01-01

    This article examines several legal cases in Canada, the USA, and Australia involving signed language in education for Deaf students. In all three contexts, signed language rights for Deaf students have been viewed from within a disability legislation framework that either does not extend to recognizing language rights in education or that…

  15. The Influence of Prosodic Stress Patterns and Semantic Depth on Novel Word Learning in Typically Developing Children.

    Science.gov (United States)

    Gladfelter, Allison; Goffman, Lisa

    2013-01-01

The goal of this study was to investigate the effects of prosodic stress patterns and semantic depth on word learning. Twelve preschool-aged children with typically developing speech and language skills participated in a word learning task. Novel words with either a trochaic or iambic stress pattern were embedded in one of two learning conditions: children's stories (semantically rich) or picture matching games (semantically sparse). Three main analyses were used to measure word learning: comprehension and production probes, phonetic accuracy, and speech motor stability. Results revealed that prosodic frequency and density influence the learnability of novel words; that is, there are prosodic neighborhood density effects. The impact of semantic depth on word learning was minimal and likely depends on the amount of experience with the novel words.

  16. A human mirror neuron system for language: Perspectives from signed languages of the deaf.

    Science.gov (United States)

    Knapp, Heather Patterson; Corina, David P

    2010-01-01

Language is proposed to have developed atop the human analog of the macaque mirror neuron system for action perception and production [Arbib M.A. 2005. From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics (with commentaries and author's response). Behavioral and Brain Sciences, 28, 105-167; Arbib M.A. (2008). From grasp to language: Embodied concepts and the challenge of abstraction. Journal de Physiologie Paris 102, 4-20]. Signed languages of the deaf are fully-expressive, natural human languages that are perceived visually and produced manually. We suggest that if a unitary mirror neuron system mediates the observation and production of both language and non-linguistic action, three predictions can be made: (1) damage to the human mirror neuron system should non-selectively disrupt both sign language and non-linguistic action processing; (2) within the domain of sign language, a given mirror neuron locus should mediate both perception and production; and (3) the action-based tuning curves of individual mirror neurons should support the highly circumscribed set of motions that form the "vocabulary of action" for signed languages. In this review we evaluate data from the sign language and mirror neuron literatures and find that these predictions are only partially upheld. 2009 Elsevier Inc. All rights reserved.

  17. Cross-Linguistic Differences in Prosodic Cues to Syntactic Disambiguation in German and English

    Science.gov (United States)

    O'Brien, Mary Grantham; Jackson, Carrie N.; Gardner, Christine E.

    2014-01-01

    This study examined whether late-learning English-German second language (L2) learners and late-learning German-English L2 learners use prosodic cues to disambiguate temporarily ambiguous first language and L2 sentences during speech production. Experiments 1a and 1b showed that English-German L2 learners and German-English L2 learners used a…

  18. [Information technology in learning sign language].

    Science.gov (United States)

    Hernández, Cesar; Pulido, Jose L; Arias, Jorge E

    2015-01-01

To develop a technological tool that improves the initial learning of sign language in hearing-impaired children. The research was conducted in three phases: requirements gathering, design and development of the proposed device, and validation and evaluation of the device. Through the use of information technology and with the advice of special education professionals, we developed an electronic device that facilitates the learning of sign language in deaf children. It consists mainly of a graphic touch screen, a voice synthesizer, and a voice recognition system. Validation was performed with deaf children at the Filadelfia School in Bogotá. A learning methodology was established that improves learning times through a small, portable, lightweight, and educational technological prototype. Tests showed the effectiveness of the prototype, achieving a 32% reduction in the initial learning time for sign language in deaf children.

  19. A Closer Look at Formulaic Language: Prosodic Characteristics of Swedish Proverbs

    Science.gov (United States)

    Hallin, Anna Eva; Van Lancker Sidtis, Diana

    2017-01-01

    Formulaic expressions (such as idioms, proverbs, and conversational speech formulas) are currently a topic of interest. Examination of prosody in formulaic utterances, a less explored property of formulaic expressions, has yielded controversial views. The present study investigates prosodic characteristics of proverbs, as one type of formulaic…

  20. Historical Development of Hong Kong Sign Language

    Science.gov (United States)

    Sze, Felix; Lo, Connie; Lo, Lisa; Chu, Kenny

    2013-01-01

    This article traces the origins of Hong Kong Sign Language (hereafter HKSL) and its subsequent development in relation to the establishment of Deaf education in Hong Kong after World War II. We begin with a detailed description of the history of Deaf education with a particular focus on the role of sign language in such development. We then…

  1. Prosodic constraints on inflected words: an area of difficulty for German-speaking children with specific language impairment?

    Science.gov (United States)

    Kauschke, Christina; Renner, Lena; Domahs, Ulrike

    2013-08-01

    Recent studies suggest that morphosyntactic difficulties may result from prosodic problems. We therefore address the interface between inflectional morphology and prosody in typically developing children (TD) and children with SLI by testing whether these groups are sensitive to prosodic constraints that guide plural formation in German. A plural elicitation task was designed consisting of 60 words and 20 pseudowords. The performance of 14 German-speaking children with SLI (mean age 7.5) was compared to age-matched controls and to younger children matched for productive vocabulary. TD children performed significantly better than children with SLI. Error analyses revealed that children with SLI produced more forms that did not meet the optimal shape of a noun plural. Beyond the fact that children with SLI have deficits in plural marking, the findings suggest that they also show reduced sensitivity to prosodic requirements. In other words, the prosodic structure of inflected words seems to be vulnerable in children with SLI.

  2. A Kinect-Based Sign Language Hand Gesture Recognition System for Hearing- and Speech-Impaired: A Pilot Study of Pakistani Sign Language.

    Science.gov (United States)

    Halim, Zahid; Abbas, Ghulam

    2015-01-01

Sign language provides hearing- and speech-impaired individuals with an interface to communicate with other members of society. Unfortunately, sign language is not understood by most people. For this reason, a gadget based on image processing and pattern recognition can provide a vital aid for detecting and translating sign language into a vocal language. This work presents a system for detecting and understanding sign language gestures with a custom-built software tool and then translating the gestures into a vocal language. To recognize a particular gesture, the system employs a Dynamic Time Warping (DTW) algorithm, and an off-the-shelf software tool is employed for vocal language generation. A Microsoft® Kinect is the primary tool used to capture the video stream of a user. The proposed method successfully detects gestures stored in the dictionary with an accuracy of 91%. The system also allows users to define and add custom gestures. In an experiment in which 10 individuals with impairments used the system to communicate with 5 people without disabilities, 87% agreed that the system was useful.
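The record names DTW but gives no implementation details. The sketch below shows classic DTW and nearest-template classification, the usual shape of such a recognizer; the 1-D trajectories and template names are invented for illustration (a real system would compare multi-dimensional Kinect joint coordinates).

```python
def dtw_distance(a, b):
    """Classic dynamic time warping between two 1-D sequences.
    Returns the minimal cumulative alignment cost, allowing one sequence
    to be locally stretched or compressed relative to the other."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignment moves
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify(sample, templates):
    """Nearest-template gesture classification: return the name of the
    dictionary entry with the smallest DTW distance to the input."""
    return min(templates, key=lambda name: dtw_distance(sample, templates[name]))

# Invented gesture dictionary and a time-warped input trajectory.
templates = {"wave": [0, 1, 0, 1, 0], "push": [0, 2, 4, 4, 4]}
sample = [0, 1, 1, 0, 1, 0]   # a "wave" performed slightly more slowly
label = classify(sample, templates)  # → "wave"
```

Because DTW absorbs differences in execution speed, the slowed-down sample still aligns perfectly with the "wave" template, which is why DTW is a common choice for gesture dictionaries like the one described here.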

  3. Sign Language with Babies: What Difference Does It Make?

    Science.gov (United States)

    Barnes, Susan Kubic

    2010-01-01

    Teaching sign language--to deaf or other children with special needs or to hearing children with hard-of-hearing family members--is not new. Teaching sign language to typically developing children has become increasingly popular since the publication of "Baby Signs"[R] (Goodwyn & Acredolo, 1996), now in its third edition. Attention to signing with…

  4. Brain correlates of constituent structure in sign language comprehension.

    Science.gov (United States)

    Moreno, Antonio; Limousin, Fanny; Dehaene, Stanislas; Pallier, Christophe

    2018-02-15

    During sentence processing, areas of the left superior temporal sulcus, inferior frontal gyrus and left basal ganglia exhibit a systematic increase in brain activity as a function of constituent size, suggesting their involvement in the computation of syntactic and semantic structures. Here, we asked whether these areas play a universal role in language and therefore contribute to the processing of non-spoken sign language. Congenitally deaf adults who acquired French sign language as a first language and written French as a second language were scanned while watching sequences of signs in which the size of syntactic constituents was manipulated. An effect of constituent size was found in the basal ganglia, including the head of the caudate and the putamen. A smaller effect was also detected in temporal and frontal regions previously shown to be sensitive to constituent size in written language in hearing French subjects (Pallier et al., 2011). When the deaf participants read sentences versus word lists, the same network of language areas was observed. While reading and sign language processing yielded identical effects of linguistic structure in the basal ganglia, the effect of structure was stronger in all cortical language areas for written language relative to sign language. Furthermore, cortical activity was partially modulated by age of acquisition and reading proficiency. Our results stress the important role of the basal ganglia, within the language network, in the representation of the constituent structure of language, regardless of the input modality. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Age-related changes to spectral voice characteristics affect judgments of prosodic, segmental, and talker attributes for child and adult speech

    Science.gov (United States)

    Dilley, Laura C.; Wieland, Elizabeth A.; Gamache, Jessica L.; McAuley, J. Devin; Redford, Melissa A.

    2013-01-01

    Purpose As children mature, changes in voice spectral characteristics covary with changes in speech, language, and behavior. Spectral characteristics were manipulated to alter the perceived ages of talkers’ voices while leaving critical acoustic-prosodic correlates intact, to determine whether perceived age differences were associated with differences in judgments of prosodic, segmental, and talker attributes. Method Speech was modified by lowering formants and fundamental frequency, for 5-year-old children’s utterances, or raising them, for adult caregivers’ utterances. Next, participants differing in awareness of the manipulation (Exp. 1a) or amount of speech-language training (Exp. 1b) made judgments of prosodic, segmental, and talker attributes. Exp. 2 investigated the effects of spectral modification on intelligibility. Finally, in Exp. 3 trained analysts used formal prosody coding to assess prosodic characteristics of spectrally-modified and unmodified speech. Results Differences in perceived age were associated with differences in ratings of speech rate, fluency, intelligibility, likeability, anxiety, cognitive impairment, and speech-language disorder/delay; effects of training and awareness of the manipulation on ratings were limited. There were no significant effects of the manipulation on intelligibility or formally coded prosody judgments. Conclusions Age-related voice characteristics can greatly affect judgments of speech and talker characteristics, raising cautionary notes for developmental research and clinical work. PMID:23275414

  6. Pauses in theatrical interpretation: delimitation of prosodic constituents

    Directory of Open Access Journals (Sweden)

    Lourenço Chacon

    2014-07-01

    Full Text Available We intend to observe the function of a linguistic resource – the pause – in theatrical interpretation. Coming from the field of speech therapy, we seek theoretical support in linguistics, mainly in prosodic phonology – specifically the prosodic constituents intonational phrase and phonological utterance – proposing a dialogue between these fields with regard to work with actors. In the speech therapy literature, work with actors focuses centrally on organic issues involved in the vocal process, such as "misuse" or "voice abuse". To a smaller extent, we find studies that emphasize issues of interpretation and expressive resources, and a few that emphasize the importance of linguistic resources in interpretation. In the linguistics literature, by contrast, the pause is approached largely from a phonetic perspective, related to several levels of language. In this research, we analyzed audio recordings of four actors from the same theatrical group performing the theatrical text Brutas flores, with these aims: (1) detect where pauses occur in the interpretation of a single text by four actors; (2) survey the physical duration of these pauses; (3) check to what extent the duration of a pause is related to where it occurs relative to the prosodic limits of intonational phrases (I) and the phonological utterance (U). We observed that, although an interpretation is characterized by the subjectivity of the actor, it is constructed within the possibilities offered by the prosodic organization of the text itself, being more or less flexible. By considering the duration of VV units containing pauses, we were also able to confirm the prosodic hierarchy proposed by Nespor & Vogel, since the duration of these units at U boundaries was significantly higher than at I boundaries. Thus, our results reinforce the premise that a

  7. Compiling a Sign Language Dictionary

    DEFF Research Database (Denmark)

    Kristoffersen, Jette Hedegaard; Troelsgård, Thomas

    2010-01-01

    As we began working on the Danish Sign Language (DTS) Dictionary, we soon realised the truth in the statement that a lexicographer has to deal with problems within almost any linguistic discipline. Most of these problems come down to establishing simple rules, rules that can easily be applied every...... – or are they homonyms?" and so on. Very often such questions demand further research and can't be answered sufficiently through a simple standard formula. Therefore lexicographic work often seems like an endless series of compromises. Another source of compromise arises when you set out to decide which information...... this dilemma, as we see DTS learners and teachers as well as native DTS signers as our target users. In the following we will focus on four problem areas with particular relevance for the sign language lexicographer: Sign representation Spoken language equivalents and mouth movements Example sentences Partial...

  8. LSE-Sign: A lexical database for Spanish Sign Language.

    Science.gov (United States)

    Gutierrez-Sigut, Eva; Costello, Brendan; Baus, Cristina; Carreiras, Manuel

    2016-03-01

    The LSE-Sign database is a free online tool for selecting Spanish Sign Language stimulus materials to be used in experiments. It contains 2,400 individual signs taken from a recent standardized LSE dictionary, and a further 2,700 related nonsigns. Each entry is coded for a wide range of grammatical, phonological, and articulatory information, including handshape, location, movement, and non-manual elements. The database is accessible via a graphically based search facility which is highly flexible both in terms of the search options available and the way the results are displayed. LSE-Sign is available at the following website: http://www.bcbl.eu/databases/lse/.

  9. Recognition of sign language gestures using neural networks

    OpenAIRE

    Simon Vamplew

    2007-01-01

    This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures.

  10. Introduction: Sign Language, Sustainable Development, and Equal Opportunities

    Science.gov (United States)

    De Clerck, Goedele A. M.

    2017-01-01

    This article has been excerpted from "Introduction: Sign Language, Sustainable Development, and Equal Opportunities" (De Clerck) in "Sign Language, Sustainable Development, and Equal Opportunities: Envisioning the Future for Deaf Students" (G. A. M. De Clerck & P. V. Paul (Eds.) 2016). The idea of exploring various…

  11. Legal and Ethical Imperatives for Using Certified Sign Language Interpreters in Health Care Settings: How to "Do No Harm" When "It's (All) Greek" (Sign Language) to You.

    Science.gov (United States)

    Nonaka, Angela M

    2016-09-01

    Communication obstacles in health care settings adversely impact patient-practitioner interactions by impeding service efficiency, reducing mutual trust and satisfaction, or even endangering health outcomes. When interlocutors are separated by language, interpreters are required. The efficacy of interpreting, however, is constrained not just by interpreters' competence but also by health care providers' facility working with interpreters. Deaf individuals whose preferred form of communication is a signed language often encounter communicative barriers in health care settings. In those environments, signing Deaf people are entitled to equal communicative access via sign language interpreting services according to the Americans with Disabilities Act and Executive Order 13166, the Limited English Proficiency Initiative. Yet, litigation in states across the United States suggests that individual and institutional providers remain uncertain about their legal obligations to provide equal communicative access. This article discusses the legal and ethical imperatives for using professionally certified (vs. ad hoc) sign language interpreters in health care settings. First outlining the legal terrain governing provision of sign language interpreting services, the article then describes different types of "sign language" (e.g., American Sign Language vs. manually coded English) and different forms of "sign language interpreting" (e.g., interpretation vs. transliteration vs. translation; simultaneous vs. consecutive interpreting; individual vs. team interpreting). This is followed by reviews of the formal credentialing process and of specialized forms of sign language interpreting – that is, certified deaf interpreting, trilingual interpreting, and court interpreting. After discussing practical steps for contracting professional sign language interpreters and addressing ethical issues of confidentiality, this article concludes by offering suggestions for working more effectively

  12. Children creating language: how Nicaraguan sign language acquired a spatial grammar.

    Science.gov (United States)

    Senghas, A; Coppola, M

    2001-07-01

    It has long been postulated that language is not purely learned, but arises from an interaction between environmental exposure and innate abilities. The innate component becomes more evident in rare situations in which the environment is markedly impoverished. The present study investigated the language production of a generation of deaf Nicaraguans who had not been exposed to a developed language. We examined the changing use of early linguistic structures (specifically, spatial modulations) in a sign language that has emerged since the Nicaraguan group first came together: in under two decades, sequential cohorts of learners systematized the grammar of this new sign language. We examined whether the systematicity being added to the language stems from children or adults; our results indicate that such changes originate in children aged 10 and younger. Thus, sequential cohorts of interacting young children collectively possess the capacity not only to learn, but also to create, language.

  13. Sign Language and Language Acquisition in Man and Ape. New Dimensions in Comparative Pedolinguistics.

    Science.gov (United States)

    Peng, Fred C. C., Ed.

    A collection of research materials on sign language and primatology is presented here. The essays attempt to show that: sign language is a legitimate language that can be learned not only by humans but by nonhuman primates as well, and nonhuman primates have the capability to acquire a human language using a different mode. The following…

  14. Structural borrowing: The case of Kenyan Sign Language (KSL) and ...

    African Journals Online (AJOL)

    Kenyan Sign Language (KSL) is a visual gestural language used by members of the deaf community in Kenya. Kiswahili on the other hand is a Bantu language that is used as the national language of Kenya. The two are worlds apart, one being a spoken language and the other a signed language and thus their “… basic ...

  15. Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying

    Science.gov (United States)

    Barberà, Gemma; Zwets, Martine

    2013-01-01

    In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…

  16. Neural correlates of British sign language comprehension: spatial processing demands of topographic language.

    Science.gov (United States)

    MacSweeney, Mairéad; Woll, Bencie; Campbell, Ruth; Calvert, Gemma A; McGuire, Philip K; David, Anthony S; Simmons, Andrew; Brammer, Michael J

    2002-10-01

    In all signed languages used by deaf people, signs are executed in "sign space" in front of the body. Some signed sentences use this space to map detailed "real-world" spatial relationships directly. Such sentences can be considered to exploit sign space "topographically." Using functional magnetic resonance imaging, we explored the extent to which increasing the topographic processing demands of signed sentences was reflected in the differential recruitment of brain regions in deaf and hearing native signers of British Sign Language (BSL). When BSL signers performed a sentence anomaly judgement task, the occipito-temporal junction was activated bilaterally to a greater extent for topographic than nontopographic processing. The differential role of movement in the processing of the two sentence types may account for this finding. In addition, enhanced activation was observed in the left inferior and superior parietal lobules during processing of topographic BSL sentences. We argue that the left parietal lobe is specifically involved in processing the precise configuration and location of hands in space to represent objects, agents, and actions. Importantly, no differences in these regions were observed when hearing people heard and saw English translations of these sentences. Despite the high degree of similarity in the neural systems underlying signed and spoken languages, exploring the linguistic features which are unique to each of these broadens our understanding of the systems involved in language comprehension.

  17. Recognition of sign language gestures using neural networks

    Directory of Open Access Journals (Sweden)

    Simon Vamplew

    2007-04-01

    Full Text Available This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures.
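
    The modular design described above, several feature-recognition networks whose outputs feed a nearest-neighbour classifier, can be sketched structurally as follows. This is an illustrative sketch only: the class name, the stand-in feature extractors (plain callables in place of SLARTI's trained neural networks), and the Euclidean metric are assumptions.

```python
import numpy as np

class ModularSignRecogniser:
    """Structural sketch of a modular recogniser: several feature
    classifiers feed a single nearest-neighbour sign classifier."""

    def __init__(self, feature_nets, exemplars, labels):
        # feature_nets: callables mapping raw sensor input to a feature
        # sub-vector (stand-ins for trained networks over e.g. handshape,
        # orientation, location, and motion)
        self.feature_nets = feature_nets
        self.exemplars = np.asarray(exemplars, dtype=float)  # one vector per known sign
        self.labels = labels

    def features(self, raw):
        """Concatenate the outputs of all feature modules."""
        return np.concatenate([np.asarray(net(raw), dtype=float)
                               for net in self.feature_nets])

    def recognise(self, raw):
        """Return the label of the nearest stored exemplar."""
        f = self.features(raw)
        dists = np.linalg.norm(self.exemplars - f, axis=1)
        return self.labels[int(np.argmin(dists))]
```

    Splitting recognition into per-feature modules keeps each network small and lets the final classifier tolerate errors in any single module, which is one common motivation for such architectures.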

  18. Teaching and Learning Sign Language as a “Foreign” Language ...

    African Journals Online (AJOL)

    In recent years, there has been a growing debate in the United States, Europe, and Australia about the nature of the Deaf community as a cultural community,1 and the recognition of signed languages as “real” or “legitimate” languages comparable in all meaningful ways to spoken languages. An important element of this ...

  19. About using serious games to teach (Portuguese) sign language

    OpenAIRE

    Gameiro, João Manuel Ferreira

    2014-01-01

    Sign language is the form of communication used by Deaf people, in most cases learned since childhood. The problem arises when a non-Deaf person tries to communicate with a Deaf person: for example, when non-Deaf parents try to communicate with their Deaf child. In most cases, this situation arises when the parents did not have time to properly learn sign language. This dissertation proposes the teaching of sign language through the usage of serious games. Currently, similar soluti...

  20. New Perspectives on the History of American Sign Language

    Science.gov (United States)

    Shaw, Emily; Delaporte, Yves

    2011-01-01

    Examinations of the etymology of American Sign Language have typically involved superficial analyses of signs as they exist over a short period of time. While it is widely known that ASL is related to French Sign Language, there has yet to be a comprehensive study of this historic relationship between their lexicons. This article presents…

  1. Deficits in narrative abilities in child British Sign Language users with specific language impairment.

    Science.gov (United States)

    Herman, Ros; Rowley, Katherine; Mason, Kathryn; Morgan, Gary

    2014-01-01

    This study details the first ever investigation of narrative skills in a group of 17 deaf signing children who have been diagnosed with disorders in their British Sign Language development compared with a control group of 17 deaf child signers matched for age, gender, education, quantity, and quality of language exposure and non-verbal intelligence. Children were asked to generate a narrative based on events in a language free video. Narratives were analysed for global structure, information content and local level grammatical devices, especially verb morphology. The language-impaired group produced shorter, less structured and grammatically simpler narratives than controls, with verb morphology particularly impaired. Despite major differences in how sign and spoken languages are articulated, narrative is shown to be a reliable marker of language impairment across the modality boundaries. © 2014 Royal College of Speech and Language Therapists.

  2. ERP correlates of German Sign Language processing in deaf native signers.

    Science.gov (United States)

    Hänel-Faulhaber, Barbara; Skotara, Nils; Kügow, Monique; Salden, Uta; Bottari, Davide; Röder, Brigitte

    2014-05-10

    The present study investigated the neural correlates of sign language processing of Deaf people who had learned German Sign Language (Deutsche Gebärdensprache, DGS) from their Deaf parents as their first language. Correct and incorrect signed sentences were presented sign by sign on a computer screen. At the end of each sentence the participants had to judge whether or not the sentence was an appropriate DGS sentence. Two types of violations were introduced: (1) semantically incorrect sentences containing a selectional restriction violation (implausible object); (2) morphosyntactically incorrect sentences containing a verb that was incorrectly inflected (i.e., incorrect direction of movement). Event-related brain potentials (ERPs) were recorded from 74 scalp electrodes. Semantic violations (implausible signs) elicited an N400 effect followed by a positivity. Sentences with a morphosyntactic violation (verb agreement violation) elicited a negativity followed by a broad centro-parietal positivity. ERP correlates of semantic and morphosyntactic aspects of DGS clearly differed from each other and showed a number of similarities with those observed in other signed and oral languages. These data suggest a similar functional organization of signed and oral languages despite the visual-spatial modality of sign language.

  3. The emergence of temporal language in Nicaraguan Sign Language.

    Science.gov (United States)

    Kocab, Annemarie; Senghas, Ann; Snedeker, Jesse

    2016-11-01

    Understanding what uniquely human properties account for the creation and transmission of language has been a central goal of cognitive science. Recently, the study of emerging sign languages, such as Nicaraguan Sign Language (NSL), has offered the opportunity to better understand how languages are created and the roles of the individual learner and the community of users. Here, we examined the emergence of two types of temporal language in NSL, comparing the linguistic devices for conveying temporal information among three sequential age cohorts of signers. Experiment 1 showed that while all three cohorts of signers could communicate about linearly ordered discrete events, only the second and third generations of signers successfully communicated information about events with more complex temporal structure. Experiment 2 showed that signers could discriminate between the types of temporal events in a nonverbal task. Finally, Experiment 3 investigated the ordinal use of numbers (e.g., first, second) in NSL signers, indicating that one strategy younger signers might have for accurately describing events in time might be to use ordinal numbers to mark each event. While the capacity for representing temporal concepts appears to be present in the human mind from the onset of language creation, the linguistic devices to convey temporality do not appear immediately. Evidently, temporal language emerges over generations of language transmission, as a product of individual minds interacting within a community of users. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Language Policies in Uruguay and Uruguayan Sign Language (LSU)

    Science.gov (United States)

    Behares, Luis Ernesto; Brovetto, Claudia; Crespi, Leonardo Peluso

    2012-01-01

    In the first part of this article the authors consider the policies that apply to Uruguayan Sign Language (Lengua de Senas Uruguaya; hereafter LSU) and the Uruguayan Deaf community within the general framework of language policies in Uruguay. By analyzing them succinctly and as a whole, the authors then explain twenty-first-century innovations.…

  5. The "SignOn"-Model for Teaching Written Language to Deaf People

    Directory of Open Access Journals (Sweden)

    Marlene Hilzensauer

    2012-08-01

    Full Text Available This paper shows a method of teaching written language to deaf people using sign language as the language of instruction. Written texts in the target language are combined with sign language videos which provide the users with various modes of translation (words/phrases/sentences). As examples, two EU projects for English for the Deaf are presented which feature English texts and translations into the national sign languages of all the partner countries plus signed grammar explanations and interactive exercises. Both courses are web-based; the programs may be accessed free of charge via the respective homepages (without any download or log-in).

  6. Discourses of prejudice in the professions: the case of sign languages.

    Science.gov (United States)

    Humphries, Tom; Kushalnagar, Poorna; Mathur, Gaurav; Napoli, Donna Jo; Padden, Carol; Rathmann, Christian; Smith, Scott

    2017-09-01

    There is no evidence that learning a natural human language is cognitively harmful to children. To the contrary, multilingualism has been argued to be beneficial to all. Nevertheless, many professionals advise the parents of deaf children that their children should not learn a sign language during their early years, despite strong evidence across many research disciplines that sign languages are natural human languages. Their recommendations are based on a combination of misperceptions about (1) the difficulty of learning a sign language, (2) the effects of bilingualism, and particularly bimodalism, (3) the bona fide status of languages that lack a written form, (4) the effects of a sign language on acquiring literacy, (5) the ability of technologies to address the needs of deaf children and (6) the effects that use of a sign language will have on family cohesion. We expose these misperceptions as based in prejudice and urge institutions involved in educating professionals concerned with the healthcare, raising and educating of deaf children to include appropriate information about first language acquisition and the importance of a sign language for deaf children. We further urge such professionals to advise the parents of deaf children properly, which means to strongly advise the introduction of a sign language as soon as hearing loss is detected. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  7. Legal Pathways to the Recognition of Sign Languages: A Comparison of the Catalan and Spanish Sign Language Acts

    Science.gov (United States)

    Quer, Josep

    2012-01-01

    Despite being minority languages like many others, sign languages have traditionally remained absent from the agendas of policy makers and language planning and policies. In the past two decades, though, this situation has started to change at different paces and to different degrees in several countries. In this article, the author describes the…

  8. A Kinect based sign language recognition system using spatio-temporal features

    Science.gov (United States)

    Memiş, Abbas; Albayrak, Songül

    2013-12-01

    This paper presents a sign language recognition system that uses spatio-temporal features on RGB video images and depth maps for dynamic gestures of Turkish Sign Language (TSL). The proposed system uses a motion-difference and accumulation approach for temporal gesture analysis. The motion accumulation method, an effective method for temporal-domain analysis of gestures, produces an accumulated motion image by combining differences of successive video frames. A 2D Discrete Cosine Transform (DCT) is then applied to the accumulated motion images, transforming the temporal-domain features into the spatial domain. These processes are performed separately on RGB images and on depth maps. DCT coefficients that represent sign gestures are picked up via zigzag scanning to generate feature vectors. To recognize sign gestures, a K-Nearest Neighbor classifier with Manhattan distance is employed. Performance of the proposed sign language recognition system is evaluated on a sign database containing 1002 isolated dynamic signs belonging to 111 words of TSL in three different categories. The proposed system achieves promising success rates.
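
    A minimal sketch of this pipeline (motion accumulation, 2D DCT, coefficient selection, and a Manhattan-distance k-NN vote) follows. All function names are assumptions rather than the authors' code, and the coefficient ordering here is a simplified diagonal scan standing in for the strict alternating zigzag.

```python
import numpy as np
from scipy.fft import dctn

def accumulated_motion_image(frames):
    """Sum absolute differences of successive grayscale frames."""
    frames = np.asarray(frames, dtype=float)
    return np.abs(np.diff(frames, axis=0)).sum(axis=0)

def zigzag_features(image, n_coeffs=32):
    """2D DCT of the motion image, low-frequency coefficients first."""
    coeffs = dctn(image, norm="ortho")
    h, w = coeffs.shape
    # order coefficients by anti-diagonal index (simplified zigzag)
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda ij: (ij[0] + ij[1], ij[0]))
    return np.array([coeffs[i, j] for i, j in order[:n_coeffs]])

def knn_manhattan(query, train_feats, train_labels, k=1):
    """k-nearest-neighbour vote under the Manhattan (L1) distance."""
    dists = np.abs(np.asarray(train_feats) - query).sum(axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(np.asarray(train_labels)[nearest],
                               return_counts=True)
    return labels[np.argmax(counts)]
```

    Collapsing each gesture video to one accumulated motion image makes signs of different durations comparable with a fixed-length feature vector, which is what allows a simple k-NN classifier to be used at the end.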

  9. Dictionary of the Slovenian Sign Language on the WWW

    OpenAIRE

    Cempre, Luka; Bešir, Aleksander; Solina, Franc

    2013-01-01

    The article describes technical and user-interface issues of transferring the contents and functionality of the CD-ROM version of the Slovenian sign language dictionary to the web. The dictionary of Slovenian sign language consists of video clips showing the demonstration of signs that deaf people use for communication, text descriptions of the words corresponding to the signs and pictures illustrating the same word/sign. A new technical solution—a video sprite—for concatenating subsections o...

  10. The Birth and Rebirth of "Sign Language Studies"

    Science.gov (United States)

    Armstrong, David F.

    2012-01-01

    As most readers of this journal are aware, "Sign Language Studies" ("SLS") served for many years as effectively the only serious scholarly outlet for work in the nascent field of sign language linguistics. Now reaching its 40th anniversary, the journal was founded by William C. Stokoe and then edited by him for the first quarter century of its…

  11. Signs of Resistance: Peer Learning of Sign Languages within "Oral" Schools for the Deaf

    Science.gov (United States)

    Anglin-Jaffe, Hannah

    2013-01-01

    This article explores the role of the Deaf child as peer educator. In schools where sign languages were banned, Deaf children became the educators of their Deaf peers in a number of contexts worldwide. This paper analyses how this peer education of sign language worked in context by drawing on two examples from boarding schools for the deaf in…

  12. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    Science.gov (United States)

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface-level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language

  13. How to describe mouth patterns in the Danish Sign Language Dictionary

    DEFF Research Database (Denmark)

    Kristoffersen, Jette Hedegaard; Boye Niemela, Janne

    2008-01-01

    The Danish Sign Language dictionary project aims at creating an electronic dictionary of the basic vocabulary of Danish Sign Language. One of many issues in compiling the dictionary has been to analyse the status of mouth patterns in Danish Sign Language and, consequently, to decide at which level...

  14. Iconicity as a general property of language: evidence from spoken and signed languages

    Directory of Open Access Journals (Sweden)

    Pamela Perniss

    2010-12-01

    Full Text Available Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to hook up to motor and perceptual experience.

  15. Sign language in dental education-A new nexus.

    Science.gov (United States)

    Jones, T; Cumberbatch, K

    2017-08-14

    The introduction of the landmark mandatory teaching of sign language to undergraduate dental students at the University of the West Indies (UWI), Mona Campus in Kingston, Jamaica, to bridge the communication gap between dentists and their patients is reviewed. A review of over 90 Doctor of Dental Surgery and Doctor of Dental Medicine curricula in North America, the United Kingdom, parts of Europe and Australia showed no inclusion of sign language in those curricula as a mandatory component. In Jamaica, the government's training school for dental auxiliaries served as the forerunner to the UWI's introduction of formal training of sign language in 2012. Outside of the UWI, a few dental schools offer sign language courses, but none has a mandatory programme like the one at the UWI. Dentists the world over have had to rely on interpreters to sign with their deaf patients. Deaf people in Jamaica have resented the fact that dentists cannot sign; they have felt insulted and go to the dentist only in emergency situations. The mandatory inclusion of sign language in the Undergraduate Dental Programme curriculum at The University of the West Indies, Mona Campus, sought to establish a direct communication channel to formally bridge this gap. The programme of two sign language courses and a direct clinical competency requirement was developed during the second year of the first cohort of the newly introduced undergraduate dental programme through a collaborating partnership between two faculties on the Mona Campus. The programme was introduced in 2012 in the third year of the 5-year undergraduate dental programme. To date, two cohorts have completed the programme, and the preliminary findings from an ongoing clinical study have shown a positive impact on dental care access and dental treatment for deaf patients at the UWI Mona Dental Polyclinic. The development of a direct communication channel between dental students and the deaf that has led to increased dental

  16. Quantifiers in Russian Sign Language

    NARCIS (Netherlands)

    Kimmelman, V.; Paperno, D.; Keenan, E.L.

    2017-01-01

    After presenting some basic genetic, historical and typological information about Russian Sign Language, this chapter outlines the quantification patterns it expresses. It illustrates various semantic types of quantifiers, such as generalized existential, generalized universal, proportional,

  17. Sign language processing and the mirror neuron system.

    Science.gov (United States)

    Corina, David P; Knapp, Heather

    2006-05-01

    In this paper we review evidence for frontal and parietal lobe involvement in sign language comprehension and production, and evaluate the extent to which these data can be interpreted within the context of a mirror neuron system for human action observation and execution. We present data from three literatures--aphasia, cortical stimulation, and functional neuroimaging. Generally, we find support for the idea that sign language comprehension and production can be viewed in the context of a broadly-construed frontal-parietal human action observation/execution system. However, sign language data cannot be fully accounted for under a strict interpretation of the mirror neuron system. Additionally, we raise a number of issues concerning the lack of specificity in current accounts of the human action observation/execution system.

  18. THE BENEFIT OF EARLY EXPOSURE TO SIGN LANGUAGE

    Directory of Open Access Journals (Sweden)

    Ljubica PRIBANIKJ

    2009-11-01

    Full Text Available Early diagnosis and intervention are now recognized as undeniable rights of deaf and hard-of-hearing children and their families. The deaf child’s family must have the opportunity to socialize with deaf children and deaf adults. The deaf child’s family must also have access to all the information on the general development of their child, and to special information on hearing impairment, communication options and linguistic development of the deaf child. The critical period hypothesis for language acquisition proposes that the outcome of language acquisition is not uniform over the lifespan but rather is best during early childhood. Individuals who learned sign language from birth performed better on linguistic and memory tasks than individuals who did not start learning sign language until after puberty. The old prejudice that the deaf child must learn the spoken language at a very young age, and that sign language can wait because it can be easily learned by any person at any age, cannot be maintained anymore. The cultural approach to deafness emphasizes three necessary components in the development of a deaf child: 1. stimulating early communication using natural sign language within the family and interacting with the Deaf community; 2. bilingual/bicultural education; and 3. ensuring deaf persons’ rights to enjoy the services of high quality interpreters throughout their education from kindergarten to university. This new view of the phenomenology of deafness means that the environment needs to be changed in order to meet the deaf person’s needs, not the contrary.

  19. Sign language indexation within the MPEG-7 framework

    Science.gov (United States)

    Zaharia, Titus; Preda, Marius; Preteux, Francoise J.

    1999-06-01

    In this paper, we address the issue of sign language indexation/recognition. The existing tools, like on-line Web dictionaries or other educational applications, make exclusive use of textual annotations. However, keyword indexing schemes have strong limitations due to the ambiguity of natural language and to the huge effort needed to manually annotate a large amount of data. In order to overcome these drawbacks, we tackle the sign language indexation issue within the MPEG-7 framework and propose an approach based on linguistic properties and characteristics of sign language. The method developed introduces the concept of an over-time-stable hand configuration instantiated on natural or synthetic prototypes. The prototypes are indexed by means of a shape descriptor which is defined as a translation-, rotation- and scale-invariant Hough transform. A very compact representation is obtained by considering the Fourier transform of the Hough coefficients. This approach has been applied to two data sets consisting of 'Letters' and 'Words', respectively. The accuracy and robustness of the results are discussed and a complete sign language description schema is proposed.
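    The descriptor pipeline outlined in this abstract (a Hough accumulator of the hand shape, followed by a Fourier transform of its coefficients) can be illustrated in a few lines. The following is a minimal sketch, not the authors' implementation; the accumulator resolution, point-set input and function names are assumptions made for illustration:

```python
import numpy as np

def hough_accumulator(points, n_theta=64, n_rho=32, rho_max=20.0):
    """Vote (x, y) edge points into a (theta, rho) Hough accumulator."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho))
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        bins = ((rhos + rho_max) / (2 * rho_max) * n_rho).astype(int)
        acc[np.arange(n_theta), np.clip(bins, 0, n_rho - 1)] += 1
    return acc

def shape_descriptor(points, **kw):
    """Fourier magnitudes of the Hough accumulator: taking the magnitude
    along the theta axis removes the phase introduced by cyclic theta
    shifts, i.e. by rotations of the input shape."""
    return np.abs(np.fft.fft(hough_accumulator(points, **kw), axis=0))
```

    Discarding the Fourier phase along the theta axis is what makes the compact representation insensitive to rotations; translation and scale invariance in the paper come from the parameterization and normalization of the transform itself.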

  20. South African Sign Language and language-in-education policy in ...

    African Journals Online (AJOL)

    KATEVG

    As this passage suggests, there is extensive and growing literature, both in .... For instance, sign language mediates experience in a unique way, as of ..... entail Deaf students studying together, in a setting not unlike that provided by residential .... of ASL as a foreign language option in secondary schools and universities.

  1. Sign Language Web Pages

    Science.gov (United States)

    Fels, Deborah I.; Richards, Jan; Hardman, Jim; Lee, Daniel G.

    2006-01-01

    The World Wide Web has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, accessing the Web can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The…

  2. Flemish Sign Language Standardisation

    Science.gov (United States)

    Van Herreweghe, Mieke; Vermeerbergen, Myriam

    2009-01-01

    In 1997, the Flemish Deaf community officially rejected standardisation of Flemish Sign Language. It was a bold choice, which at the time was not in line with some of the decisions taken in the neighbouring countries. In this article, we shall discuss the choices the Flemish Deaf community has made in this respect and explore why the Flemish Deaf…

  3. Question-Answer Pairs in Sign Language of the Netherlands

    NARCIS (Netherlands)

    Kimmelman, V.; Vink, L.

    2017-01-01

    Several sign languages of the world utilize a construction that consists of a question followed by an answer, both of which are produced by the same signer. For American Sign Language, this construction has been analyzed as a discourse-level rhetorical question construction (Hoza et al. 1997), as a

  4. Towards a Transcription System of Sign Language for 3D Virtual Agents

    Science.gov (United States)

    Do Amaral, Wanessa Machado; de Martino, José Mario

    Accessibility is a growing concern in computer science. Since virtual information is mostly presented visually, it may seem that access for deaf people is not an issue. However, for prelingually deaf individuals, those who have been deaf since before acquiring and formally learning a language, written information is often less accessible than if it were presented in signing. Further, for this community, signing is their language of choice, and reading text in a spoken language is akin to using a foreign language. Sign language uses gestures and facial expressions and is widely used by deaf communities. To enable efficient production of signed content in virtual environments, it is necessary to make written records of signs. Transcription systems have been developed to describe sign languages in written form, but these systems have limitations. Since they were not originally designed with computer animation in mind, in general, the recognition and reproduction of signs in these systems is an easy task only for those who know the system deeply. The aim of this work is to develop a transcription system to provide signed content in virtual environments. To animate a virtual avatar, a transcription system requires sufficiently explicit information, such as movement speed, sign concatenation, the sequence of each hold and movement, and facial expressions, so that articulation approximates reality. Although many important studies of sign languages have been published, the transcription problem remains a challenge. Thus, a notation to describe, store and play signed content in virtual environments offers a multidisciplinary study and research tool, which may help linguistic studies to understand the structure and grammar of sign languages.
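    A machine-readable transcription record of the kind the abstract calls for (explicit timing, hold-and-movement sequences, facial expression) might look like the following sketch. The schema and all field names are hypothetical, not taken from the work described:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SignSegment:
    """One hold-or-movement unit of a sign (hypothetical schema)."""
    handshape: str        # code from a handshape inventory
    location: str         # place of articulation
    movement: str         # movement type; "hold" for static segments
    duration_ms: int      # explicit timing needed for avatar playback

@dataclass
class SignRecord:
    """A full sign: an ordered segment sequence plus non-manual marking."""
    gloss: str
    segments: List[SignSegment] = field(default_factory=list)
    facial_expression: str = "neutral"

    def total_duration_ms(self) -> int:
        return sum(s.duration_ms for s in self.segments)
```

    Storing duration per segment, rather than per sign, is what lets an animation engine concatenate signs and interpolate transitions at playback time.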

  5. Event representations constrain the structure of language: Sign language as a window into universally accessible linguistic biases.

    Science.gov (United States)

    Strickland, Brent; Geraci, Carlo; Chemla, Emmanuel; Schlenker, Philippe; Kelepir, Meltem; Pfau, Roland

    2015-05-12

    According to a theoretical tradition dating back to Aristotle, verbs can be classified into two broad categories. Telic verbs (e.g., "decide," "sell," "die") encode a logical endpoint, whereas atelic verbs (e.g., "think," "negotiate," "run") do not, and the denoted event could therefore logically continue indefinitely. Here we show that sign languages encode telicity in a seemingly universal way and moreover that even nonsigners lacking any prior experience with sign language understand these encodings. In experiments 1-5, nonsigning English speakers accurately distinguished between telic (e.g., "decide") and atelic (e.g., "think") signs from (the historically unrelated) Italian Sign Language, Sign Language of the Netherlands, and Turkish Sign Language. These results were not due to participants' inferring that the sign merely imitated the action in question. In experiment 6, we used pseudosigns to show that the presence of a salient visual boundary at the end of a gesture was sufficient to elicit telic interpretations, whereas repeated movement without salient boundaries elicited atelic interpretations. Experiments 7-10 confirmed that these visual cues were used by all of the sign languages studied here. Together, these results suggest that signers and nonsigners share universally accessible notions of telicity as well as universally accessible "mapping biases" between telicity and visual form.

  6. Regional Sign Language Varieties in Contact: Investigating Patterns of Accommodation

    Science.gov (United States)

    Stamp, Rose; Schembri, Adam; Evans, Bronwen G.; Cormier, Kearsy

    2016-01-01

    Short-term linguistic accommodation has been observed in a number of spoken language studies. The first of its kind in sign language research, this study aims to investigate the effects of regional varieties in contact and lexical accommodation in British Sign Language (BSL). Twenty-five participants were recruited from Belfast, Glasgow,…

  7. Standardizing Chinese Sign Language for Use in Post-Secondary Education

    Science.gov (United States)

    Lin, Christina Mien-Chun; Gerner de Garcia, Barbara; Chen-Pichler, Deborah

    2009-01-01

    There are over 100 languages in China, including Chinese Sign Language. Given the large population and geographical dispersion of the country's deaf community, sign variation is to be expected. Language barriers due to lexical variation may exist for deaf college students in China, who often live outside their home regions. In presenting an…

  8. FORMS OF HAND IN SIGN LANGUAGE IN BOSNIA AND HERZEGOVINA

    Directory of Open Access Journals (Sweden)

    Husnija Hasanbegović

    2013-05-01

    Full Text Available A sign in sign language, equivalent to a word, phrase or sentence in an oral language, can be divided into linguistic units of lower levels: shape of the hand, place of articulation, type of movement and orientation of the palm. The first description of these units that is still current and applicable in Bosnia and Herzegovina (B&H) was given by Zimmerman in 1986, who identified 27 hand shapes, while the other unit types were not systematically developed or described. The aim of this study was to determine whether other hand shapes are present in sign language in B&H. Using the method of content analysis on 425 signs of the sign language in B&H, we confirmed the existence of the known shapes and also discovered and present 14 new hand shapes. This confirms the need for detailed research, standardization and publication of the sign language in B&H, which would provide adequate conditions for its study and application, both for the deaf and for all others who come into direct contact with them.

  9. Segmentation of British Sign Language (BSL): mind the gap!

    Science.gov (United States)

    Orfanidou, Eleni; McQueen, James M; Adam, Robert; Morgan, Gary

    2015-01-01

    This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were preceded by nonsense signs which were produced in either the target location or another location (with a small or large transition). Half of the transitions were within the same major body area (e.g., head) and half were across body areas (e.g., chest to hand). Deaf adult BSL users (a group of natives and early learners, and a group of late learners) spotted target signs best when there was a minimal transition and worst when there was a large transition. When location changes were present, both groups performed better when transitions were to a different body area than when they were within the same area. These findings suggest that transitions do not provide explicit sign-boundary cues in a modality-specific fashion. Instead, we argue that smaller transitions help recognition in a modality-general way by limiting lexical search to signs within location neighbourhoods, and that transitions across body areas also aid segmentation in a modality-general way, by providing a phonotactic cue to a sign boundary. We propose that sign segmentation is based on modality-general procedures which are core language-processing mechanisms.

  10. Sign Language Recognition using Neural Networks

    Directory of Open Access Journals (Sweden)

    Sabaheta Djogic

    2014-11-01

    Full Text Available – Sign language plays a great role as a communication medium for people with hearing difficulties. In developed countries, systems have been built to overcome problems in communication with deaf people. This encouraged us to develop such a system for Bosnian sign language, since there is a need for one. The work uses digital image processing methods, providing a system that trains a multilayer neural network using the back-propagation algorithm. Images are processed by feature extraction methods, and the data set has been created by a masking method. Training is done using the cross-validation method for better performance; an accuracy of 84% is achieved.
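    The back-propagation training loop at the core of such a system can be sketched as follows. This is a generic illustration, not the authors' network; the layer sizes, learning rate and toy one-hot features are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, Y, hidden=8, lr=0.5, epochs=2000):
    """Train a minimal one-hidden-layer perceptron with back-propagation."""
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, Y.shape[1]))
    for _ in range(epochs):
        H = sigmoid(X @ W1)                    # forward pass
        P = sigmoid(H @ W2)
        d_out = (P - Y) * P * (1 - P)          # output-layer delta (MSE loss)
        d_hid = (d_out @ W2.T) * H * (1 - H)   # error propagated backwards
        W2 -= lr * (H.T @ d_out)               # gradient descent step
        W1 -= lr * (X.T @ d_hid)
    return W1, W2

def predict(X, W1, W2):
    return sigmoid(sigmoid(X @ W1) @ W2).argmax(axis=1)
```

    In the system described, the inputs would be feature vectors extracted from the masked images rather than the toy vectors used here.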

  11. Linguistic Policies, Linguistic Planning, and Brazilian Sign Language in Brazil

    Science.gov (United States)

    de Quadros, Ronice Muller

    2012-01-01

    This article explains the consolidation of Brazilian Sign Language in Brazil through a linguistic plan that arose from the Brazilian Sign Language Federal Law 10.436 of April 2002 and the subsequent Federal Decree 5695 of December 2005. Two concrete facts that emerged from this existing language plan are discussed: the implementation of bilingual…

  12. Recognition of Indian Sign Language in Live Video

    Science.gov (United States)

    Singha, Joyeeta; Das, Karen

    2013-05-01

    Sign language recognition has emerged as one of the important areas of research in computer vision. The difficulty faced by researchers is that instances of signs vary in both motion and appearance. Thus, in this paper a novel approach for recognizing various alphabets of Indian Sign Language is proposed, where continuous video sequences of the signs have been considered. The proposed system comprises three stages: preprocessing, feature extraction and classification. The preprocessing stage includes skin filtering and histogram matching. Eigenvalues and eigenvectors were used in the feature extraction stage, and finally an eigenvalue-weighted Euclidean distance is used to recognize the sign. The system deals with bare hands, thus allowing the user to interact with it in a natural way. We have considered 24 different alphabets in the video sequences and attained a success rate of 96.25%.
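    The eigenvalue-weighted Euclidean distance mentioned above can be sketched as follows. The PCA projection, weighting scheme and toy data are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def pca_features(X, k=2):
    """Project centred data onto the top-k eigenvectors of its covariance."""
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(vals)[::-1][:k]        # largest eigenvalues first
    return Xc @ vecs[:, order], vals[order]

def weighted_distance(a, b, eigvals):
    """Euclidean distance with each component weighted by its eigenvalue."""
    return float(np.sqrt(np.sum(eigvals * (a - b) ** 2)))

def classify(sample, templates, labels, eigvals):
    """Nearest template under the eigenvalue-weighted distance."""
    dists = [weighted_distance(sample, t, eigvals) for t in templates]
    return labels[int(np.argmin(dists))]
```

    Weighting each component by its eigenvalue makes directions of high variance in the training data dominate the match, which is the intuition behind eigenfeature-based matching.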

  13. Technology to Support Sign Language for Students with Disabilities

    Science.gov (United States)

    Donne, Vicki

    2013-01-01

    This systematic review of the literature provides a synthesis of research on the use of technology to support sign language. Background research on the use of sign language with students who are deaf/hard of hearing and students with low incidence disabilities, such as autism, intellectual disability, or communication disorders is provided. The…

  14. COMPARATIVE ANALYSIS OF THE STRUCTURE OF THE AMERICAN AND MACEDONIAN SIGN LANGUAGE

    Directory of Open Access Journals (Sweden)

    Aleksandra KAROVSKA RISTOVSKA

    2014-09-01

    Full Text Available Aleksandra Karovska Ristovska, M.A. in special education and rehabilitation sciences, defended her doctoral thesis on 9 March 2014 at the Institute of Special Education and Rehabilitation, Faculty of Philosophy, University “Ss. Cyril and Methodius”- Skopje, before a commission composed of: Prof. Zora Jachova, PhD; Prof. Jasmina Kovachevikj, PhD; Prof. Ljudmil Spasov, PhD; Prof. Goran Ajdinski, PhD; Prof. Daniela Dimitrova Radojicikj, PhD. The Macedonian Sign Language is a natural language, used by the community of Deaf in the Republic of Macedonia. This doctoral thesis analysed the characteristics of Macedonian Sign Language: its phonology, morphology and syntax, and compared Macedonian Sign Language with American Sign Language. William Stokoe was the first to research American Sign Language, beginning in the 1960s, and he laid the foundations of linguistic research on sign languages. The analysis of the signs in the Macedonian Sign Language was made according to Stokoe’s parameters: location, hand shape and movement. Lexicostatistics showed that MSL and ASL belong to different language families. Despite this, they share some iconic signs, whose presence can be attributed to lexical borrowing. Phonologically, in both ASL and MSL, a change in one of Stokoe’s categories changes the meaning of the sign. Non-manual signs, which serve as grammatical markers in sign languages, are identical in ASL and MSL. The production of compounds and the production of plural forms are identical in both sign languages. The inflection of verbs is also identical. The research showed that the most common order of words in ASL and MSL is the SVO order (subject-verb-object), while the SOV and OVS orders occur only rarely. Questions and negative sentences are produced identically in ASL and MSL.

  15. The Use of Sign Language Pronouns by Native-Signing Children with Autism

    Science.gov (United States)

    Shield, Aaron; Meier, Richard P.; Tager-Flusberg, Helen

    2015-01-01

    We report the first study on pronoun use by an under-studied research population, children with autism spectrum disorder (ASD) exposed to American Sign Language from birth by their deaf parents. Personal pronouns cause difficulties for hearing children with ASD, who sometimes reverse or avoid them. Unlike speech pronouns, sign pronouns are…

  16. Methodological and Theoretical Issues in the Adaptation of Sign Language Tests: An Example from the Adaptation of a Test to German Sign Language

    Science.gov (United States)

    Haug, Tobias

    2012-01-01

    Despite the current need for reliable and valid test instruments in different countries in order to monitor the sign language acquisition of deaf children, very few tests are commercially available that offer strong evidence for their psychometric properties. This mirrors the current state of affairs for many sign languages, where very little…

  17. The Effect of New Technologies on Sign Language Research

    Science.gov (United States)

    Lucas, Ceil; Mirus, Gene; Palmer, Jeffrey Levi; Roessler, Nicholas James; Frost, Adam

    2013-01-01

    This paper first reviews the fairly established ways of collecting sign language data. It then discusses the new technologies available and their impact on sign language research, both in terms of how data is collected and what new kinds of data are emerging as a result of technology. New data collection methods and new kinds of data are…

  18. Australian Aboriginal Deaf People and Aboriginal Sign Language

    Science.gov (United States)

    Power, Des

    2013-01-01

    Many Australian Aboriginal people use a sign language ("hand talk") that mirrors their local spoken language and is used both in culturally appropriate settings when speech is taboo or counterindicated and for community communication. The characteristics of these languages are described, and early European settlers' reports of deaf…

  19. Identifying Overlapping Language Communities: The Case of Chiriquí and Panamanian Signed Languages

    Science.gov (United States)

    Parks, Elizabeth S.

    2016-01-01

    In this paper, I use a holographic metaphor to explain the identification of overlapping sign language communities in Panama. By visualizing Panama's complex signing communities as emitting community "hotspots" through social drama on multiple stages, I employ ethnographic methods to explore overlapping contours of Panama's sign language…

  20. What You Don't Know Can Hurt You: The Risk of Language Deprivation by Impairing Sign Language Development in Deaf Children.

    Science.gov (United States)

    Hall, Wyatte C

    2017-05-01

    A long-standing belief is that sign language interferes with spoken language development in deaf children, despite a chronic lack of evidence supporting this belief. This deserves discussion as poor life outcomes continue to be seen in the deaf population. This commentary synthesizes research outcomes with signing and non-signing children and highlights fully accessible language as a protective factor for healthy development. Brain changes associated with language deprivation may be misrepresented as sign language interfering with spoken language outcomes of cochlear implants. This may lead to professionals and organizations advocating for preventing sign language exposure before implantation and spreading misinformation. The existence of a time-sensitive language acquisition window means a strong possibility of permanent brain changes when spoken language is not fully accessible to the deaf child and sign language exposure is delayed, as is often standard practice. There is no empirical evidence for the harm of sign language exposure, but there is some evidence for its benefits, and there is growing evidence that lack of language access has negative implications. These include cognitive delays, mental health difficulties, lower quality of life, higher trauma, and limited health literacy. Claims of cochlear implant- and spoken language-only approaches being more effective than sign language-inclusive approaches are not empirically supported. Cochlear implants are an unreliable standalone first-language intervention for deaf children. Priorities of deaf child development should focus on healthy growth of all developmental domains through a fully accessible first-language foundation such as sign language, rather than auditory deprivation and speech skills.

  1. Computerized Sign Language-Based Literacy Training for Deaf and Hard-of-Hearing Children

    Science.gov (United States)

    Holmer, Emil; Heimann, Mikael; Rudner, Mary

    2017-01-01

    Strengthening the connections between sign language and written language may improve reading skills in deaf and hard-of-hearing (DHH) signing children. The main aim of the present study was to investigate whether computerized sign language-based literacy training improves reading skills in DHH signing children who are learning to read. Further,…

  2. Evidence for a perception of prosodic cues in bat communication: contact call classification by Megaderma lyra.

    Science.gov (United States)

    Janssen, Simone; Schmidt, Sabine

    2009-07-01

    The perception of prosodic cues in human speech may be rooted in mechanisms common to mammals. The present study explores to what extent bats use rhythm and frequency, which typically carry prosodic information in human speech, for the classification of communication call series. Using a two-alternative, forced choice procedure, we trained Megaderma lyra to discriminate between synthetic contact call series differing in frequency, rhythm at the level of calls and rhythm at the level of call series, and measured the classification performance for stimuli differing in only one, or two, of the above parameters. A comparison with predictions from models based on one, combinations of two, or all, parameters revealed that the bats based their decision predominantly on frequency and in addition on rhythm at the level of call series, whereas rhythm at the level of calls was not taken into account in this paradigm. Moreover, frequency and rhythm at the level of call series were evaluated independently. Our results show that parameters corresponding to prosodic cues in human languages are perceived and evaluated by bats. Thus, these necessary prerequisites for communication via prosodic structures in mammals evolved long before human speech.

  3. Linearization of weak hand holds in Russian Sign Language

    NARCIS (Netherlands)

    Kimmelman, V.

    2017-01-01

    Russian Sign Language (RSL) makes use of constructions involving manual simultaneity, in particular, weak hand holds, where one hand is being held in the location and configuration of a sign, while the other simultaneously produces one sign or a sequence of several signs. In this paper, I argue that

  4. Poetry in South African Sign Language: What is different? | Baker ...

    African Journals Online (AJOL)

    Log in or Register to get access to full text downloads. ... Poetry in a sign language can make use of literary devices just as poetry in a ... This poem illustrates well the multi-layered meaning that can be created in sign language poetry through ...

  5. Sign Language Recognition with the Kinect Sensor Based on Conditional Random Fields

    Directory of Open Access Journals (Sweden)

    Hee-Deok Yang

    2014-12-01

    Full Text Available Sign language is a visual language used by deaf people. One difficulty of sign language recognition is that sign instances vary in both motion and shape in three-dimensional (3D) space. In this research, we use 3D depth information from hand motions, generated by Microsoft’s Kinect sensor, and apply a hierarchical conditional random field (CRF) that recognizes hand signs from the hand motions. The proposed method uses a hierarchical CRF to detect candidate segments of signs using hand motions, and then a BoostMap embedding method to verify the hand shapes of the segmented signs. Experiments demonstrated that the proposed method could recognize signs from signed sentence data at a rate of 90.4%.
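    The decoding step of a linear-chain CRF (finding the best label sequence over frames) can be illustrated with a minimal Viterbi sketch. This is a generic chain-CRF decoder, not the paper's hierarchical CRF or its BoostMap verification stage; the score matrices below are toy assumptions:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Best label sequence under a linear-chain CRF score: per-frame
    emission scores plus scores for adjacent label transitions."""
    T, L = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, L), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transitions      # (previous label, current label)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + emissions[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):                # follow back-pointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

    In a segment-then-verify pipeline like the one described, a decoder of this kind would propose candidate sign segments, which a separate hand-shape classifier then confirms or rejects.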

  6. Language and Literacy Acquisition through Parental Mediation in American Sign Language

    Science.gov (United States)

    Bailes, Cynthia Neese; Erting, Lynne C.; Thumann-Prezioso, Carlene; Erting, Carol J.

    2009-01-01

    This longitudinal case study examined the language and literacy acquisition of a Deaf child as mediated by her signing Deaf parents during her first three years of life. Results indicate that the parents' interactions with their child were guided by linguistic and cultural knowledge that produced an intuitive use of child-directed signing (CDSi)…

  7. Phonological reduplication in sign language: rules rule

    Directory of Open Access Journals (Sweden)

    Iris eBerent

    2014-06-01

    Full Text Available Productivity—the hallmark of linguistic competence—is typically attributed to algebraic rules that support broad generalizations. Past research on spoken language has documented such generalizations in both adults and infants. But whether algebraic rules form part of the linguistic competence of signers remains unknown. To address this question, here we gauge the generalization afforded by American Sign Language (ASL). As a case study, we examine reduplication (X→XX)—a rule that, inter alia, generates ASL nouns from verbs. If signers encode this rule, then they should freely extend it to novel syllables, including ones with features that are unattested in ASL. And since reduplicated disyllables are preferred in ASL, such a rule should favor novel reduplicated signs. Novel reduplicated signs should thus be preferred to nonreduplicative controls (in rating), and consequently, such stimuli should also be harder to classify as nonsigns (in the lexical decision task). The results of four experiments support this prediction. These findings suggest that the phonological knowledge of signers includes powerful algebraic rules. The convergence between these conclusions and previous evidence for phonological rules in spoken language suggests that the architecture of the phonological mind is partly amodal.
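    The reduplication rule X→XX is simple enough to state as code. In this sketch, syllables are represented as strings in a list, an assumption made purely for illustration:

```python
def reduplicate(syllables):
    """Apply the noun-forming rule X -> XX to a syllable sequence."""
    return syllables + syllables

def is_reduplicated(form):
    """True if the form is some non-empty sequence immediately doubled (XX)."""
    half, rem = divmod(len(form), 2)
    return rem == 0 and half > 0 and form[:half] == form[half:]
```

    The algebraic character of the rule lies in the variable X: it copies whatever syllable sequence it is given, which is why signers can extend it even to syllables with unattested features.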

  8. Phonological memory in sign language relies on the visuomotor neural system outside the left hemisphere language network.

    Science.gov (United States)

    Kanazawa, Yuji; Nakamura, Kimihiro; Ishii, Toru; Aso, Toshihiko; Yamazaki, Hiroshi; Omori, Koichi

    2017-01-01

    Sign language is an essential medium for everyday social interaction for deaf people and plays a critical role in verbal learning. In particular, language development in those people should heavily rely on the verbal short-term memory (STM) via sign language. Most previous studies compared neural activations during signed language processing in deaf signers and those during spoken language processing in hearing speakers. For sign language users, it thus remains unclear how visuospatial inputs are converted into the verbal STM operating in the left-hemisphere language network. Using functional magnetic resonance imaging, the present study investigated neural activation while bilinguals of spoken and signed language were engaged in a sequence memory span task. On each trial, participants viewed a nonsense syllable sequence presented either as written letters or as fingerspelling (4-7 syllables in length) and then held the syllable sequence for 12 s. Behavioral analysis revealed that participants relied on phonological memory while holding verbal information regardless of the type of input modality. At the neural level, this maintenance stage broadly activated the left-hemisphere language network, including the inferior frontal gyrus, supplementary motor area, superior temporal gyrus and inferior parietal lobule, for both letter and fingerspelling conditions. Interestingly, while most participants reported that they relied on phonological memory during maintenance, direct comparisons between letters and fingerspelling revealed strikingly different patterns of neural activation during the same period. Namely, the effortful maintenance of fingerspelling inputs relative to letter inputs activated the left superior parietal lobule and dorsal premotor area, i.e., brain regions known to play a role in visuomotor analysis of hand/arm movements. These findings suggest that the dorsal visuomotor neural system subserves verbal learning via sign language by relaying gestural inputs to…

  9. Lexical prediction via forward models: N400 evidence from German Sign Language.

    Science.gov (United States)

    Hosemann, Jana; Herrmann, Annika; Steinbach, Markus; Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias

    2013-09-01

    Models of language processing in the human brain often emphasize the prediction of upcoming input, for example in order to explain the rapidity of language understanding. However, the precise mechanisms of prediction are still poorly understood. Forward models, which draw upon the language production system to set up expectations during comprehension, provide a promising approach in this regard. Here, we present an event-related potential (ERP) study on German Sign Language (DGS) which tested the hypotheses of a forward-model perspective on prediction. Sign languages involve relatively long transition phases between one sign and the next, which should be anticipated as part of forward-model-based prediction even though they are semantically empty. Native signers of DGS watched videos of naturally signed DGS sentences which ended with either an expected or a (semantically) unexpected sign. Unexpected signs engendered a biphasic N400-late positivity pattern. Crucially, N400 onset preceded critical sign onset and was thus clearly elicited by properties of the transition phase. The comprehension system thereby anticipated modality-specific information about the realization of the predicted semantic item. These results provide strong converging support for the application of forward models in language comprehension. © 2013 Elsevier Ltd. All rights reserved.

  10. Italian Sign Language (LIS) Poetry: Iconic Properties and Structural Regularities.

    Science.gov (United States)

    Russo, Tommaso; Giuranna, Rosaria; Pizzuto, Elena

    2001-01-01

    Explores and describes, from a crosslinguistic perspective, some of the major structural regularities that characterize poetry in Italian Sign Language (LIS) and distinguish poetic from nonpoetic texts. Reviews findings of previous studies of signed language poetry, and points out issues that need to be clarified to provide a more accurate description…

  11. Lexical Properties of Slovene Sign Language: A Corpus-Based Study

    Science.gov (United States)

    Vintar, Špela

    2015-01-01

    Slovene Sign Language (SZJ) has as yet received little attention from linguists. This article presents some basic facts about SZJ, its history, current status, and a description of the Slovene Sign Language Corpus and Pilot Grammar (SIGNOR) project, which compiled and annotated a representative corpus of SZJ. Finally, selected quantitative data…

  12. Impacts of Visual Sonority and Handshape Markedness on Second Language Learning of American Sign Language

    Science.gov (United States)

    Williams, Joshua T.; Newman, Sharlene D.

    2016-01-01

    The roles of visual sonority and handshape markedness in sign language acquisition and production were investigated. In Experiment 1, learners were taught sign-nonobject correspondences that varied in sign movement sonority and handshape markedness. Results from a sign-picture matching task revealed that high sonority signs were more accurately…

  13. Cross-Linguistic Differences in the Neural Representation of Human Language: Evidence from Users of Signed Languages

    Science.gov (United States)

    Corina, David P.; Lawyer, Laurel A.; Cates, Deborah

    2013-01-01

    Studies of deaf individuals who are users of signed languages have provided profound insight into the neural representation of human language. Case studies of deaf signers who have incurred left- and right-hemisphere damage have shown that left-hemisphere resources are a necessary component of sign language processing. These data suggest that, despite frank differences in the input and output modality of language, core left perisylvian regions universally serve linguistic function. Neuroimaging studies of deaf signers have generally provided support for this claim. However, more fine-tuned studies of linguistic processing in deaf signers are beginning to show evidence of important differences in the representation of signed and spoken languages. In this paper, we provide a critical review of this literature and present compelling evidence for language-specific cortical representations in deaf signers. These data lend support to the claim that the neural representation of language may show substantive cross-linguistic differences. We discuss the theoretical implications of these findings with respect to an emerging understanding of the neurobiology of language. PMID:23293624

  14. The Road to Language Learning Is Not Entirely Iconic: Iconicity, Neighborhood Density, and Frequency Facilitate Acquisition of Sign Language.

    Science.gov (United States)

    Caselli, Naomi K; Pyers, Jennie E

    2017-07-01

    Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children's acquisition of new words, spoken or signed. We asked whether iconicity's prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children's productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure between signed and spoken language, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.

  15. Everyday activities and social contacts among older deaf sign language users

    DEFF Research Database (Denmark)

    Werngren-Elgström, Monica; Brandt, Ase; Iwarsson, Susanne

    2006-01-01

    The purpose of this study was to describe the everyday activities and social contacts among older deaf sign language users, and to investigate relationships between these phenomena and the health and well-being within this group. The study population comprised deaf sign language users, 65 years… or older, in Sweden. Data collection was based on interviews in sign language, including open-ended questions covering everyday activities and social contacts as well as self-rated instruments measuring aspects of health and subjective well-being. The results demonstrated that the group of participants… aspects of health and subjective well-being and the frequency of social contacts with family/relatives or visiting the deaf club and meeting friends. It is concluded that the variety of activities at the deaf clubs is important for the subjective well-being of older deaf sign language users. Further…

  16. Towards a Sign Language Synthesizer: a Bridge to Communication Gap of the Hearing/Speech Impaired Community

    Science.gov (United States)

    Maarif, H. A.; Akmeliawati, R.; Gunawan, T. S.; Shafie, A. A.

    2013-12-01

    A sign language synthesizer is a method to visualize sign language movement from spoken language. Sign language (SL) is one of the means used by hearing/speech impaired (HSI) people to communicate with hearing people. Unfortunately, the number of people, including HSI people, who are familiar with sign language is very limited, which causes difficulties in communication between hearing people and HSI people. Sign language comprises not only hand movement but also facial expression; these two elements complement each other. The hand movement conveys the meaning of each sign, and the facial expression conveys the signer's emotion. Generally, a sign language synthesizer recognizes the spoken language using speech recognition, performs grammatical processing with a context-free grammar, and renders the result as a 3D animation using a recorded avatar. This paper analyzes and compares existing techniques for developing a sign language synthesizer, leading to the IIUM Sign Language Synthesizer.

  17. Sign Language Echolalia in Deaf Children with Autism Spectrum Disorder

    Science.gov (United States)

    Shield, Aaron; Cooley, Frances; Meier, Richard P.

    2017-01-01

    Purpose: We present the first study of echolalia in deaf, signing children with autism spectrum disorder (ASD). We investigate the nature and prevalence of sign echolalia in native-signing children with ASD, the relationship between sign echolalia and receptive language, and potential modality differences between sign and speech. Method: Seventeen…

  18. Facilitating Exposure to Sign Languages of the World: The Case for Mobile Assisted Language Learning

    Science.gov (United States)

    Parton, Becky Sue

    2014-01-01

    Foreign sign language instruction is an important, but overlooked area of study. Thus the purpose of this paper was two-fold. First, the researcher sought to determine the level of knowledge and interest in foreign sign language among Deaf teenagers along with their learning preferences. Results from a survey indicated that over a third of the…

  19. Information Transfer Capacity of Articulators in American Sign Language.

    Science.gov (United States)

    Malaia, Evie; Borneman, Joshua D; Wilbur, Ronnie B

    2018-03-01

    The ability to convey information is a fundamental property of communicative signals. For sign languages, which are overtly produced with multiple, completely visible articulators, the question arises as to how the various channels co-ordinate and interact with each other. We analyze motion capture data of American Sign Language (ASL) narratives, and show that the capacity of information throughput, mathematically defined, is highest on the dominant hand (DH). We further demonstrate that information transfer capacity is also significant for the non-dominant hand (NDH), and the head channel too, as compared to control channels (ankles). We discuss both redundancy and independence in articulator motion in sign language, and argue that the NDH and the head articulators contribute to the overall information transfer capacity, indicating that they are neither completely redundant to, nor completely independent of, the DH.
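
    The "mathematically defined" throughput above rests on information-theoretic measures of articulator motion. As a simplified, hypothetical illustration (not the authors' actual measure), the Shannon entropy of a quantized motion signal can be compared across channels; a channel with more varied movement states carries more information:

```python
import math
from collections import Counter

def entropy_bits(symbols):
    """Shannon entropy (bits/symbol) of a discrete sequence."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy example: quantized velocity states for two articulator channels.
# The dominant hand moves through varied states; an ankle "control"
# channel is nearly static, so its entropy is much lower.
dominant_hand = [0, 2, 1, 3, 0, 2, 3, 1, 2, 0]
ankle_control = [0, 0, 0, 1, 0, 0, 0, 0, 1, 0]

print(entropy_bits(dominant_hand) > entropy_bits(ankle_control))  # True
```

    A full analysis would estimate entropy rates from continuous motion-capture trajectories; this sketch only shows the direction of the comparison.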

  20. Evaluating Effects of Language Recognition on Language Rights and the Vitality of New Zealand Sign Language

    Science.gov (United States)

    McKee, Rachel Locker; Manning, Victoria

    2015-01-01

    Status planning through legislation made New Zealand Sign Language (NZSL) an official language in 2006. But this strong symbolic action did not create resources or mechanisms to further the aims of the act. In this article we discuss the extent to which legal recognition and ensuing language-planning activities by state and community have affected…

  1. A Sign Language Screen Reader for Deaf

    Science.gov (United States)

    El Ghoul, Oussama; Jemni, Mohamed

    Screen reader technology first appeared to allow blind people and people with reading difficulties to use computers and to access digital information. Until now, this technology has been exploited mainly to help the blind community. During our work with deaf people, we noticed that a screen reader can facilitate the manipulation of computers and the reading of textual information. In this paper, we propose a novel screen reader dedicated to deaf users. The output of the reader is a visual translation of the text into sign language. The screen reader is composed of two essential modules: the first is designed to capture the activities of users (mouse and keyboard events); for this purpose, we adopted the Microsoft MSAA application programming interfaces. The second module, which in classical screen readers is a text-to-speech (TTS) engine, is replaced by a novel text-to-sign (TTSign) engine. This module converts text into sign language animation based on avatar technology.
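
    As a rough sketch of the TTSign idea described above (hypothetical clip names and vocabulary; not the authors' implementation), the text-to-sign step can be modeled as a lexicon lookup that maps words to pre-recorded avatar animation clips, falling back to letter-by-letter fingerspelling for out-of-vocabulary words:

```python
# Hypothetical sign lexicon: word -> avatar animation clip ID.
SIGN_CLIPS = {"hello": "clip_hello", "file": "clip_file", "open": "clip_open"}

def text_to_sign(text):
    """Convert captured screen text into a sequence of animation clip IDs."""
    clips = []
    for word in text.lower().split():
        if word in SIGN_CLIPS:
            clips.append(SIGN_CLIPS[word])
        else:  # fingerspell unknown words one letter at a time
            clips.extend(f"fs_{ch}" for ch in word if ch.isalpha())
    return clips

print(text_to_sign("Open file X"))  # ['clip_open', 'clip_file', 'fs_x']
```

    A real engine would also handle sign language grammar (word order, classifiers, non-manual markers) rather than signing word for word.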

  2. Proactive Interference & Language Change in Hearing Adult Students of American Sign Language.

    Science.gov (United States)

    Hoemann, Harry W.; Kreske, Catherine M.

    1995-01-01

    Describes a study that found, contrary to previous reports, that a strong, symmetrical release from proactive interference (PI) is the normal outcome for switches between American Sign Language (ASL) signs and English words and with switches between Manual and English alphabet characters. Subjects were college students enrolled in their first ASL…

  3. Effects of Iconicity and Semantic Relatedness on Lexical Access in American Sign Language

    Science.gov (United States)

    Bosworth, Rain G.; Emmorey, Karen

    2010-01-01

    Iconicity is a property that pervades the lexicon of many sign languages, including American Sign Language (ASL). Iconic signs exhibit a motivated, nonarbitrary mapping between the form of the sign and its meaning. We investigated whether iconicity enhances semantic priming effects for ASL and whether iconic signs are recognized more quickly than…

  4. The influence of the visual modality on language structure and conventionalization: insights from sign language and gesture.

    Science.gov (United States)

    Perniss, Pamela; Özyürek, Asli; Morgan, Gary

    2015-01-01

    For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems. Copyright © 2015 Cognitive Science Society, Inc.

  5. Input Processing at First Exposure to a Sign Language

    Science.gov (United States)

    Ortega, Gerardo; Morgan, Gary

    2015-01-01

    There is growing interest in learners' cognitive capacities to process a second language (L2) at first exposure to the target language. Evidence suggests that L2 learners are capable of processing novel words by exploiting phonological information from their first language (L1). Hearing adult learners of a sign language, however, cannot fall back…

  6. The morphosyntax of verbs of motion in serial constructions: a crosslinguistic study in three signed languages

    NARCIS (Netherlands)

    Benedicto, E.; Cvejanov, S.; Quer, J.; Quer, J.F.

    2008-01-01

    This paper provides a comparative analysis of the structural properties of serial verb constructions (SVC) in three sign languages: LSA (Lengua de Señas Argentina, Argentinean Sign Language), LSC (Llengua de Signes Catalana, Catalan Sign Language) and ASL (American Sign Language). The paper presents

  7. Directionality effects in simultaneous language interpreting: the case of sign language interpreters in The Netherlands.

    Science.gov (United States)

    Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan

    2011-01-01

    The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.

  8. Sign language recognition and translation: a multidisciplined approach from the field of artificial intelligence.

    Science.gov (United States)

    Parton, Becky Sue

    2006-01-01

    In recent years, research has progressed steadily in regard to the use of computers to recognize and render sign language. This paper reviews significant projects in the field beginning with finger-spelling hands such as "Ralph" (robotics), CyberGloves (virtual reality sensors to capture isolated and continuous signs), camera-based projects such as the CopyCat interactive American Sign Language game (computer vision), and sign recognition software (Hidden Markov Modeling and neural network systems). Avatars such as "Tessa" (Text and Sign Support Assistant; three-dimensional imaging) and spoken language to sign language translation systems such as Poland's project entitled "THETOS" (Text into Sign Language Automatic Translator, which operates in Polish; natural language processing) are addressed. The application of this research to education is also explored. The "ICICLE" (Interactive Computer Identification and Correction of Language Errors) project, for example, uses intelligent computer-aided instruction to build a tutorial system for deaf or hard-of-hearing children that analyzes their English writing and makes tailored lessons and recommendations. Finally, the article considers synthesized sign, which is being added to educational material and has the potential to be developed by students themselves.

  9. Tools for language: patterned iconicity in sign language nouns and verbs.

    Science.gov (United States)

    Padden, Carol; Hwang, So-One; Lepic, Ryan; Seegers, Sharon

    2015-01-01

    When naming certain hand-held, man-made tools, American Sign Language (ASL) signers exhibit either of two iconic strategies: a handling strategy, where the hands show holding or grasping an imagined object in action, or an instrument strategy, where the hands represent the shape or a dimension of the object in a typical action. The same strategies are also observed in the gestures of hearing nonsigners identifying pictures of the same set of tools. In this paper, we compare spontaneously created gestures from hearing nonsigning participants to commonly used lexical signs in ASL. Signers and gesturers were asked to respond to pictures of tools and to video vignettes of actions involving the same tools. Nonsigning gesturers overwhelmingly prefer the handling strategy for both the Picture and Video conditions. Nevertheless, they use more instrument forms when identifying tools in pictures, and more handling forms when identifying actions with tools. We found that ASL signers generally favor the instrument strategy when naming tools, but when describing tools being used by an actor, they are significantly more likely to use more handling forms. The finding that both gesturers and signers are more likely to alternate strategies when the stimuli are pictures or video suggests a common cognitive basis for differentiating objects from actions. Furthermore, the presence of a systematic handling/instrument iconic pattern in a sign language demonstrates that a conventionalized sign language exploits the distinction for grammatical purpose, to distinguish nouns and verbs related to tool use. Copyright © 2014 Cognitive Science Society, Inc.

  10. Independent transmission of sign language interpreter in DVB: assessment of image compression

    Science.gov (United States)

    Zatloukal, Petr; Bernas, Martin; Dvořák, Lukáš

    2015-02-01

    Sign language on television provides information to deaf that they cannot get from the audio content. If we consider the transmission of the sign language interpreter over an independent data stream, the aim is to ensure sufficient intelligibility and subjective image quality of the interpreter with minimum bit rate. The work deals with the ROI-based video compression of Czech sign language interpreter implemented to the x264 open source library. The results of this approach are verified in subjective tests with the deaf. They examine the intelligibility of sign language expressions containing minimal pairs for different levels of compression and various resolution of image with interpreter and evaluate the subjective quality of the final image for a good viewing experience.
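
    The ROI-based approach described above can be illustrated with a minimal sketch (assumed QP values, not the authors' x264 modification): macroblocks covering the interpreter region receive a lower quantization parameter (higher quality) than the background, which lowers the overall bit rate while preserving intelligibility where it matters:

```python
def qp_map(width_mb, height_mb, roi, base_qp=32, roi_qp_offset=-8):
    """Return a per-macroblock QP grid; roi = (x0, y0, x1, y1) in MB units.
    Lower QP means finer quantization, i.e., higher visual quality."""
    x0, y0, x1, y1 = roi
    return [[base_qp + (roi_qp_offset if x0 <= x < x1 and y0 <= y < y1 else 0)
             for x in range(width_mb)]
            for y in range(height_mb)]

# 4x3 macroblock frame; the interpreter occupies the left half.
grid = qp_map(4, 3, roi=(0, 0, 2, 3))
print(grid[0])  # [24, 24, 32, 32]
```

    In practice such a map would be fed to the encoder's adaptive-quantization stage; the exact base QP and offset are tuning parameters, typically chosen in subjective tests like those reported above.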

  11. Static sign language recognition using 1D descriptors and neural networks

    Science.gov (United States)

    Solís, José F.; Toxqui, Carina; Padilla, Alfonso; Santiago, César

    2012-10-01

    A framework for static sign language recognition using descriptors that represent 2D images as 1D data, together with artificial neural networks, is presented in this work. The 1D descriptors were computed by two methods: the first consists of a correlation rotational operator, and the second is based on contour analysis of the hand shape. One of the main problems in sign language recognition is segmentation; most papers report using specially colored gloves or backgrounds for hand shape analysis. In order to avoid the use of gloves or special clothing, a thermal imaging camera was used to capture images. Static signs for the digits 1 to 9 of American Sign Language were used; a multilayer perceptron reached 100% recognition with cross-validation.
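
    One common way to turn a 2D hand contour into 1D data, in the spirit of (though not necessarily identical to) the contour-analysis method mentioned above, is the centroid-distance signature:

```python
import math

def centroid_distance_signature(contour):
    """Map a list of (x, y) contour points to their distances from the
    contour centroid, normalized by the maximum distance so the
    descriptor is scale-invariant."""
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    dists = [math.hypot(x - cx, y - cy) for x, y in contour]
    m = max(dists)
    return [d / m for d in dists]

# The four corners of a square are equidistant from its centroid,
# so every value in the signature is 1.0.
sig = centroid_distance_signature([(0, 0), (2, 0), (2, 2), (0, 2)])
print(sig)  # [1.0, 1.0, 1.0, 1.0]
```

    The resulting fixed-length 1D vector (after resampling the contour to a fixed number of points) can then be fed directly to a multilayer perceptron for classification.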

  12. Languages Are More than Words: Spanish and American Sign Language in Early Childhood Settings

    Science.gov (United States)

    Sherman, Judy; Torres-Crespo, Marisel N.

    2015-01-01

    Capitalizing on preschoolers' inherent enthusiasm and capacity for learning, the authors developed and implemented a dual-language program to enable young children to experience diversity and multiculturalism by learning two new languages: Spanish and American Sign Language. Details of the curriculum, findings, and strategies are shared.

  13. The effects of sign language on spoken language acquisition in children with hearing loss: a systematic review protocol.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David

    2013-12-06

    Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity

  14. Comic Books: A Learning Tool for Meaningful Acquisition of Written Sign Language

    Science.gov (United States)

    Guimarães, Cayley; Oliveira Machado, Milton César; Fernandes, Sueli F.

    2018-01-01

    Deaf people use Sign Language (SL) for intellectual development, communications and other human activities that are mediated by language--such as the expression of complex and abstract thoughts and feelings; and for literature, culture and knowledge. The Brazilian Sign Language (Libras) is a complete linguistic system of visual-spatial manner,…

  15. Child Modifiability as a Predictor of Language Abilities in Deaf Children Who Use American Sign Language.

    Science.gov (United States)

    Mann, Wolfgang; Peña, Elizabeth D; Morgan, Gary

    2015-08-01

    This research explored the use of dynamic assessment (DA) for language-learning abilities in signing deaf children from deaf and hearing families. Thirty-seven deaf children, aged 6 to 11 years, were identified as either stronger (n = 26) or weaker (n = 11) language learners according to teacher or speech-language pathologist report. All children received 2 scripted, mediated learning experience sessions targeting vocabulary knowledge—specifically, the use of semantic categories that were carried out in American Sign Language. Participant responses to learning were measured in terms of an index of child modifiability. This index was determined separately at the end of the 2 individual sessions. It combined ratings reflecting each child's learning abilities and responses to mediation, including social-emotional behavior, cognitive arousal, and cognitive elaboration. Group results showed that modifiability ratings were significantly better for stronger language learners than for weaker language learners. The strongest predictors of language ability were cognitive arousal and cognitive elaboration. Mediator ratings of child modifiability (i.e., combined score of social-emotional factors and cognitive factors) are highly sensitive to language-learning abilities in deaf children who use sign language as their primary mode of communication. This method can be used to design targeted interventions.

  16. South African sign language human-computer interface in the context of the national accessibility portal

    CSIR Research Space (South Africa)

    Olivrin, GJ

    2006-02-01

    …for example, between a deaf person who can sign and an able person or a person with a different disability who cannot sign). METHODOLOGY: A signing avatar is set up to work together with a chatterbot. The chatterbot is a natural language dialogue interface… are then offered in sign language as the replies are interpreted by a signing avatar, a living character that can reproduce human-like gestures and expressions. To make South African Sign Language (SASL) available digitally, computational models of the language…

  17. The semantics of prosody: acoustic and perceptual evidence of prosodic correlates to word meaning.

    Science.gov (United States)

    Nygaard, Lynne C; Herold, Debora S; Namy, Laura L

    2009-01-01

    This investigation examined whether speakers produce reliable prosodic correlates to meaning across semantic domains and whether listeners use these cues to derive word meaning from novel words. Speakers were asked to produce phrases in infant-directed speech in which novel words were used to convey one of two meanings from a set of antonym pairs (e.g., big/small). Acoustic analyses revealed that some acoustic features were correlated with overall valence of the meaning. However, each word meaning also displayed a unique acoustic signature, and semantically related meanings elicited similar acoustic profiles. In two perceptual tests, listeners either attempted to identify the novel words with a matching meaning dimension (picture pair) or with mismatched meaning dimensions. Listeners inferred the meaning of the novel words significantly more often when prosody matched the word meaning choices than when prosody mismatched. These findings suggest that speech contains reliable prosodic markers to word meaning and that listeners use these prosodic cues to differentiate meanings. That prosody is semantic suggests a reconceptualization of traditional distinctions between linguistic and nonlinguistic properties of spoken language. Copyright © 2009 Cognitive Science Society, Inc.

  18. Selected Lexical Patterns in Saudi Arabian Sign Language

    Science.gov (United States)

    Young, Lesa; Palmer, Jeffrey Levi; Reynolds, Wanette

    2012-01-01

This combined paper will focus on the description of two selected lexical patterns in Saudi Arabian Sign Language (SASL): metaphor and metonymy in emotion-related signs (Young) and lexicalization patterns of objects and their derivational roots (Palmer and Reynolds). The overarching methodology used by both studies is detailed in Stephen and…

  19. A Comparison of Comprehension Processes in Sign Language Interpreter Videos with or without Captions.

    Science.gov (United States)

    Debevc, Matjaž; Milošević, Danijela; Kožuh, Ines

    2015-01-01

    One important theme in captioning is whether the implementation of captions in individual sign language interpreter videos can positively affect viewers' comprehension when compared with sign language interpreter videos without captions. In our study, an experiment was conducted using four video clips with information about everyday events. Fifty-one deaf and hard of hearing sign language users alternately watched the sign language interpreter videos with, and without, captions. Afterwards, they answered ten questions. The results showed that the presence of captions positively affected their rates of comprehension, which increased by 24% among deaf viewers and 42% among hard of hearing viewers. The most obvious differences in comprehension between watching sign language interpreter videos with and without captions were found for the subjects of hiking and culture, where comprehension was higher when captions were used. The results led to suggestions for the consistent use of captions in sign language interpreter videos in various media.

  20. Comprehending Sentences with the Body: Action Compatibility in British Sign Language?

    Science.gov (United States)

    Vinson, David; Perniss, Pamela; Fox, Neil; Vigliocco, Gabriella

    2017-01-01

    Previous studies show that reading sentences about actions leads to specific motor activity associated with actually performing those actions. We investigate how sign language input may modulate motor activation, using British Sign Language (BSL) sentences, some of which explicitly encode direction of motion, versus written English, where motion…

  1. Ideologies and Attitudes toward Sign Languages: An Approximation

    Science.gov (United States)

    Krausneker, Verena

    2015-01-01

    Attitudes are complex and little research in the field of linguistics has focused on language attitudes. This article deals with attitudes toward sign languages and those who use them--attitudes that are influenced by ideological constructions. The article reviews five categories of such constructions and discusses examples in each one.

  2. Graph theoretical analysis of functional network for comprehension of sign language.

    Science.gov (United States)

    Liu, Lanfang; Yan, Xin; Liu, Jin; Xia, Mingrui; Lu, Chunming; Emmorey, Karen; Chu, Mingyuan; Ding, Guosheng

    2017-09-15

Signed languages are natural human languages using the visual-motor modality. Previous neuroimaging studies based on univariate activation analysis show that a widely overlapping cortical network is recruited regardless of whether the sign language is comprehended (for signers) or not (for non-signers). Here we move beyond previous studies by examining whether the functional connectivity profiles and the underlying organizational structure of the overlapping neural network differ between signers and non-signers when watching sign language. Using graph theoretical analysis (GTA) and fMRI, we compared the large-scale functional network organization of hearing signers with that of non-signers during the observation of sentences in Chinese Sign Language. We found that signed sentences elicited highly similar cortical activations in the two groups of participants, with slightly larger responses within the left frontal and left temporal gyrus in signers than in non-signers. Crucially, further GTA revealed substantial group differences in the topologies of this activation network. Globally, the network engaged by signers showed higher local efficiency (t(24) = 2.379, p = 0.026), small-worldness (t(24) = 2.604, p = 0.016), and modularity (t(24) = 3.513, p = 0.002), and exhibited different modular structures, compared to the network engaged by non-signers. Locally, the left ventral pars opercularis served as a network hub in the signer group but not in the non-signer group. These findings suggest that, despite overlap in cortical activation, the neural substrates underlying sign language comprehension are distinguishable at the network level from those for the processing of gestural action. Copyright © 2017 Elsevier B.V. All rights reserved.
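The abstract above compares groups on graph-theoretic measures such as local efficiency. As background only (this is not the authors' analysis pipeline, and the function names are ours), the sketch below computes global and local efficiency for an unweighted, undirected graph given as adjacency sets:

```python
from collections import deque

def bfs_distances(adj, source):
    """Hop distances from source in an unweighted graph (adjacency sets)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def global_efficiency(adj):
    """Mean inverse shortest-path length over all ordered node pairs."""
    nodes = list(adj)
    n = len(nodes)
    if n < 2:
        return 0.0
    total = 0.0
    for u in nodes:
        dist = bfs_distances(adj, u)
        total += sum(1.0 / d for v, d in dist.items() if v != u)
    return total / (n * (n - 1))

def local_efficiency(adj):
    """Mean, over nodes, of the efficiency of each node's neighbourhood subgraph."""
    effs = []
    for u, nbrs in adj.items():
        sub = {v: adj[v] & nbrs for v in nbrs}  # subgraph induced by u's neighbours
        effs.append(global_efficiency(sub))
    return sum(effs) / len(adj)

# A fully connected 4-node network is maximally efficient.
k4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
print(global_efficiency(k4), local_efficiency(k4))  # both 1.0
```

In practice such metrics are computed on thresholded functional connectivity matrices; dedicated libraries (e.g. NetworkX or the Brain Connectivity Toolbox) provide tested implementations, including small-worldness and modularity.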

  3. Visual Sonority Modulates Infants' Attraction to Sign Language

    Science.gov (United States)

    Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain

    2018-01-01

    The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…

  4. Continuous Chinese sign language recognition with CNN-LSTM

    Science.gov (United States)

    Yang, Su; Zhu, Qing

    2017-07-01

The goal of sign language recognition (SLR) is to translate sign language into text and so provide a convenient communication tool between deaf and hearing people. In this paper, we formulate a model based on a convolutional neural network (CNN) combined with a Long Short-Term Memory (LSTM) network in order to accomplish continuous recognition. With the strong representational ability of the CNN, the information in frames captured from Chinese sign language (CSL) videos can be learned and transformed into vectors. Since a video can be regarded as an ordered sequence of frames, an LSTM model is connected to the fully-connected layer of the CNN. As a recurrent neural network (RNN), the LSTM is suitable for sequence learning tasks, with the capability of recognizing patterns defined by temporal distance. Compared with a traditional RNN, an LSTM performs better at storing and accessing information. We evaluate this method on our self-built dataset of 40 daily vocabulary items. The experimental results show that the CNN-LSTM recognition method can achieve a high recognition rate with small training sets, which will meet the needs of a real-time SLR system.
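The LSTM's advantage over a plain RNN comes from its gating mechanism. As an illustrative sketch only (not the paper's implementation; the weight layout is our own), one LSTM time step over a per-frame feature vector looks like this in pure Python:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM time step over a feature vector x (e.g. a CNN frame embedding).

    W maps each gate name ('i', 'f', 'o', 'g') to a tuple of
    (input weights, hidden weights, bias), all plain nested lists.
    """
    def gate(name, act):
        wx, wh, b = W[name]
        pre = [sum(wx[r][j] * x[j] for j in range(len(x))) +
               sum(wh[r][j] * h_prev[j] for j in range(len(h_prev))) + b[r]
               for r in range(len(b))]
        return [act(p) for p in pre]

    i = gate("i", sigmoid)    # input gate: how much new content to write
    f = gate("f", sigmoid)    # forget gate: how much old cell state to keep
    o = gate("o", sigmoid)    # output gate: how much cell state to expose
    g = gate("g", math.tanh)  # candidate cell content
    c = [f[k] * c_prev[k] + i[k] * g[k] for k in range(len(c_prev))]
    h = [o[k] * math.tanh(c[k]) for k in range(len(c))]
    return h, c
```

In a CNN-LSTM pipeline, each frame's CNN embedding would be fed through such a step in sequence, with the final hidden state passed to a classifier over the vocabulary; a real system would use a deep-learning framework rather than hand-rolled arithmetic.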

  5. Poetry in South African Sign Language: What is different?

    African Journals Online (AJOL)

    Mary Theresa Biberauer

    The study of literary expression in sign languages has increased over the last twenty .... extensively to express emotion on the part of a character in the narrative. ... township in her non-manual facial expressions while signing manually what is ...

  6. Generation of Signs within Semantic and Phonological Categories: Data from Deaf Adults and Children Who Use American Sign Language

    Science.gov (United States)

    Beal-Alvarez, Jennifer S.; Figueroa, Daileen M.

    2017-01-01

    Two key areas of language development include semantic and phonological knowledge. Semantic knowledge relates to word and concept knowledge. Phonological knowledge relates to how language parameters combine to create meaning. We investigated signing deaf adults' and children's semantic and phonological sign generation via one-minute tasks,…

  7. Extricating Manual and Non-Manual Features for Subunit Level Medical Sign Modelling in Automatic Sign Language Classification and Recognition.

    Science.gov (United States)

    R, Elakkiya; K, Selvamani

    2017-09-22

Subunit segmentation and modelling in medical sign language is an important topic in linguistics-oriented and vision-based Sign Language Recognition (SLR). Many previous efforts identified functional subunits from the viewpoint of linguistic syllables, but such syllable-based subunit extraction is not feasible with real-world computer vision techniques. In addition, present recognition systems are designed so that they detect only signer-dependent actions under restricted laboratory conditions. This research paper aims at solving these two important issues: (1) subunit extraction and (2) signer-independent action in visual sign language recognition. Subunit extraction involves the sequential and parallel breakdown of sign gestures without any prior knowledge of syllables or the number of subunits. A novel Bayesian Parallel Hidden Markov Model (BPaHMM) is introduced for subunit extraction, combining the features of manual and non-manual parameters to yield better results in classification and recognition of signs. Signer-independent action aims at using a single web camera for different signer behaviour patterns and for cross-signer validation. Experimental results show that the proposed signer-independent subunit-level modelling for sign language classification and recognition improves on other existing works.
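The BPaHMM builds on hidden Markov models. As generic background (this is the standard HMM forward algorithm, not BPaHMM itself, and the toy states, probabilities, and observation labels below are invented for illustration), scoring an observation sequence under an HMM looks like this:

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Total probability of an observation sequence under an HMM (forward algorithm)."""
    # Initialise with the first observation.
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    # Inductive step: for each state, sum over all paths leading into it.
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in states)
                 for s in states}
    return sum(alpha.values())

# Toy two-state model; "hold"/"move" observations stand in for extracted features.
states = ("S1", "S2")
start = {"S1": 0.6, "S2": 0.4}
trans = {"S1": {"S1": 0.7, "S2": 0.3}, "S2": {"S1": 0.4, "S2": 0.6}}
emit = {"S1": {"hold": 0.5, "move": 0.5}, "S2": {"hold": 0.9, "move": 0.1}}
print(forward(["hold", "move"], states, start, trans, emit))
```

Classification then amounts to scoring a feature sequence against one trained HMM per sign (or subunit) and choosing the highest-probability model; the "parallel" aspect of BPaHMM refers to coupling separate chains for manual and non-manual feature streams.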

  8. Numeral-Incorporating Roots in Numeral Systems: A Comparative Analysis of Two Sign Languages

    Science.gov (United States)

    Fuentes, Mariana; Massone, Maria Ignacia; Fernandez-Viader, Maria del Pilar; Makotrinsky, Alejandro; Pulgarin, Francisca

    2010-01-01

    Numeral-incorporating roots in the numeral systems of Argentine Sign Language (LSA) and Catalan Sign Language (LSC), as well as the main features of the number systems of both languages, are described and compared. Informants discussed the use of numerals and roots in both languages (in most cases in natural contexts). Ten informants took part in…

  9. Observations on Word Order in Saudi Arabian Sign Language

    Science.gov (United States)

    Sprenger, Kristen; Mathur, Gaurav

    2012-01-01

    This article focuses on the syntactic level of the grammar of Saudi Arabian Sign Language by exploring some word orders that occur in personal narratives in the language. Word order is one of the main ways in which languages indicate the main syntactic roles of subjects, verbs, and objects; others are verbal agreement and nominal case morphology.…

  10. The assessment and treatment of prosodic disorders and neurological theories of prosody.

    Science.gov (United States)

    Diehl, Joshua J; Paul, Rhea

    2009-08-01

In this article, we comment on specific aspects of Peppé (2009). In particular, we address the assessment and treatment of prosody in clinical settings and discuss current theory on neurological models of prosody. We argue that in order for prosodic assessment instruments and treatment programs to be clinically effective, we need assessment instruments that: (1) have a representative normative comparison sample and strong psychometric properties; (2) are based on empirical information regarding the typical sequence of prosodic acquisition and are sensitive to developmental change; (3) meaningfully subcategorize various aspects of prosody; (4) use tasks that have ecological validity; and (5) have clinical properties, such as length and ease of administration, that allow them to become part of standard language assessment batteries. In addition, we argue that current theories of prosody processing in the brain are moving toward network models that involve multiple brain areas and are crucially dependent on cortical communication. The implications of these observations for future research and clinical practice are outlined.

  11. Medical Signbank as a Model for Sign Language Planning? A Review of Community Engagement

    Science.gov (United States)

    Napier, Jemina; Major, George; Ferrara, Lindsay; Johnston, Trevor

    2015-01-01

    This paper reviews a sign language planning project conducted in Australia with deaf Auslan users. The Medical Signbank project utilised a cooperative language planning process to engage with the Deaf community and sign language interpreters to develop an online interactive resource of health-related signs, in order to address a gap in the health…

  12. American Sign Language Syntax and Analogical Reasoning Skills Are Influenced by Early Acquisition and Age of Entry to Signing Schools for the Deaf.

    Science.gov (United States)

    Henner, Jon; Caldwell-Harris, Catherine L; Novogrodsky, Rama; Hoffmeister, Robert

    2016-01-01

Failing to acquire language in early childhood because of language deprivation is a rare and exceptional event, except in one population. Deaf children who grow up without access to indirect language through listening, speech-reading, or sign language experience language deprivation. Studies of Deaf adults have revealed that late acquisition of sign language is associated with lasting deficits. However, much remains unknown about language deprivation in Deaf children, allowing myths and misunderstandings regarding sign language to flourish. To fill this gap, we examined signing ability in a large naturalistic sample of Deaf children attending schools for the Deaf where American Sign Language (ASL) is used by peers and teachers. Ability in ASL was measured using a syntactic judgment test and a language-based analogical reasoning test, which are two sub-tests of the ASL Assessment Inventory. The influence of two age-related variables was examined: whether or not ASL was acquired from birth in the home from one or more Deaf parents, and the age of entry to the school for the Deaf. Note that for non-native signers, this latter variable is often the age of first systematic exposure to ASL. Both of these types of age-dependent language experiences influenced subsequent signing ability. Scores on the two tasks declined with increasing age of school entry. The influence of age of starting school was not linear. Test scores were generally lower for Deaf children who entered the school of assessment after the age of 12. The positive influence of signing from birth was found for students at all ages tested (7;6-18;5 years old) and for children of all age-of-entry groupings. Our results reflect a continuum of outcomes which show that experience with language is a continuous variable that is sensitive to maturational age.

  13. Robust emotion recognition using spectral and prosodic features

    CERN Document Server

    Rao, K Sreenivasa

    2013-01-01

    In this brief, the authors discuss recently explored spectral (sub-segmental and pitch synchronous) and prosodic (global and local features at word and syllable levels in different parts of the utterance) features for discerning emotions in a robust manner. The authors also delve into the complementary evidences obtained from excitation source, vocal tract system and prosodic features for the purpose of enhancing emotion recognition performance. Features based on speaking rate characteristics are explored with the help of multi-stage and hybrid models for further improving emotion recognition performance. Proposed spectral and prosodic features are evaluated on real life emotional speech corpus.

  14. Neural Basis of Action Understanding: Evidence from Sign Language Aphasia.

    Science.gov (United States)

    Rogalsky, Corianne; Raphel, Kristin; Tomkovicz, Vivian; O'Grady, Lucinda; Damasio, Hanna; Bellugi, Ursula; Hickok, Gregory

    2013-01-01

The neural basis of action understanding is a hotly debated issue. The mirror neuron account holds that motor simulation in fronto-parietal circuits is critical to action understanding including speech comprehension, while others emphasize the ventral stream in the temporal lobe. Evidence from speech strongly supports the ventral stream account, but on the other hand, evidence from manual gesture comprehension (e.g., in limb apraxia) has led to contradictory findings. Here we present a lesion analysis of sign language comprehension. Sign language is an excellent model for studying mirror system function in that it bridges the gap between the visual-manual system in which mirror neurons are best characterized and language systems which have represented a theoretical target of mirror neuron research. Twenty-one lifelong deaf signers with focal cortical lesions performed two tasks: one involving the comprehension of individual signs and the other involving comprehension of signed sentences (commands). Participants' lesions, as indicated on MRI or CT scans, were mapped onto a template brain to explore the relationship between lesion location and sign comprehension measures. Single sign comprehension was not significantly affected by left hemisphere damage. Sentence sign comprehension impairments were associated with left temporal-parietal damage. We found that damage to mirror-system-related regions in the left frontal lobe was not associated with deficits on either of these comprehension tasks. We conclude that the mirror system is not critically involved in action understanding.

  15. Why Doesn't Everyone Here Speak Sign Language? Questions of Language Policy, Ideology and Economics

    Science.gov (United States)

    Rayman, Jennifer

    2009-01-01

    This paper is a thought experiment exploring the possibility of establishing universal bilingualism in Sign Languages. Focusing in the first part on historical examples of inclusive signing societies such as Martha's Vineyard, the author suggests that it is not possible to create such naturally occurring practices of Sign Bilingualism in societies…

  16. Executive Functions and Prosodic Abilities in Children With High-Functioning Autism

    Directory of Open Access Journals (Sweden)

    Marisa G. Filipe

    2018-03-01

Little is known about the relationship between prosodic abilities and executive function skills. As deficits in executive functions (EFs) and prosodic impairments are characteristics of autism, we examined how EFs are related to prosodic performance in children with high-functioning autism (HFA). Fifteen children with HFA (M = 7.4 years; SD = 1.12), matched to 15 typically developing peers on age, gender, and non-verbal intelligence, participated in the study. The Profiling Elements of Prosody in Speech-Communication (PEPS-C) was used to assess prosodic performance. The Children's Color Trails Test (CCTT-1, CCTT-2, and CCTT Interference Index) was used as an indicator of executive control abilities. Our findings suggest no relation between prosodic abilities and visual search and processing speed (assessed by CCTT-1), but a significant link between prosodic skills and divided attention, working memory/sequencing, set-switching, and inhibition (assessed by CCTT-2 and the CCTT Interference Index). These findings may be of clinical relevance since difficulties in EFs and prosodic deficits are characteristic of many neurodevelopmental disorders. Future studies are needed to further investigate the nature of the relationship between impaired prosody and executive (dys)function.

  17. Phonological Development in Hearing Learners of a Sign Language: The Influence of Phonological Parameters, Sign Complexity, and Iconicity

    Science.gov (United States)

    Ortega, Gerardo; Morgan, Gary

    2015-01-01

    The present study implemented a sign-repetition task at two points in time to hearing adult learners of British Sign Language and explored how each phonological parameter, sign complexity, and iconicity affected sign production over an 11-week (22-hour) instructional period. The results show that training improves articulation accuracy and that…

  18. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    Science.gov (United States)

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

The aim of the present study was to characterize effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 ASL learners performed this task in the fMRI scanner. Results indicated that L2 American Sign Language (ASL) learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Social construction of American sign language--English interpreters.

    Science.gov (United States)

    McDermid, Campbell

    2009-01-01

Instructors in 5 American Sign Language--English Interpreter Programs and 4 Deaf Studies Programs in Canada were interviewed and asked to discuss their experiences as educators. Within a qualitative research paradigm, their comments were grouped into a number of categories tied to the social construction of American Sign Language--English interpreters, such as learners' age and education and the characteristics of good citizens within the Deaf community. According to the participants, younger students were adept at language acquisition, whereas older learners more readily understood the purpose of lessons. Children of deaf adults were seen as more culturally aware. The participants' beliefs echoed the theories of P. Freire (1970) that educators consider the reality of each student and their praxis and were responsible for facilitating student self-awareness. Important characteristics in the social construction of students included independence, an appropriate attitude, an understanding of Deaf culture, ethical behavior, community involvement, and a willingness to pursue lifelong learning.

  20. The Link between Form and Meaning in American Sign Language: Lexical Processing Effects

    Science.gov (United States)

    Thompson, Robin L.; Vinson, David P.; Vigliocco, Gabriella

    2009-01-01

Signed languages exploit iconicity (the transparent relationship between meaning and form) to a greater extent than spoken languages, where it is largely limited to onomatopoeia. In a picture-sign matching experiment measuring reaction times, the authors examined the potential advantage of iconicity both for 1st- and 2nd-language learners of…

  1. The Sign Language Situation in Mali

    Science.gov (United States)

    Nyst, Victoria

    2015-01-01

    This article gives a first overview of the sign language situation in Mali and its capital, Bamako, located in the West African Sahel. Mali is a highly multilingual country with a significant incidence of deafness, for which meningitis appears to be the main cause, coupled with limited access to adequate health care. In comparison to neighboring…

  2. Word Order in Russian Sign Language

    Science.gov (United States)

    Kimmelman, Vadim

    2012-01-01

    In this paper the results of an investigation of word order in Russian Sign Language (RSL) are presented. A small corpus of narratives based on comic strips by nine native signers was analyzed and a picture-description experiment (based on Volterra et al. 1984) was conducted with six native signers. The results are the following: the most frequent…

  3. The effect of L1 prosodic backgrounds of Cantonese and Japanese speakers on the perception of Mandarin tones after training

    Science.gov (United States)

    So, Connie K.

    2005-04-01

The present study investigated to what extent listeners' L1 prosodic backgrounds affect their learning of a new tonal system. The question as to whether native speakers of a tone language perform differently from those of a pitch-accent language will be addressed. Twenty native speakers of Hong Kong Cantonese (a tone language) and Japanese (a pitch-accent language) were assigned to two groups. All of them had had no prior knowledge of Mandarin and had never received any form of musical training before they participated in the study. Their performance on the identification of Mandarin tones before and after short-term training was compared. Analysis of listeners' tonal confusions in the pretest, posttest, and generalization tests revealed that both Cantonese and Japanese listeners had more confusion for two contrastive tone pairs: Tone 1-Tone 4 and Tone 2-Tone 3. Moreover, Cantonese speakers consistently had greater difficulty than Japanese speakers in distinguishing the tones in each pair. These results imply that listeners' L1 prosodic backgrounds are at work during the process of learning a new tonal system. The findings will be further discussed in terms of the Perceptual Assimilation Model (Best, 1995). [Work supported by SSHRC.]

  4. Sign Language and Spoken Language for Children With Hearing Loss: A Systematic Review.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung

    2016-01-01

Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. The Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.

  5. Neural systems supporting linguistic structure, linguistic experience, and symbolic communication in sign language and gesture.

    Science.gov (United States)

    Newman, Aaron J; Supalla, Ted; Fernandez, Nina; Newport, Elissa L; Bavelier, Daphne

    2015-09-15

    Sign languages used by deaf communities around the world possess the same structural and organizational properties as spoken languages: In particular, they are richly expressive and also tightly grammatically constrained. They therefore offer the opportunity to investigate the extent to which the neural organization for language is modality independent, as well as to identify ways in which modality influences this organization. The fact that sign languages share the visual-manual modality with a nonlinguistic symbolic communicative system-gesture-further allows us to investigate where the boundaries lie between language and symbolic communication more generally. In the present study, we had three goals: to investigate the neural processing of linguistic structure in American Sign Language (using verbs of motion classifier constructions, which may lie at the boundary between language and gesture); to determine whether we could dissociate the brain systems involved in deriving meaning from symbolic communication (including both language and gesture) from those specifically engaged by linguistically structured content (sign language); and to assess whether sign language experience influences the neural systems used for understanding nonlinguistic gesture. The results demonstrated that even sign language constructions that appear on the surface to be similar to gesture are processed within the left-lateralized frontal-temporal network used for spoken languages-supporting claims that these constructions are linguistically structured. Moreover, although nonsigners engage regions involved in human action perception to process communicative, symbolic gestures, signers instead engage parts of the language-processing network-demonstrating an influence of experience on the perception of nonlinguistic stimuli.

  6. Three-dimensional grammar in the brain: Dissociating the neural correlates of natural sign language and manually coded spoken language.

    Science.gov (United States)

    Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł

    2015-05-01

    In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system called SJM (system językowo-migowy) preserves the grammatical and lexical structure of spoken Polish and since 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we compare, with fMRI method, the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs compared to either SJM or PJM without CCs recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Prior knowledge of deaf students fluent in brazilian sign languages regarding the algebraic language in high school

    Directory of Open Access Journals (Sweden)

    Silvia Teresinha Frizzarini

    2014-06-01

There is little research offering deeper reflection on the study of algebra with deaf students. In order to validate and disseminate educational activities in that context, this article aims at highlighting the prior knowledge of deaf students, fluent in Brazilian Sign Language, regarding the algebraic language used in high school. The theoretical framework used was Duval's theory, with analysis of the changes, by treatment and conversion, of different registers of semiotic representation, in particular inequalities. The methodology used was the application of a diagnostic evaluation performed with deaf students, all fluent in Brazilian Sign Language, in a special school located in the north of Paraná State. We emphasize the need to work in both directions of conversion, in different languages, especially when the starting register is the graphic one. Therefore, the conclusion reached was that one should not separate the algebraic representation from other registers, because sign language must perform not only the communication function but also the functions of objectification and treatment, which are fundamental in cognitive development.

  8. Where "Sign Language Studies" Has Led Us in Forty Years: Opening High School and University Education for Deaf People in Viet Nam through Sign Language Analysis, Teaching, and Interpretation

    Science.gov (United States)

    Woodward, James; Hoa, Nguyen Thi

    2012-01-01

    This paper discusses how the Nippon Foundation-funded project "Opening University Education to Deaf People in Viet Nam through Sign Language Analysis, Teaching, and Interpretation," also known as the Dong Nai Deaf Education Project, has been implemented through sign language studies from 2000 through 2012. This project has provided deaf…

  9. Deriving prosodic structures

    NARCIS (Netherlands)

    Günes, Güliz

    2015-01-01

    When we speak, we speak in prosodic chunks. That is, in the speech flow, we produce sound strings that are systematically parsed into intonational units. The parsing procedure not only eases the production of the speaker, but it also provides the hearer clues about how units of meaning interact in

  10. Signing Earth Science: Accommodations for Students Who Are Deaf or Hard of Hearing and Whose First Language Is Sign

    Science.gov (United States)

    Vesel, J.; Hurdich, J.

    2014-12-01

    TERC and Vcom3D used the SigningAvatar® accessibility software to research and develop a Signing Earth Science Dictionary (SESD) of approximately 750 standards-based Earth science terms for high school students who are deaf and hard of hearing and whose first language is sign. The partners also evaluated the extent to which use of the SESD furthers understanding of Earth science content, command of the language of Earth science, and the ability to study Earth science independently. Disseminated as a Web-based version and App, the SESD is intended to serve the ~36,000 grade 9-12 students who are deaf or hard of hearing and whose first language is sign, the majority of whom leave high school reading at the fifth-grade level or below. It is also intended for teachers and interpreters who interact with members of this population and professionals working with Earth science education programs during field trips, internships, etc. The signed SESD terms have been incorporated into a Mobile Communication App (MCA). This App for Android is intended to facilitate communication between English speakers and persons who communicate in American Sign Language (ASL) or Signed English. It can translate words, phrases, or whole sentences from written or spoken English to animated signing. It can also fingerspell proper names and other words for which there are no signs. For our presentation, we will demonstrate the interactive features of the SigningAvatar® accessibility software that support the three principles of Universal Design for Learning (UDL) and have been incorporated into the SESD and MCA. Results from national field-tests will provide insight into the SESD's and MCA's potential applicability beyond grade 12 as accommodations that can be used for accessing the vocabulary deaf and hard of hearing students need for study of the geosciences and for facilitating communication about content. This work was funded in part by grants from NSF and the U.S. Department of Education.

  11. Referential shift in Nicaraguan Sign Language: a transition from lexical to spatial devices.

    Science.gov (United States)

    Kocab, Annemarie; Pyers, Jennie; Senghas, Ann

    2014-01-01

    Even the simplest narratives combine multiple strands of information, integrating different characters and their actions by expressing multiple perspectives of events. We examined the emergence of referential shift devices, which indicate changes among these perspectives, in Nicaraguan Sign Language (NSL). Sign languages, like spoken languages, mark referential shift grammatically with a shift in deictic perspective. In addition, sign languages can mark the shift with a point or a movement of the body to a specified spatial location in the three-dimensional space in front of the signer, capitalizing on the spatial affordances of the manual modality. We asked whether the use of space to mark referential shift emerges early in a new sign language by comparing the first two age cohorts of deaf signers of NSL. Eight first-cohort signers and 10 second-cohort signers watched video vignettes and described them in NSL. Narratives were coded for lexical (use of words) and spatial (use of signing space) devices. Although the cohorts did not differ significantly in the number of perspectives represented, second-cohort signers used referential shift devices to explicitly mark a shift in perspective in more of their narratives. Furthermore, while there was no significant difference between cohorts in the use of non-spatial, lexical devices, there was a difference in spatial devices, with second-cohort signers using them in significantly more of their narratives. This suggests that spatial devices have only recently increased as systematic markers of referential shift. Spatial referential shift devices may have emerged more slowly because they depend on the establishment of fundamental spatial conventions in the language. While the modality of sign languages can ultimately engender the syntactic use of three-dimensional space, we propose that a language must first develop systematic spatial distinctions before harnessing space for grammatical functions.

  12. Psychometric properties of a sign language version of the Mini International Neuropsychiatric Interview (MINI)

    OpenAIRE

    Øhre, Beate; Saltnes, Hege; von Tetzchner, Stephen; Falkum, Erik

    2014-01-01

    Background There is a need for psychiatric assessment instruments that enable reliable diagnoses in persons with hearing loss who have sign language as their primary language. The objective of this study was to assess the validity of the Norwegian Sign Language (NSL) version of the Mini International Neuropsychiatric Interview (MINI). Methods The MINI was translated into NSL. Forty-one signing patients consecutively referred to two specialised psychiatric units were assessed with a diagnos...

  13. PROPOSING A LANGUAGE EXPERIENCE AND SELF-ASSESSMENT OF PROFICIENCY QUESTIONNAIRE FOR BILINGUAL BRAZILIAN SIGN LANGUAGE/PORTUGUESE HEARING TEACHERS

    Directory of Open Access Journals (Sweden)

    Ingrid FINGER

    2014-12-01

    Full Text Available This article presents a language experience and self-assessment of proficiency questionnaire for hearing teachers who use Brazilian Sign Language and Portuguese in their teaching practice. By focusing on hearing teachers who work in Deaf education contexts, this questionnaire is presented as a tool that may complement the assessment of the linguistic skills of hearing teachers. This proposal takes into account important factors in bilingualism studies, such as the importance of knowing the participant’s context with respect to family, professional and social background (KAUFMANN, 2010). This work uses as models the following questionnaires: LEAP-Q (MARIAN; BLUMENFELD; KAUSHANSKAYA, 2007), SLSCO – Sign Language Skills Classroom Observation (REEVES et al., 2000) and the Language Attitude Questionnaire (KAUFMANN, 2010), taking into consideration the different kinds of exposure to Brazilian Sign Language. The questionnaire is designed for bilingual bimodal hearing teachers who work in bilingual schools for the Deaf or in the specialized educational department that assists deaf students.

  14. The effect of sign language structure on complex word reading in Chinese deaf adolescents.

    Science.gov (United States)

    Lu, Aitao; Yu, Yanping; Niu, Jiaxin; Zhang, John X

    2015-01-01

    The present study was carried out to investigate whether sign language structure plays a role in the processing of complex words (i.e., derivational and compound words), in particular, the delay of complex word reading in deaf adolescents. Chinese deaf adolescents were found to respond faster to derivational words than to compound words for one-sign-structure words, but showed comparable performance for two-sign-structure words. For both derivational and compound words, response latencies to one-sign-structure words were shorter than to two-sign-structure words. These results provide strong evidence that the structure of sign language affects written word processing in Chinese. Additionally, differences between derivational and compound words in the one-sign-structure condition indicate that Chinese deaf adolescents acquire print morphological awareness. The results also showed that delayed word reading was found in derivational words with two signs (DW-2), compound words with one sign (CW-1), and compound words with two signs (CW-2), but not in derivational words with one sign (DW-1), with the delay being maximum in DW-2, medium in CW-2, and minimum in CW-1, suggesting that the structure of sign language has an impact on the delayed processing of Chinese written words in deaf adolescents. These results provide insight into the mechanisms about how sign language structure affects written word processing and its delayed processing relative to their hearing peers of the same age.

  15. The effect of sign language structure on complex word reading in Chinese deaf adolescents.

    Directory of Open Access Journals (Sweden)

    Aitao Lu

    Full Text Available The present study was carried out to investigate whether sign language structure plays a role in the processing of complex words (i.e., derivational and compound words), in particular, the delay of complex word reading in deaf adolescents. Chinese deaf adolescents were found to respond faster to derivational words than to compound words for one-sign-structure words, but showed comparable performance for two-sign-structure words. For both derivational and compound words, response latencies to one-sign-structure words were shorter than to two-sign-structure words. These results provide strong evidence that the structure of sign language affects written word processing in Chinese. Additionally, differences between derivational and compound words in the one-sign-structure condition indicate that Chinese deaf adolescents acquire print morphological awareness. The results also showed that delayed word reading was found in derivational words with two signs (DW-2), compound words with one sign (CW-1), and compound words with two signs (CW-2), but not in derivational words with one sign (DW-1), with the delay being maximum in DW-2, medium in CW-2, and minimum in CW-1, suggesting that the structure of sign language has an impact on the delayed processing of Chinese written words in deaf adolescents. These results provide insight into the mechanisms about how sign language structure affects written word processing and its delayed processing relative to their hearing peers of the same age.

  16. Modality-specific processing precedes amodal linguistic processing during L2 sign language acquisition: A longitudinal study.

    Science.gov (United States)

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-01

    The present study tracked activation pattern differences in response to sign language processing by late hearing second language learners of American Sign Language. Learners were scanned before the start of their language courses. They were scanned again after their first semester of instruction and their second, for a total of 10 months of instruction. The study aimed to characterize modality-specific to modality-general processing throughout the acquisition of sign language. Results indicated that before the acquisition of sign language, neural substrates related to modality-specific processing were present. After approximately 45 h of instruction, the learners transitioned into processing signs on a phonological basis (e.g., supramarginal gyrus, putamen). After one more semester of input, learners transitioned once more to a lexico-semantic processing stage (e.g., left inferior frontal gyrus) at which language control mechanisms (e.g., left caudate, cingulate gyrus) were activated. During these transitional steps right hemispheric recruitment was observed, with increasing left-lateralization, which is similar to other native signers and L2 learners of spoken language; however, specialization for sign language processing with activation in the inferior parietal lobule (i.e., angular gyrus), even for late learners, was observed. As such, the present study is the first to track L2 acquisition of sign language learners in order to characterize modality-independent and modality-specific mechanisms for bilingual language processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. An Intelligent Computer-Based System for Sign Language Tutoring

    Science.gov (United States)

    Ritchings, Tim; Khadragi, Ahmed; Saeb, Magdy

    2012-01-01

    A computer-based system for sign language tutoring has been developed using a low-cost data glove and a software application that processes the movement signals for signs in real-time and uses Pattern Matching techniques to decide if a trainee has closely replicated a teacher's recorded movements. The data glove provides 17 movement signals from…

  18. On language acquisition in speech and sign: development drives combinatorial structure in both modalities

    Directory of Open Access Journals (Sweden)

    Gary Morgan

    2014-11-01

    Full Text Available Languages are composed of a conventionalized system of parts which allow speakers and signers to compose an infinite number of form-meaning mappings through phonological and morphological combinations. This level of linguistic organization distinguishes language from other communicative acts such as gestures. In contrast to signs, gestures are made up of meaning units that are mostly holistic. Children exposed to signed and spoken languages from early in life develop grammatical structure following similar rates and patterns. This is interesting, because signed languages are perceived and articulated in very different ways from their spoken counterparts, with many signs displaying surface resemblances to gestures. The acquisition of forms and meanings in child signers and talkers might thus have been a different process. Yet in one sense both groups are faced with a similar problem: 'How do I make a language with combinatorial structure?' In this paper I argue that first language development itself enables this to happen, and by broadly similar mechanisms across modalities. Combinatorial structure is the outcome of phonological simplifications and of productivity in using verb morphology by children in sign and speech.

  19. BILINGUAL MULTIMODAL SYSTEM FOR TEXT-TO-AUDIOVISUAL SPEECH AND SIGN LANGUAGE SYNTHESIS

    Directory of Open Access Journals (Sweden)

    A. A. Karpov

    2014-09-01

    Full Text Available We present a conceptual model, architecture and software of a multimodal system for audio-visual speech and sign language synthesis from input text. The main components of the developed multimodal synthesis system (signing avatar) are: an automatic text processor for input text analysis; a simulated 3D model of a human head; a computer text-to-speech synthesizer; a system for audio-visual speech synthesis; a simulated 3D model of human hands and upper body; and a multimodal user interface integrating all the components for generation of audio, visual and signed speech. The proposed system performs automatic translation of input textual information into speech (audio information) and gestures (video information), fuses the information, and outputs it in the form of multimedia information. A user can input any grammatically correct text in Russian or Czech to the system; it is analyzed by the text processor to detect sentences, words and characters. Then this textual information is converted into symbols of the sign language notation. We apply the international Hamburg Notation System (HamNoSys), which describes the main differential features of each manual sign: hand shape, hand orientation, place and type of movement. On their basis the 3D signing avatar displays the elements of the sign language. The virtual 3D model of the human head and upper body has been created using the VRML virtual reality modeling language, and it is controlled by software based on the OpenGL graphics library. The developed multimodal synthesis system is a universal one, since it is oriented toward both regular users and disabled people (in particular, the hard-of-hearing and visually impaired), and it serves for multimedia output (by audio and visual modalities) of input textual information.
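    The pipeline described in this abstract (text analysis, conversion to sign-notation symbols, avatar animation) can be sketched at a toy level. The lexicon entries, feature names, and command format below are invented for illustration only; they are not the actual HamNoSys inventory or this system's API.

    ```python
    # Toy sketch of a text-to-sign pipeline: tokenize the input text,
    # look up each word's HamNoSys-style feature bundle in a small
    # lexicon, and emit a sequence of avatar commands. Unknown words
    # fall back to fingerspelling, as the abstract describes for the
    # general approach of signing avatars. All entries are invented.
    SIGN_LEXICON = {
        "hello": {"handshape": "flat", "orientation": "palm-out",
                  "location": "forehead", "movement": "arc"},
        "world": {"handshape": "fist", "orientation": "palm-down",
                  "location": "neutral", "movement": "circle"},
    }

    def text_to_sign_commands(text):
        commands = []
        for token in text.lower().split():
            features = SIGN_LEXICON.get(token)
            if features is None:
                # No lexicon entry: fingerspell the word letter by letter.
                commands.append(("fingerspell", token))
            else:
                commands.append(("sign", features))
        return commands

    cmds = text_to_sign_commands("Hello world xyz")
    ```

    In a full system each `("sign", features)` command would drive the 3D avatar's hand and body model; here it is simply returned as data.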

  20. Atypical Speech and Language Development: A Consensus Study on Clinical Signs in the Netherlands

    Science.gov (United States)

    Visser-Bochane, Margot I.; Gerrits, Ellen; van der Schans, Cees P.; Reijneveld, Sijmen A.; Luinge, Margreet R.

    2017-01-01

    Background: Atypical speech and language development is one of the most common developmental difficulties in young children. However, which clinical signs characterize atypical speech-language development at what age is not clear. Aim: To achieve a national and valid consensus on clinical signs and red flags (i.e. most urgent clinical signs) for…

  1. Sign Language Translation in State Administration in Germany: Barrier Free Web Accessibility

    OpenAIRE

    Lišková, Kateřina

    2014-01-01

    The aim of this thesis is to describe Web accessibility in state administration in the Federal Republic of Germany in relation to the socio-demographic group of deaf sign language users who did not have the opportunity to gain proper knowledge of a written form of the German language. The demand of the Deaf to information in an accessible form as based on legal documents is presented in relation to the theory of translation. How translating from written texts into sign language works in pract...

  2. Use of Information and Communication Technologies in Sign Language Test Development: Results of an International Survey

    Science.gov (United States)

    Haug, Tobias

    2015-01-01

    Sign language test development is a relatively new field within sign linguistics, motivated by the practical need for assessment instruments to evaluate language development in different groups of learners (L1, L2). Due to the lack of research on the structure and acquisition of many sign languages, developing an assessment instrument poses…

  3. Clinical Focus on Prosodic, Discursive and Pragmatic Treatment for Right Hemisphere Damaged Adults: What's Right?

    Directory of Open Access Journals (Sweden)

    Perrine Ferré

    2011-01-01

    Full Text Available Researchers and clinicians acknowledge today that the contribution of both cerebral hemispheres is necessary for full and adequate verbal communication. Indeed, it is estimated that at least 50% of right-brain-damaged individuals display impairments of the prosodic, discursive, pragmatic and/or lexical-semantic dimensions of communication. Since the 1990s, researchers have focused on the description and assessment of these impairments, and it is only recently that authors have shown interest in planning specific intervention approaches. However, therapists in rehabilitation settings still have very few available tools. This review of recent literature demonstrates that, even though theoretical knowledge needs further methodological investigation, intervention guidelines can be identified to target right hemisphere damage communication impairments in clinical practice. These principles can be incorporated by speech and language pathologists, in a structured intervention framework, aiming at fully addressing the prosodic, discursive and pragmatic components of communication.

  4. The corpus-driven revolution in Polish Sign Language: the interview with Dr. Paweł Rutkowski

    Directory of Open Access Journals (Sweden)

    Iztok Kosem

    2018-02-01

    Full Text Available Dr. Paweł Rutkowski is head of the Section for Sign Linguistics at the University of Warsaw. He is a general linguist and a specialist in the field of syntax of natural languages, carrying out research on Polish Sign Language (polski język migowy — PJM. He has been awarded a number of prizes, grants and scholarships by such institutions as the Foundation for Polish Science, Polish Ministry of Science and Higher Education, National Science Centre, Poland, Polish–U.S. Fulbright Commission, Kosciuszko Foundation and DAAD. Dr. Rutkowski leads the team developing the Corpus of Polish Sign Language and the Corpus-based Dictionary of Polish Sign Language, the first dictionary of this language prepared in compliance with modern lexicographical standards. The dictionary is an open-access publication, available freely at the following address: http://www.slownikpjm.uw.edu.pl/en/. This interview took place at eLex 2017, a biennial conference on electronic lexicography, where Dr. Rutkowski was awarded the Adam Kilgarriff Prize and gave a keynote address entitled Sign language as a challenge to electronic lexicography: The Corpus-based Dictionary of Polish Sign Language and beyond. The interview was conducted by Dr. Victoria Nyst from Leiden University, Faculty of Humanities, and Dr. Iztok Kosem from the University of Ljubljana, Faculty of Arts.

  5. The verbal-visual discourse in Brazilian Sign Language – Libras

    Directory of Open Access Journals (Sweden)

    Tanya Felipe

    2013-11-01

    Full Text Available This article aims to broaden the discussion on verbal-visual utterances, reflecting upon theoretical assumptions of the Bakhtin Circle that can reinforce the argument that the utterances of a language that employs a visual-gestural modality convey plastic-pictorial and spatial values of signs also through non-manual markers (NMMs). This research highlights the difference between affective expressions, which are paralinguistic communications that may complement an utterance, and verbal-visual grammatical markers, which are linguistic because they are part of the architecture of the phonological, morphological, syntactic-semantic and discursive levels of a particular language. These markers are described taking Brazilian Sign Language (Libras) as a starting point, thereby including this language in discussions of verbal-visual discourse when investigating the need to research this discourse also in the linguistic analysis of oral-auditory modality languages, including Translinguistics as an area of knowledge that analyzes discourse, focusing upon the verbal-visual markers used by subjects in their utterance acts.

  6. The non-(existent) native signer: sign language research in a small deaf population

    NARCIS (Netherlands)

    Costello, B.; Fernández, J.; Landa, A.; Müller de Quadros, R.

    2008-01-01

    This paper examines the concept of a native language user and looks at the different definitions of native signer within the field of sign language research. A description of the deaf signing population in the Basque Country shows that the figure of 5-10% typically cited for deaf individuals born

  7. Dissociating linguistic and non-linguistic gesture processing: electrophysiological evidence from American Sign Language.

    Science.gov (United States)

    Grosvald, Michael; Gutierrez, Eva; Hafer, Sarah; Corina, David

    2012-04-01

    A fundamental advance in our understanding of human language would come from a detailed account of how non-linguistic and linguistic manual actions are differentiated in real time by language users. To explore this issue, we targeted the N400, an ERP component known to be sensitive to semantic context. Deaf signers saw 120 American Sign Language sentences, each consisting of a "frame" (a sentence without the last word; e.g. BOY SLEEP IN HIS) followed by a "last item" belonging to one of four categories: a high-close-probability sign (a "semantically reasonable" completion to the sentence; e.g. BED), a low-close-probability sign (a real sign that is nonetheless a "semantically odd" completion to the sentence; e.g. LEMON), a pseudo-sign (phonologically legal but non-lexical form), or a non-linguistic grooming gesture (e.g. the performer scratching her face). We found significant N400-like responses in the incongruent and pseudo-sign contexts, while the gestures elicited a large positivity. Copyright © 2012 Elsevier Inc. All rights reserved.

  8. Space and iconicity in German Sign Language (DGS)

    NARCIS (Netherlands)

    Perniss, P.M.

    2007-01-01

    This dissertation investigates the expression of spatial relationships in German Sign Language (Deutsche Gebärdensprache, DGS). The analysis focuses on linguistic expression in the spatial domain in two types of discourse: static scene description (location) and event narratives (location and

  9. Brief Report: A Mobile Application to Treat Prosodic Deficits in Autism Spectrum Disorder and Other Communication Impairments: A Pilot Study

    Science.gov (United States)

    Simmons, Elizabeth Schoen; Paul, Rhea; Shic, Frederick

    2016-01-01

    This study examined the acceptability of a mobile application, "SpeechPrompts," designed to treat prosodic disorders in children with ASD and other communication impairments. Ten speech-language pathologists (SLPs) in public schools and 40 of their students, 5-19 years with prosody deficits participated. Students received treatment with…

  10. Cross-Modal Recruitment of Auditory and Orofacial Areas During Sign Language in a Deaf Subject.

    Science.gov (United States)

    Martino, Juan; Velasquez, Carlos; Vázquez-Bourgon, Javier; de Lucas, Enrique Marco; Gomez, Elsa

    2017-09-01

    Modern sign languages used by deaf people are fully expressive, natural human languages that are perceived visually and produced manually. The literature contains little data concerning human brain organization in conditions of deficient sensory information such as deafness. A deaf-mute patient underwent surgery for a left temporoinsular low-grade glioma. The patient underwent awake surgery with intraoperative electrical stimulation mapping, allowing direct study of the cortical and subcortical organization of sign language. We found a similar distribution of language sites to what has been reported in mapping studies of patients with oral language, including 1) speech perception areas inducing anomias and alexias close to the auditory cortex (at the posterior portion of the superior temporal gyrus and supramarginal gyrus); 2) speech production areas inducing speech arrest (anarthria) at the ventral premotor cortex, close to the lip motor area and away from the hand motor area; and 3) subcortical stimulation-induced semantic paraphasias at the inferior fronto-occipital fasciculus at the temporal isthmus. The intraoperative setup for sign language mapping with intraoperative electrical stimulation in deaf-mute patients is similar to the setup described in patients with oral language. To elucidate the type of language errors, a sign language interpreter in close interaction with the neuropsychologist is necessary. Sign language is perceived visually and produced manually; however, this case revealed a cross-modal recruitment of auditory and orofacial motor areas. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Semantic Fluency in Deaf Children Who Use Spoken and Signed Language in Comparison with Hearing Peers

    Science.gov (United States)

    Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.

    2018-01-01

    Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…

  12. ASL-LEX: A lexical database of American Sign Language.

    Science.gov (United States)

    Caselli, Naomi K; Sehyr, Zed Sevcikova; Cohen-Goldberg, Ariel M; Emmorey, Karen

    2017-04-01

    ASL-LEX is a lexical database that catalogues information about nearly 1,000 signs in American Sign Language (ASL). It includes the following information: subjective frequency ratings from 25-31 deaf signers, iconicity ratings from 21-37 hearing non-signers, videoclip duration, sign length (onset and offset), grammatical class, and whether the sign is initialized, a fingerspelled loan sign, or a compound. Information about English translations is available for a subset of signs (e.g., alternate translations, translation consistency). In addition, phonological properties (sign type, selected fingers, flexion, major and minor location, and movement) were coded and used to generate sub-lexical frequency and neighborhood density estimates. ASL-LEX is intended for use by researchers, educators, and students who are interested in the properties of the ASL lexicon. An interactive website where the database can be browsed and downloaded is available at http://asl-lex.org.
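    To illustrate how a neighborhood density estimate can be derived from coded phonological properties like those listed in this abstract, the sketch below counts, for a target sign, the other signs that differ in at most one coded feature. The entries and feature values are made up, and ASL-LEX's actual coding scheme and density computation may differ; this only shows the general shape of such a measure.

    ```python
    # Each sign is coded as a tuple of phonological features
    # (sign type, selected fingers, flexion, major location, movement).
    # Two signs count as "neighbors" if they mismatch on at most one
    # feature. All glosses and feature values here are illustrative.
    signs = {
        "MOTHER": ("one-handed", "all", "open", "head", "contact"),
        "FATHER": ("one-handed", "all", "open", "head", "contact"),
        "FINE":   ("one-handed", "all", "open", "chest", "contact"),
        "BLACK":  ("one-handed", "index", "closed", "head", "path"),
    }

    def neighborhood_density(target, lexicon):
        """Count signs differing from `target` in at most one feature."""
        t = lexicon[target]
        count = 0
        for name, feats in lexicon.items():
            if name == target:
                continue
            mismatches = sum(a != b for a, b in zip(t, feats))
            if mismatches <= 1:
                count += 1
        return count

    density = neighborhood_density("MOTHER", signs)  # FATHER and FINE qualify
    ```

    With this toy lexicon, "MOTHER" has two neighbors ("FATHER" matches on every feature, "FINE" differs only in location), while "BLACK" has none.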

  13. Kinect-based sign language recognition of static and dynamic hand movements

    Science.gov (United States)

    Dalawis, Rando C.; Olayao, Kenneth Deniel R.; Ramos, Evan Geoffrey I.; Samonte, Mary Jane C.

    2017-02-01

    A different approach to sign language recognition of static and dynamic hand movements was developed in this study using a normalized correlation algorithm. The goal of this research was to translate fingerspelling sign language into text using MATLAB and Microsoft Kinect. Digital input images captured by the Kinect device are matched against template samples stored in a database. This Human Computer Interaction (HCI) prototype was developed to help people with communication disabilities express their thoughts with ease. Frame segmentation and feature extraction were used to give meaning to the captured images. Sequential and random testing was used to test both static and dynamic fingerspelling gestures. The researchers discuss some factors they encountered that caused misclassification of signs.
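    A minimal sketch of the template-matching idea named in this abstract: score a candidate image against each stored template with a normalized correlation coefficient and pick the best-scoring sign label. Real inputs would be segmented Kinect frames; the four-pixel "images", the labels, and the helper names below are purely illustrative.

    ```python
    # Normalized correlation between two equal-length pixel sequences:
    # subtract each sequence's mean, then divide the covariance-like sum
    # by the product of the deviations' magnitudes, giving a score in
    # [-1, 1] that is insensitive to brightness and contrast offsets.
    import math

    def normalized_correlation(image, template):
        mi = sum(image) / len(image)
        mt = sum(template) / len(template)
        num = sum((a - mi) * (b - mt) for a, b in zip(image, template))
        den = math.sqrt(sum((a - mi) ** 2 for a in image) *
                        sum((b - mt) ** 2 for b in template))
        return num / den if den else 0.0

    def classify(image, templates):
        # Pick the template (sign label) with the highest correlation.
        return max(templates,
                   key=lambda label: normalized_correlation(image, templates[label]))

    templates = {
        "A": [9, 9, 1, 1],   # bright-left toy "image"
        "B": [1, 9, 9, 1],   # bright-center toy "image"
    }
    label = classify([8, 9, 2, 1], templates)
    ```

    A production system would compute this score over whole image patches (e.g. OpenCV's normalized-correlation template matching) rather than tiny lists, but the decision rule is the same.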

  14. Real-time lexical comprehension in young children learning American Sign Language.

    Science.gov (United States)

    MacDonald, Kyle; LaMarr, Todd; Corina, David; Marchman, Virginia A; Fernald, Anne

    2018-04-16

    When children interpret spoken language in real time, linguistic information drives rapid shifts in visual attention to objects in the visual world. This language-vision interaction can provide insights into children's developing efficiency in language comprehension. But how does language influence visual attention when the linguistic signal and the visual world are both processed via the visual channel? Here, we measured eye movements during real-time comprehension of a visual-manual language, American Sign Language (ASL), by 29 native ASL-learning children (16-53 mos, 16 deaf, 13 hearing) and 16 fluent deaf adult signers. All signers showed evidence of rapid, incremental language comprehension, tending to initiate an eye movement before sign offset. Deaf and hearing ASL-learners showed similar gaze patterns, suggesting that the in-the-moment dynamics of eye movements during ASL processing are shaped by the constraints of processing a visual language in real time and not by differential access to auditory information in day-to-day life. Finally, variation in children's ASL processing was positively correlated with age and vocabulary size. Thus, despite competition for attention within a single modality, the timing and accuracy of visual fixations during ASL comprehension reflect information processing skills that are important for language acquisition regardless of language modality. © 2018 John Wiley & Sons Ltd.

  15. Sign Language Planning in the Netherlands between 1980 and 2010

    Science.gov (United States)

    Schermer, Trude

    2012-01-01

    This article discusses several aspects of language planning with respect to Sign Language of the Netherlands, or Nederlandse Gebarentaal (NGT). For nearly thirty years members of the Deaf community, the Dutch Deaf Council (Dovenschap) have been working together with researchers, several organizations in deaf education, and the organization of…

  16. ALPHABET SIGN LANGUAGE RECOGNITION USING LEAP MOTION TECHNOLOGY AND RULE BASED BACKPROPAGATION-GENETIC ALGORITHM NEURAL NETWORK (RBBPGANN)

    Directory of Open Access Journals (Sweden)

    Wijayanti Nurul Khotimah

    2017-01-01

    Full Text Available Sign language recognition helps people with normal hearing communicate effectively with the deaf and hearing-impaired. Based on a survey conducted by the Multi-Center Study in Southeast Asia, Indonesia ranked in the top four in the number of patients with hearing disability (4.6%). The existence of sign language recognition systems is therefore important. Some research has been conducted in this field, and many types of neural network have been used to recognize various sign languages; however, their performance still needs to be improved. This work focuses on the ASL (Alphabet Sign Language) in SIBI (Sign System of Indonesian Language), which uses one hand and 26 gestures. Thirty-four features were extracted using a Leap Motion controller. Further, a new method, Rule Based-Backpropagation Genetic Algorithm Neural Network (RB-BPGANN), was used to recognize these sign languages. This method combines rules with a Backpropagation Genetic Algorithm Neural Network (BPGANN). Experiments show that the proposed application can recognize sign language with up to 93.8% accuracy. It performs well on large multiclass problems and offers a solution to the overfitting problem in neural network algorithms.

  17. Exploring the use of dynamic language assessment with deaf children, who use American Sign Language: Two case studies.

    Science.gov (United States)

    Mann, Wolfgang; Peña, Elizabeth D; Morgan, Gary

    2014-01-01

    We describe a model for assessment of lexical-semantic organization skills in American Sign Language (ASL) within the framework of dynamic vocabulary assessment and discuss the applicability and validity of the use of mediated learning experiences (MLE) with deaf signing children. Two elementary students (ages 7;6 and 8;4) completed a set of four vocabulary tasks and received two 30-minute mediations in ASL. Each session consisted of several scripted activities focusing on the use of categorization. Both had experienced difficulties in providing categorically related responses in one of the vocabulary tasks used previously. Results showed that the two students exhibited notable differences with regard to their learning pace, information uptake, and effort required by the mediator. Furthermore, we observed signs of a shift in strategic behavior by the lower performing student during the second mediation. Results suggest that the use of dynamic assessment procedures in a vocabulary context was helpful in understanding children's strategies as related to learning potential. These results are discussed in terms of deaf children's cognitive modifiability, with implications for planning instruction and how MLE can be used with a population that uses ASL. The reader will (1) recognize the challenges in appropriate language assessment of deaf signing children; (2) recall the three areas explored to investigate whether a dynamic assessment approach is sensitive to differences in deaf signing children's language learning profiles; and (3) discuss how dynamic assessment procedures can make deaf signing children's individual language learning differences visible. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. The Processing of Biologically Plausible and Implausible forms in American Sign Language: Evidence for Perceptual Tuning.

    Science.gov (United States)

    Almeida, Diogo; Poeppel, David; Corina, David

    The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied by a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.

  19. Sign languages and the Common European Framework of Reference for Languages : Descriptors and approaches to assessment

    NARCIS (Netherlands)

    L. Leeson; Dr. Beppie van den Bogaerde; Tobias Haug; C. Rathmann

    2015-01-01

    This resource establishes European standards for sign languages for professional purposes in line with the Common European Framework of Reference for Languages (CEFR) and provides an overview of assessment descriptors and approaches. Drawing on preliminary work undertaken in adapting the CEFR to

  20. Assessing language skills in adult key word signers with intellectual disabilities: Insights from sign linguistics.

    Science.gov (United States)

    Grove, Nicola; Woll, Bencie

    2017-03-01

    Manual signing is one of the most widely used approaches to support the communication and language skills of children and adults who have intellectual or developmental disabilities, and problems with communication in spoken language. A recent series of papers reporting findings from this population raises critical issues for professionals in the assessment of multimodal language skills of key word signers. Approaches to assessment will differ depending on whether key word signing (KWS) is viewed as discrete from, or related to, natural sign languages. Two available assessments from these different perspectives are compared. Procedures appropriate to the assessment of sign language production are recommended as a valuable addition to the clinician's toolkit. Sign and speech need to be viewed as multimodal, complementary communicative endeavours, rather than as polarities. Whilst narrative has been shown to be a fruitful context for eliciting language samples, assessments for adult users should be designed to suit the strengths, needs and values of adult signers with intellectual disabilities, using materials that are compatible with their life course stage rather than those designed for young children. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. HIV/AIDS knowledge among adolescent sign-language users in ...

    African Journals Online (AJOL)

    ... particularly sign language users, in HIV-prevention programmes. Keywords: communication, disability, disability studies, hearing impairment, qualitative research, scoping study. African Journal of AIDS Research 2010, 9(3): 307–313 ...

  2. Deaf Students' Receptive and Expressive American Sign Language Skills: Comparisons and Relations

    Science.gov (United States)

    Beal-Alvarez, Jennifer S.

    2014-01-01

    This article presents receptive and expressive American Sign Language skills of 85 students, 6 through 22 years of age at a residential school for the deaf using the American Sign Language Receptive Skills Test and the Ozcaliskan Motion Stimuli. Results are presented by ages and indicate that students' receptive skills increased with age and…

  3. On the temporal dynamics of sign production: An ERP study in Catalan Sign Language (LSC).

    Science.gov (United States)

    Baus, Cristina; Costa, Albert

    2015-06-03

    This study investigates the temporal dynamics of sign production and how particular aspects of the signed modality influence the early stages of lexical access. To that end, we explored the electrophysiological correlates associated with sign frequency and iconicity in a picture signing task in a group of bimodal bilinguals. Moreover, a subset of the same participants was tested on the same task but naming the pictures instead. Our results revealed that both frequency and iconicity influenced lexical access in sign production. At the ERP level, iconicity effects originated very early in the course of signing (while absent in the spoken modality), suggesting a stronger activation of the semantic properties for iconic signs. Moreover, frequency effects were modulated by iconicity, suggesting that lexical access in signed language is determined by the iconic properties of the signs. These results support the idea that lexical access is sensitive to the same phenomena in word and sign production, but its time-course is modulated by particular aspects of the modality in which a lexical item will be finally articulated. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Prosodic Marking of Narrow Focus in Seoul Korean

    Directory of Open Access Journals (Sweden)

    Hae-Sung Jeon

    2017-01-01

    Full Text Available This paper explores prosodic marking of narrow (corrective) focus in Seoul Korean. Korean lacks lexical stress, and it has a phonologized association between the Accentual Phrase (AP) initial segment and intonation. In the experiment, 4 speakers read sentences including a two-item list designed to elicit either an L or H AP-initial tone. The durational variations, the pitch events at prosodic boundaries, and the F0 span in 32 sentences read neutrally and 64 sentences read with one of the items under focus were analyzed. The results show that the focused constituent consistently initiates a new prosodic phrase. In comparison to the neutrally spoken or defocused counterpart, the focused constituent was more likely to be realized as an Intonational Phrase (IP) in some contexts. Bitonal IP boundary tones were more likely to occur under focus than monotonal tones. In addition, in focused constituents, durational expansion particularly at the phrase edges, expansion in F0 span, and raising of the phrase-initial pitch were observed. On the other hand, defocused constituents were not phonetically reduced compared to the neutral counterparts. The results imply that the phonetic cues spreading over the focused constituent complement the exaggerated prosodic boundaries.
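
The F0 span measure analyzed in this record is commonly computed as the distance between a constituent's pitch minimum and maximum on a logarithmic (semitone) scale. The sketch below assumes that convention, which may differ from the paper's exact operationalization, and the function name is hypothetical:

```python
import math

def f0_span_semitones(f0_track):
    """F0 span of a constituent: the distance between the pitch minimum and
    maximum, in semitones (12 * log2(max/min)); 12 semitones = one octave.
    Zero values stand for unvoiced frames and are ignored."""
    voiced = [f for f in f0_track if f > 0]
    return 12.0 * math.log2(max(voiced) / min(voiced))
```

An expanded span under focus shows up directly as a larger value; for example, a track rising from 180 Hz to 240 Hz spans about 5 semitones.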

  5. Sign language interpreting education : Reflections on interpersonal skills

    NARCIS (Netherlands)

    Hammer, A.; van den Bogaerde, B.; Cirillo, L.; Niemants, N.

    2017-01-01

    We present a description of our didactic approach to train undergraduate sign language interpreters on their interpersonal and reflective skills. Based predominantly on the theory of role-space by Llewellyn-Jones and Lee (2014), we argue that dialogue settings require a dynamic role of the

  6. Deaf children attending different school environments: sign language abilities and theory of mind.

    Science.gov (United States)

    Tomasuolo, Elena; Valeri, Giovanni; Di Renzo, Alessio; Pasqualetti, Patrizio; Volterra, Virginia

    2013-01-01

    The present study examined whether full access to sign language as a medium for instruction could influence performance in Theory of Mind (ToM) tasks. Three groups of Italian participants (age range: 6-14 years) participated in the study: Two groups of deaf signing children and one group of hearing-speaking children. The two groups of deaf children differed only in their school environment: One group attended a school with a teaching assistant (TA; Sign Language is offered only by the TA to a single deaf child), and the other group attended a bilingual program (Italian Sign Language and Italian). Linguistic abilities and understanding of false belief were assessed using similar materials and procedures in spoken Italian with hearing children and in Italian Sign Language with deaf children. Deaf children attending the bilingual school performed significantly better than deaf children attending school with the TA in tasks assessing lexical comprehension and ToM, whereas the performance of hearing children was in between that of the two deaf groups. As for lexical production, deaf children attending the bilingual school performed significantly better than the two other groups. No significant differences were found between early and late signers or between children with deaf and hearing parents.

  7. Prediction in a visual language: real-time sentence processing in American Sign Language across development.

    Science.gov (United States)

    Lieberman, Amy M; Borovsky, Arielle; Mayberry, Rachel I

    2018-01-01

    Prediction during sign language comprehension may enable signers to integrate linguistic and non-linguistic information within the visual modality. In two eyetracking experiments, we investigated American Sign Language (ASL) semantic prediction in deaf adults and children (aged 4-8 years). Participants viewed ASL sentences in a visual world paradigm in which the sentence-initial verb was either neutral or constrained relative to the sentence-final target noun. Adults and children made anticipatory looks to the target picture before the onset of the target noun in the constrained condition only, showing evidence for semantic prediction. Crucially, signers alternated gaze between the stimulus sign and the target picture only when the sentential object could be predicted from the verb. Signers therefore engage in prediction by optimizing visual attention between divided linguistic and referential signals. These patterns suggest that prediction is a modality-independent process, and theoretical implications are discussed.

  8. Language Justice for Sign Language Peoples: The UN Convention on the Rights of Persons with Disabilities

    Science.gov (United States)

    Batterbury, Sarah C. E.

    2012-01-01

    Sign Language Peoples (SLPs) across the world have developed their own languages and visuo-gestural-tactile cultures embodying their collective sense of Deafhood (Ladd 2003). Despite this, most nation-states treat their respective SLPs as disabled individuals, favoring disability benefits, cochlear implants, and mainstream education over language…

  9. Sign Language Legislation as a Tool for Sustainability

    Science.gov (United States)

    Pabsch, Annika

    2017-01-01

    This article explores three models of sustainability (environmental, economic, and social) and identifies characteristics of a sustainable community necessary to sustain the Deaf community as a whole. It is argued that sign language legislation is a valuable tool for achieving sustainability for the generations to come.

  10. Sign language interpreting education : Reflections on interpersonal skills

    NARCIS (Netherlands)

    Annemiek Hammer; Dr. Beppie van den Bogaerde

    2017-01-01

    We present a description of our didactic approach to train undergraduate sign language interpreters on their interpersonal and reflective skills. Based predominantly on the theory of role-space by Llewellyn-Jones and Lee (2014), we argue that dialogue settings require a dynamic role of the

  11. Bi-channel Sensor Fusion for Automatic Sign Language Recognition

    DEFF Research Database (Denmark)

    Kim, Jonghwa; Wagner, Johannes; Rehm, Matthias

    2008-01-01

    In this paper, we investigate the mutual-complementary functionality of accelerometer (ACC) and electromyogram (EMG) sensors for recognizing seven word-level sign vocabularies in German sign language (GSL). Results are discussed for the single channels and for feature-level fusion of the bichannel sensor ...-independent condition, where subjective differences do not allow for high recognition rates. Finally we discuss a problem of feature-level fusion caused by high disparity between the accuracies of each single-channel classification....
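
Feature-level fusion of this kind generally amounts to concatenating the per-channel feature vectors into one vector before classification. The sketch below is a generic illustration (the paper's actual ACC and EMG feature sets are not reproduced, and the function names are hypothetical), with per-channel z-normalization as one common guard against a channel with a larger numeric range dominating the fused vector:

```python
import numpy as np

def znorm(features):
    """Scale a feature vector to zero mean and unit variance."""
    x = np.asarray(features, dtype=float)
    sd = x.std()
    return (x - x.mean()) / sd if sd > 0 else x - x.mean()

def feature_level_fusion(acc_features, emg_features):
    """Feature-level fusion: normalize each channel's feature vector,
    then concatenate them into a single input vector for the classifier."""
    return np.concatenate([znorm(acc_features), znorm(emg_features)])
```

Normalization does not by itself resolve the accuracy disparity between channels that the authors note; weighting the channels, or fusing at decision level instead, are the usual alternatives.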

  12. A study of syllable codas in South African Sign Language

    African Journals Online (AJOL)

    Kate H

    A South African Sign Language Dictionary for Families with Young Deaf Children (SLED 2006) was used with permission ... Figure 1: Syllable structure of a CVC syllable in the word “bed”. In spoken languages ... More often than not, there is a societal emphasis on 'fixing' a child's deafness and attempting to teach deaf children to ...

  13. V2S: Voice to Sign Language Translation System for Malaysian Deaf People

    Science.gov (United States)

    Mean Foong, Oi; Low, Tang Jung; La, Wai Wan

    The process of learning and understanding sign language may be cumbersome to some; therefore, this paper proposes a solution to this problem by providing a voice (English language) to sign language translation system using speech and image processing techniques. Speech processing, which includes speech recognition, is the study of recognizing the words being spoken regardless of who the speaker is. This project uses template-based recognition as its main approach, in which the V2S system first needs to be trained with speech patterns based on a generic spectral parameter set. These spectral parameter sets are then stored as templates in a database. The system performs the recognition process by matching the parameter set of the input speech against the stored templates and finally displays the sign language in video format. Empirical results show that the system has an 80.3% recognition rate.
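
The matching step of such a template-based recognizer can be sketched as a nearest-template search over stored parameter sets. This is an illustrative reconstruction with hypothetical names, not the V2S code; it uses plain Euclidean distance, whereas a real system would typically use dynamic time warping, since utterance lengths vary:

```python
import numpy as np

def recognize_word(input_params, template_db):
    """Template-based recognition: compare the spectral parameter set of
    the input speech against every stored template and return the label
    of the closest one."""
    best_label, best_dist = None, float("inf")
    x = np.asarray(input_params, dtype=float)
    for label, template in template_db.items():
        dist = float(np.linalg.norm(x - np.asarray(template, dtype=float)))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```

The recognized label would then index into a database of sign language video clips for display.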

  14. Making an Online Dictionary of New Zealand Sign Language ...

    African Journals Online (AJOL)

    ... is an example of a contemporary sign language dictionary that leverages the 21st ... informed development of this bilingual, bi-directional, multimedia dictionary. ... and dealing with sociolinguistic variation in the selection and performance of ...

  15. Achieving mutual understanding in Argentine Sign Language (LSA)

    NARCIS (Netherlands)

    Manrique Cordeje, M.E.

    2017-01-01

    How does (mis)understanding work in conversation? Problems of understanding occur all the time in our everyday social life. How does miscommunication happen, and how do we deal with it? This thesis reports on how sign language users manage to understand each other, based on a large Conversational

  16. Brief Report: A Mobile Application to Treat Prosodic Deficits in Autism Spectrum Disorder and Other Communication Impairments: A Pilot Study.

    Science.gov (United States)

    Simmons, Elizabeth Schoen; Paul, Rhea; Shic, Frederick

    2016-01-01

    This study examined the acceptability of a mobile application, SpeechPrompts, designed to treat prosodic disorders in children with ASD and other communication impairments. Ten speech-language pathologists (SLPs) in public schools and 40 of their students, aged 5-19 years with prosody deficits, participated. Students received treatment with the software over eight weeks. Pre- and post-treatment speech samples and student engagement data were collected. Feedback on the utility of the software was also obtained. SLPs implemented the software with their students in an authentic education setting. Student engagement ratings indicated that students' attention to the software was maintained during treatment. Although more testing is warranted, post-treatment prosody ratings suggest that SpeechPrompts has potential to be a useful tool in the treatment of prosodic disorders.

  17. Imitated prosodic fluency predicts reading comprehension ability in good and poor high school readers

    Directory of Open Access Journals (Sweden)

    Mara Breen

    2016-07-01

    Full Text Available Researchers have established a relationship between beginning readers’ silent comprehension ability and their prosodic fluency, such that readers who read aloud with appropriate prosody tend to have higher scores on silent reading comprehension assessments. The current study was designed to investigate this relationship in two groups of high school readers: Specifically Poor Comprehenders (SPCs), who have adequate word-level and phonological skills but poor reading comprehension ability, and a group of age- and decoding skill-matched controls. We compared the prosodic fluency of the two groups by determining how effectively they produced prosodic cues to syntactic and semantic structure in imitations of a model speaker’s production of syntactically and semantically varied sentences. Analyses of pitch and duration patterns revealed that speakers in both groups produced the expected prosodic patterns; however, controls provided stronger durational cues to syntactic structure. These results demonstrate that the relationship between prosodic fluency and reading comprehension continues past the stage of early reading instruction. Moreover, they suggest that prosodically fluent speakers may also generate more fluent implicit prosodic representations during silent reading, leading to more effective comprehension.

  18. Music Perception Influences Language Acquisition: Melodic and Rhythmic-Melodic Perception in Children with Specific Language Impairment.

    Science.gov (United States)

    Sallat, Stephan; Jentschke, Sebastian

    2015-01-01

    Language and music share many properties, with a particularly strong overlap for prosody. Prosodic cues are generally regarded as crucial for language acquisition. Previous research has indicated that children with SLI fail to make use of these cues. As processing of prosodic information involves similar skills to those required in music perception, we compared music perception skills (melodic and rhythmic-melodic perception and melody recognition) in a group of children with SLI (N = 29, five-year-olds) to two groups of controls, either of comparable age (N = 39, five-year-olds) or of age closer to the children with SLI in their language skills and about one year younger (N = 13, four-year-olds). Children with SLI performed in most tasks below their age level, closer matching the performance level of younger controls with similar language skills. These data strengthen the view of a strong relation between language acquisition and music processing. This might open a perspective for the possible use of musical material in early diagnosis of SLI and of music in SLI therapy.

  19. Music Perception Influences Language Acquisition: Melodic and Rhythmic-Melodic Perception in Children with Specific Language Impairment

    Science.gov (United States)

    Sallat, Stephan; Jentschke, Sebastian

    2015-01-01

    Language and music share many properties, with a particularly strong overlap for prosody. Prosodic cues are generally regarded as crucial for language acquisition. Previous research has indicated that children with SLI fail to make use of these cues. As processing of prosodic information involves similar skills to those required in music perception, we compared music perception skills (melodic and rhythmic-melodic perception and melody recognition) in a group of children with SLI (N = 29, five-year-olds) to two groups of controls, either of comparable age (N = 39, five-year-olds) or of age closer to the children with SLI in their language skills and about one year younger (N = 13, four-year-olds). Children with SLI performed in most tasks below their age level, closer matching the performance level of younger controls with similar language skills. These data strengthen the view of a strong relation between language acquisition and music processing. This might open a perspective for the possible use of musical material in early diagnosis of SLI and of music in SLI therapy. PMID:26508812

  20. Music Perception Influences Language Acquisition: Melodic and Rhythmic-Melodic Perception in Children with Specific Language Impairment

    Directory of Open Access Journals (Sweden)

    Stephan Sallat

    2015-01-01

    Full Text Available Language and music share many properties, with a particularly strong overlap for prosody. Prosodic cues are generally regarded as crucial for language acquisition. Previous research has indicated that children with SLI fail to make use of these cues. As processing of prosodic information involves similar skills to those required in music perception, we compared music perception skills (melodic and rhythmic-melodic perception and melody recognition) in a group of children with SLI (N=29, five-year-olds) to two groups of controls, either of comparable age (N=39, five-year-olds) or of age closer to the children with SLI in their language skills and about one year younger (N=13, four-year-olds). Children with SLI performed in most tasks below their age level, closer matching the performance level of younger controls with similar language skills. These data strengthen the view of a strong relation between language acquisition and music processing. This might open a perspective for the possible use of musical material in early diagnosis of SLI and of music in SLI therapy.

  1. Signed language and human action processing: evidence for functional constraints on the human mirror-neuron system.

    Science.gov (United States)

    Corina, David P; Knapp, Heather Patterson

    2008-12-01

    In the quest to further understand the neural underpinning of human communication, researchers have turned to studies of naturally occurring signed languages used in Deaf communities. The comparison of the commonalities and differences between spoken and signed languages provides an opportunity to determine core neural systems responsible for linguistic communication independent of the modality in which a language is expressed. The present article examines such studies, and in addition asks what we can learn about human languages by contrasting formal visual-gestural linguistic systems (signed languages) with more general human action perception. To understand visual language perception, it is important to distinguish the demands of general human motion processing from the highly task-dependent demands associated with extracting linguistic meaning from arbitrary, conventionalized gestures. This endeavor is particularly important because theorists have suggested close homologies between perception and production of actions and functions of human language and social communication. We review recent behavioral, functional imaging, and neuropsychological studies that explore dissociations between the processing of human actions and signed languages. These data suggest incomplete overlap between the mirror-neuron systems proposed to mediate human action and language.

  2. South African sign language assistive translation

    CSIR Research Space (South Africa)

    Olivrin, GJ

    2008-04-01

    Full Text Available ... the fact that the target structure is SASL, the home language of the Deaf user, already facilitates the communication. Ultimately the message will be delivered more naturally by a signing avatar [14]. We shall present further scenarios for future work. ... 6.1 Disambiguation: Disambiguation can be improved on two levels: firstly, by eliciting more or better information from the user through the AAC interface and secondly, by improving certain aspects of the MT system. We discuss both ...

  3. Social Interaction Affects Neural Outcomes of Sign Language Learning As a Foreign Language in Adults.

    Science.gov (United States)

    Yusa, Noriaki; Kim, Jungho; Koizumi, Masatoshi; Sugiura, Motoaki; Kawashima, Ryuta

    2017-01-01

    Children naturally acquire a language in social contexts where they interact with their caregivers. Indeed, research shows that social interaction facilitates lexical and phonological development at the early stages of child language acquisition. It is not clear, however, whether the relationship between social interaction and learning applies to adult second language acquisition of syntactic rules. Does learning second language syntactic rules through social interactions with a native speaker or without such interactions impact behavior and the brain? The current study aims to answer this question. Adult Japanese participants learned a new foreign language, Japanese Sign Language (JSL), either through a native deaf signer or via DVDs. Neural correlates of acquiring new linguistic knowledge were investigated using functional magnetic resonance imaging (fMRI). The participants in each group were indistinguishable in terms of their behavioral data after the instruction. The fMRI data, however, revealed significant differences in the neural activities between two groups. Significant activations in the left inferior frontal gyrus (IFG) were found for the participants who learned JSL through interactions with the native signer. In contrast, no cortical activation change in the left IFG was found for the group who experienced the same visual input for the same duration via the DVD presentation. Given that the left IFG is involved in the syntactic processing of language, spoken or signed, learning through social interactions resulted in an fMRI signature typical of native speakers: activation of the left IFG. Thus, broadly speaking, availability of communicative interaction is necessary for second language acquisition and this results in observed changes in the brain.

  4. Psychometric properties of a sign language version of the Mini International Neuropsychiatric Interview (MINI).

    Science.gov (United States)

    Øhre, Beate; Saltnes, Hege; von Tetzchner, Stephen; Falkum, Erik

    2014-05-22

    There is a need for psychiatric assessment instruments that enable reliable diagnoses in persons with hearing loss who have sign language as their primary language. The objective of this study was to assess the validity of the Norwegian Sign Language (NSL) version of the Mini International Neuropsychiatric Interview (MINI). The MINI was translated into NSL. Forty-one signing patients consecutively referred to two specialised psychiatric units were assessed with a diagnostic interview by clinical experts and with the MINI. Inter-rater reliability was assessed with Cohen's kappa and "observed agreement". There was 65% agreement between MINI diagnoses and clinical expert diagnoses. Kappa values indicated fair to moderate agreement, and observed agreement was above 76% for all diagnoses. The MINI diagnosed more co-morbid conditions than did the clinical expert interview (mean diagnoses: 1.9 versus 1.2). Kappa values indicated moderate to substantial agreement, and "observed agreement" was above 88%. The NSL version performs similarly to other MINI versions and demonstrates adequate reliability and validity as a diagnostic instrument for assessing mental disorders in persons who have sign language as their primary and preferred language.

  5. Psychometric properties of a sign language version of the Mini International Neuropsychiatric Interview (MINI)

    Science.gov (United States)

    2014-01-01

    Background There is a need for psychiatric assessment instruments that enable reliable diagnoses in persons with hearing loss who have sign language as their primary language. The objective of this study was to assess the validity of the Norwegian Sign Language (NSL) version of the Mini International Neuropsychiatric Interview (MINI). Methods The MINI was translated into NSL. Forty-one signing patients consecutively referred to two specialised psychiatric units were assessed with a diagnostic interview by clinical experts and with the MINI. Inter-rater reliability was assessed with Cohen’s kappa and “observed agreement”. Results There was 65% agreement between MINI diagnoses and clinical expert diagnoses. Kappa values indicated fair to moderate agreement, and observed agreement was above 76% for all diagnoses. The MINI diagnosed more co-morbid conditions than did the clinical expert interview (mean diagnoses: 1.9 versus 1.2). Kappa values indicated moderate to substantial agreement, and “observed agreement” was above 88%. Conclusion The NSL version performs similarly to other MINI versions and demonstrates adequate reliability and validity as a diagnostic instrument for assessing mental disorders in persons who have sign language as their primary and preferred language. PMID:24886297

  6. Accessibility perspectives on enabling South African sign language in the South African National Accessibility Portal

    CSIR Research Space (South Africa)

    Coetzee, L

    2009-04-01

    Full Text Available and services. One such mechanism is to embed animated Sign Language in Web pages. This paper analyses the effectiveness and appropriateness of this approach by embedding South African Sign Language in the South African National Accessibility Portal...

  7. Sign Language Recognition System using Neural Network for Digital Hardware Implementation

    International Nuclear Information System (INIS)

    Vargas, Lorena P; Barba, Leiner; Torres, C O; Mattos, L

    2011-01-01

    This work presents an image pattern recognition system that uses a neural network to identify sign language for deaf people. The system holds several stored images showing the specific symbols of this language, which are used to train a multilayer neural network with a back-propagation algorithm. Initially, the images are processed to adapt them and to improve the discrimination performance of the network; this preprocessing includes filtering, noise reduction and elimination algorithms, as well as edge detection. The system is evaluated using signs whose representation does not include movement.

  8. An Interpreter's Interpretation: Sign Language Interpreters' View of Musculoskeletal Disorders

    National Research Council Canada - National Science Library

    Johnson, William L

    2003-01-01

    Sign language interpreters are at increased risk for musculoskeletal disorders. This study used content analysis to obtain detailed information about these disorders from the interpreters' point of view...

  9. Impacts of Visual Sonority and Handshape Markedness on Second Language Learning of American Sign Language.

    Science.gov (United States)

    Williams, Joshua T; Newman, Sharlene D

    2016-04-01

    The roles of visual sonority and handshape markedness in sign language acquisition and production were investigated. In Experiment 1, learners were taught sign-nonobject correspondences that varied in sign movement sonority and handshape markedness. Results from a sign-picture matching task revealed that high sonority signs were more accurately matched, especially when the sign contained a marked handshape. In Experiment 2, learners produced these familiar signs in addition to novel signs, which differed based on sonority and markedness. Results from a key-release reaction time reproduction task showed that learners tended to produce high sonority signs much more quickly than low sonority signs, especially when the sign contained an unmarked handshape. This effect was only present in familiar signs. Sign production accuracy rates revealed that high sonority signs were more accurate than low sonority signs. Similarly, signs with unmarked handshapes were produced more accurately than those with marked handshapes. Together, results from Experiments 1 and 2 suggested that signs that contain high sonority movements are more easily processed, both perceptually and productively, and handshape markedness plays a differential role in perception and production.

  10. Visual Iconicity Across Sign Languages: Large-Scale Automated Video Analysis of Iconic Articulators and Locations

    Science.gov (United States)

    Östling, Robert; Börstell, Carl; Courtaux, Servane

    2018-01-01

    We use automatic processing of 120,000 sign videos in 31 different sign languages to show a cross-linguistic pattern for two types of iconic form–meaning relationships in the visual modality. First, we demonstrate that the degree of inherent plurality of concepts, based on individual ratings by non-signers, strongly correlates with the number of hands used in the sign forms encoding the same concepts across sign languages. Second, we show that certain concepts are iconically articulated around specific parts of the body, as predicted by the associational intuitions by non-signers. The implications of our results are both theoretical and methodological. With regard to theoretical implications, we corroborate previous research by demonstrating and quantifying, using a much larger material than previously available, the iconic nature of languages in the visual modality. As for the methodological implications, we show how automatic methods are, in fact, useful for performing large-scale analysis of sign language data, to a high level of accuracy, as indicated by our manual error analysis.

  11. Flusser and the "?" Sign: the musicality of poetry and the limits of language

    Directory of Open Access Journals (Sweden)

    Tiago Hermano Breunig

    2016-09-01

    Full Text Available Inquiring into the sign “?”, Flusser postulates that meaning is “one of the main problems of present-day thought.” Starting from this sign, Flusser distinguishes meaning from sense, which he defines as “what means”. The problem of meaning thus converges with the problem of thought itself, since, according to Flusser, all thought starts from a tautology, i.e., from what “means nothing”. If the understanding of meaning implies the musical aspects of language, as the sign “?” does, then, according to Flusser, music falls “into the same abyss of tautology” as it exceeds the limits of language. Flusser believes that the discussion of the limits of language contributes to the problem of the meaning of music, and confesses that, among all the existential signs, “?” is the one that best articulates the situation in which we find ourselves. It is in this sense, in this “Stimmung”, as Flusser says of the meaning of the sign “?”, that this paper aims to reflect, from the problem of meaning, on the relationship between music and poetry contemporary to Flusser.

  12. Prosodic Contrasts in Ironic Speech

    Science.gov (United States)

    Bryant, Gregory A.

    2010-01-01

    Prosodic features in spontaneous speech help disambiguate implied meaning not explicit in linguistic surface structure, but little research has examined how these signals manifest themselves in real conversations. Spontaneously produced verbal irony utterances generated between familiar speakers in conversational dyads were acoustically analyzed…

  13. Students who are deaf and hard of hearing and use sign language: considerations and strategies for developing spoken language and literacy skills.

    Science.gov (United States)

    Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane

    2012-11-01

    There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.

  14. Prosodic boundaries in writing: Evidence from a keystroke analysis

    Directory of Open Access Journals (Sweden)

    Susanne Fuchs

    2016-11-01

    Full Text Available The aim of the paper is to investigate duration between successive keystrokes during typing in order to examine whether prosodic boundaries are expressed in the process of writing. In particular, we are interested in interkey durations that occur next to punctuation marks (comma and full stop) while taking keystrokes between words as a reference, since these punctuation marks are often realized with minor or major prosodic boundaries during reading. A two-part experiment was conducted: first, participants’ keystrokes on a computer keyboard were recorded while writing an email to a close friend (in two conditions: with and without time pressure). Second, participants read the email they just wrote. Interkey durations were compared to pause durations at the same locations during read speech. Results provide evidence of significant differences between interkey durations between words, at commas and at full stops (from shortest to longest). These durations were positively correlated with silent pause durations during reading. A more detailed analysis of interkey durations revealed patterns that can be interpreted with respect to prosodic boundaries in speech production, namely as phrase-final and phrase-initial lengthening occurring at punctuation marks. This work provides initial evidence that prosodic boundaries are reflected in the writing process.

  15. Prosodic Boundaries in Writing: Evidence from a Keystroke Analysis.

    Science.gov (United States)

    Fuchs, Susanne; Krivokapić, Jelena

    2016-01-01

    The aim of the paper is to investigate duration between successive keystrokes during typing in order to examine whether prosodic boundaries are expressed in the process of writing. In particular, we are interested in interkey durations that occur next to punctuation marks (comma and full stop) while taking keystrokes between words as a reference, since these punctuation marks are often realized with minor or major prosodic boundaries during overt reading. A two-part experiment was conducted: first, participants' keystrokes on a computer keyboard were recorded while writing an email to a close friend (in two conditions: with and without time pressure). Second, participants read the email they just wrote. Interkey durations were compared to pause durations at the same locations during read speech. Results provide evidence of significant differences between interkey durations between words, at commas and at full stops (from shortest to longest). These durations were positively correlated with silent pause durations during overt reading. A more detailed analysis of interkey durations revealed patterns that can be interpreted with respect to prosodic boundaries in speech production, namely as phrase-final and phrase-initial lengthening occurring at punctuation marks. This work provides initial evidence that prosodic boundaries are reflected in the writing process.
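
    The core measurement reduces to bucketing inter-keystroke intervals by the key that precedes each gap (space as the word-boundary reference, comma, full stop). A minimal sketch with hypothetical timestamps, not the study's logging setup:

```python
from collections import defaultdict

# Hypothetical (key, timestamp-in-seconds) log for the typed string "I am, ok. "
log = [("I", 0.00), (" ", 0.21), ("a", 0.38), ("m", 0.50), (",", 0.92),
       (" ", 1.41), ("o", 1.60), ("k", 1.73), (".", 2.30), (" ", 3.05)]

def interkey_by_context(log):
    """Mean gap after a space (word boundary), a comma, and a full stop."""
    buckets = defaultdict(list)
    for (key, t0), (_, t1) in zip(log, log[1:]):
        label = {" ": "word", ",": "comma", ".": "fullstop"}.get(key)
        if label:
            buckets[label].append(t1 - t0)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

means = interkey_by_context(log)
print({k: round(v, 2) for k, v in means.items()})
```

    With these invented timestamps the mean gaps come out shortest between words and longest after full stops, which is the ordering the study reports.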

  16. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework

    Directory of Open Access Journals (Sweden)

    Shengjing Wei

    2016-04-01

    Full Text Available Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of five components. Especially, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different size of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user’s training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.

  17. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.

    Science.gov (United States)

    Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu

    2016-04-19

    Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of five components. Especially, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different size of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.
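
    The code-matching step of the second part can be sketched as a nearest-code lookup: an unknown gesture's predicted 5-tuple of component labels is assigned to the vocabulary entry whose code agrees on the most positions. The component labels and codes below are invented for illustration, not taken from the paper:

```python
# Each sign word is a 5-tuple of discrete component labels:
# (hand shape, axis, orientation, rotation, trajectory). Codes are hypothetical.
CODE_TABLE = {
    "thanks": ("flat", "x", "palm-up", "none", "arc"),
    "good":   ("fist", "y", "palm-in", "wrist", "line"),
    "friend": ("hook", "x", "palm-in", "none", "circle"),
}

def classify(predicted):
    """Return the vocabulary word whose code matches the predicted components best."""
    def matches(code):
        return sum(p == c for p, c in zip(predicted, code))
    return max(CODE_TABLE, key=lambda word: matches(CODE_TABLE[word]))

# One component classifier erred ("palm-up" -> "palm-in"); the nearest code still wins.
print(classify(("flat", "x", "palm-in", "none", "arc")))  # thanks
```

    Because classification is per-component, extending the vocabulary only means adding a new row to the code table, not retraining a monolithic whole-sign classifier; that is the sense in which the framework is "vocabulary-extensible".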

  18. Testing Comprehension Abilities in Users of British Sign Language Following Cva

    Science.gov (United States)

    Atkinson, J.; Marshall, J.; Woll, B.; Thacker, A.

    2005-01-01

    Recent imaging (e.g., MacSweeney et al., 2002) and lesion (Hickok, Love-Geffen, & Klima, 2002) studies suggest that sign language comprehension depends primarily on left hemisphere structures. However, this may not be true of all aspects of comprehension. For example, there is evidence that the processing of topographic space in sign may be…

  19. Predicting prosodic structure by morphosyntactic category: A case study of Blackfoot

    Directory of Open Access Journals (Sweden)

    Joseph W. Windsor

    2017-02-01

    Full Text Available This study examines phonetic correlates of three prosodic categories in Blackfoot: the syllable (σ), the prosodic word (ω), and the phonological phrase (φ). I provide evidence that the Blackfoot σ is recognizable by an obligatory process of vowel coalescence and the φ is recognizable by an obligatory process of right-edge aspiration. The ω can be distinguished from these other two prosodic constituents by an optional phonetic process which mimics intersyllabic vowel coalescence but does not apply obligatorily. The prosodic categories investigated in this study are then correlated with three morphosyntactic categories: morphological agreement suffixes, lexical morphemes (adjectives and nouns), and demonstratives. This correlation is used to argue that morphological and syntactic processes function differently at the interface with phonology (cf. Russell 1999), ultimately raising questions for “word-internal syntax” analyses of Blackfoot suffixation which are derived through cyclic head movement (Bliss 2013; Wiltschko 2014) using the Mirror Principle (Baker 1985). This article is part of the Special Collection: Prosody and constituent structure.

  20. ANALYZING THE SPEECH EXPRESSIVENESS USING PROSODIC DYNAMIC CONTROL

    Directory of Open Access Journals (Sweden)

    Valentin Eugen Ghisa

    2018-04-01

    Full Text Available At the level of verbal communication, prosodic support and emotional space are modelled as a nonlinear system described through parameters extracted from the spectral model of the vocal wave: the contour of the fundamental frequency, the duration and energy of voiced segments, the duration of non-acoustic segments and pauses, voice timbre, etc. Through a discretised treatment of the spectral model, the aim is to optimise the prosodic characteristics extracted from local variations of the fundamental frequency by a method of dynamic control.

  1. The nature of hemispheric specialization for linguistic and emotional prosodic perception: a meta-analysis of the lesion literature.

    Science.gov (United States)

    Witteman, Jurriaan; van Ijzendoorn, Marinus H; van de Velde, Daan; van Heuven, Vincent J J P; Schiller, Niels O

    2011-11-01

    It is unclear whether there is hemispheric specialization for prosodic perception and, if so, what the nature of this hemispheric asymmetry is. Using the lesion-approach, many studies have attempted to test whether there is hemispheric specialization for emotional and linguistic prosodic perception by examining the impact of left vs. right hemispheric damage on prosodic perception task performance. However, so far no consensus has been reached. In an attempt to find a consistent pattern of lateralization for prosodic perception, a meta-analysis was performed on 38 lesion studies (including 450 left hemisphere damaged patients, 534 right hemisphere damaged patients and 491 controls) of prosodic perception. It was found that both left and right hemispheric damage compromise emotional and linguistic prosodic perception task performance. Furthermore, right hemispheric damage degraded emotional prosodic perception more than left hemispheric damage (trimmed g=-0.37, 95% CI [-0.66; -0.09], N=620 patients). It is concluded that prosodic perception is under bihemispheric control with relative specialization of the right hemisphere for emotional prosodic perception.
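
    The effect size pooled above is a (trimmed) standardized mean difference. The uncorrected per-study statistic, Hedges' g, can be sketched as follows; the group means, SDs, and sizes are invented, and the trimming step of the meta-analysis is omitted:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardized mean difference between two groups."""
    # Pooled standard deviation across the two groups.
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample correction factor
    return j * d

# Hypothetical prosody-task scores: right- vs. left-hemisphere-damaged groups.
# A negative g means the right-hemisphere group scores lower, as in the meta-analysis.
print(round(hedges_g(14.2, 3.1, 20, 16.0, 2.9, 22), 2))
```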

  2. Use of prosody and information structure in high functioning adults with Autism in relation to language ability

    Directory of Open Access Journals (Sweden)

    Anne-Marie R DePape

    2012-03-01

    Full Text Available Abnormal prosody is a striking feature of the speech of those with Autism Spectrum Disorder (ASD), but previous reports suggest large variability among those with ASD. Here we show that part of this heterogeneity can be explained by level of language functioning. We recorded semi-spontaneous but controlled conversations in adults with and without Autism Spectrum Disorder and measured features related to pitch and duration to determine (1) general use of prosodic features, (2) prosodic use in relation to marking information structure, specifically, the emphasis of new information in a sentence (focus) as opposed to information already given in the conversational context (topic), and (3) the relation between prosodic use and level of language function. We found that, compared to typical adults, those with ASD with high language functioning generally used a larger pitch range than controls but did not mark information structure, whereas those with moderate language functioning generally used a smaller pitch range than controls but marked information structure appropriately to a large extent. Both impaired general prosodic use and impaired marking of information structure would be expected to seriously impact social communication and thereby lead to increased difficulty in personal domains, such as making and keeping friendships, and in professional domains, such as competing for employment opportunities.

  3. A novel prosodic-information synthesizer based on recurrent fuzzy neural network for the Chinese TTS system.

    Science.gov (United States)

    Lin, Chin-Teng; Wu, Rui-Cheng; Chang, Jyh-Yeong; Liang, Sheng-Fu

    2004-02-01

    In this paper, a new technique for Chinese text-to-speech (TTS) systems is proposed. Our major effort focuses on prosodic information generation. New methodologies for constructing fuzzy rules in a prosodic model simulating human pronunciation rules are developed. The proposed Recurrent Fuzzy Neural Network (RFNN) is a multilayer recurrent neural network (RNN) which integrates a Self-cOnstructing Neural Fuzzy Inference Network (SONFIN) into a recurrent connectionist structure. The RFNN can be functionally divided into two parts. The first part adopts the SONFIN as a prosodic model to explore the relationship between high-level linguistic features and prosodic information based on fuzzy inference rules. Compared to conventional neural networks, the SONFIN can always construct itself with an economical network size at high learning speed. The second part employs a five-layer network to generate all prosodic parameters by directly using the prosodic fuzzy rules inferred from the first part as well as other important features of syllables. The TTS system combined with the proposed method can handle not only sandhi rules but also the other prosodic phenomena found in traditional TTS systems. Moreover, the proposed scheme can even discover some new rules about prosodic phrase structure. The performance of the proposed RFNN-based prosodic model is verified by embedding it into a Chinese TTS system with a Chinese monosyllable database based on the time-domain pitch-synchronous overlap-add (TD-PSOLA) method. Our experimental results show that the proposed RFNN can generate proper prosodic parameters including pitch means, pitch shapes, maximum energy levels, syllable durations, and pause durations. Some synthetic sounds are available online for demonstration.

  4. The "handedness" of language: Directional symmetry breaking of sign usage in words.

    Science.gov (United States)

    Ashraf, Md Izhar; Sinha, Sitabhra

    2018-01-01

    Language, which allows complex ideas to be communicated through symbolic sequences, is a characteristic feature of our species and manifested in a multitude of forms. Using large written corpora for many different languages and scripts, we show that the occurrence probability distributions of signs at the left and right ends of words have a distinct heterogeneous nature. Characterizing this asymmetry using quantitative inequality measures, viz. information entropy and the Gini index, we show that the beginning of a word is less restrictive in sign usage than the end. This property is not simply attributable to the use of common affixes as it is seen even when only word roots are considered. We use the existence of this asymmetry to infer the direction of writing in undeciphered inscriptions that agrees with the archaeological evidence. Unlike traditional investigations of phonotactic constraints which focus on language-specific patterns, our study reveals a property valid across languages and writing systems. As both language and writing are unique aspects of our species, this universal signature may reflect an innate feature of the human cognitive phenomenon.
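
    Both inequality measures named above are standard and can be sketched on a toy lexicon; a higher-entropy, lower-Gini distribution at word beginnings than at word ends corresponds to the reported asymmetry. The word list is toy data, not the corpora used in the study:

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of a sign-frequency distribution."""
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def gini(counts):
    """Gini index: 0 = all signs equally used, approaching 1 = usage concentrated."""
    xs = sorted(counts.values())
    n, total = len(xs), sum(xs)
    # Standard formula over the sorted values.
    return sum((2 * i - n - 1) * x for i, x in enumerate(xs, 1)) / (n * total)

words = ["sign", "speech", "prosody", "hand", "head", "nod", "word", "sound"]
first = Counter(w[0] for w in words)   # signs at the left (word-initial) end
last = Counter(w[-1] for w in words)   # signs at the right (word-final) end

# Less restricted sign usage at word beginnings shows up as higher entropy
# and lower inequality there than at word ends (toy illustration only).
print(entropy(first) > entropy(last), gini(first) < gini(last))  # True True
```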

  5. The translation of biblical texts into South African Sign Language ...

    African Journals Online (AJOL)

    The translation of biblical texts into South African Sign Language. ... Native signers were used as translators with the assistance of hearing specialists in the fields of religion and translation studies. ...

  6. Depictions and minifiction: a reflection on translation of micro-story as didactics of sign language interpreters training in colombia.

    Directory of Open Access Journals (Sweden)

    Alex Giovanny Barreto

    2015-10-01

    Full Text Available The article presents reflections on a methodological translation-practice approach to sign language interpreter education focused on communicative competence. The experience of implementing this translation-practice approach began in several workshops of the Association of Translators and Interpreters of Sign Language of Colombia (ANISCOL) and has now been formalized in the bachelor of education degree project in signed languages, developed within Research Group UMBRAL of the National Open and Distance University of Colombia (UNAD). The didactic proposal focuses on the model of efforts (Gile), specifically the production and listening efforts. A critique of translation competence is also presented. Minifiction is a literary genre with multiple semiotic and philosophical translation possibilities. These literary texts have elements with great potential for rendering the visual, gestural and spatial depictions of Colombian Sign Language, which is valuable for interpreter training and education. Through a sign language translation of El Dinosaurio, we conclude with an outline of, and reflections on, the pedagogical and didactic potential of minifiction and depictions in the design of training activities for sign language interpreters.

  7. Constructing an Online Test Framework, Using the Example of a Sign Language Receptive Skills Test

    Science.gov (United States)

    Haug, Tobias; Herman, Rosalind; Woll, Bencie

    2015-01-01

    This paper presents the features of an online test framework for a receptive skills test that has been adapted, based on a British template, into different sign languages. The online test includes features that meet the needs of the different sign language versions. Features such as usability of the test, automatic saving of scores, and score…

  8. The Relationship between American Sign Language Vocabulary and the Development of Language-Based Reasoning Skills in Deaf Children

    Science.gov (United States)

    Henner, Jonathan

    2016-01-01

    The language-based analogical reasoning abilities of Deaf children are a controversial topic. Researchers lack agreement about whether Deaf children possess the ability to reason using language-based analogies, or whether this ability is limited by a lack of access to vocabulary, both written and signed. This dissertation examines factors that…

  9. Contrast-Marking Prosodic Emphasis in Williams Syndrome: Results of Detailed Phonetic Analysis

    Science.gov (United States)

    Ito, Kiwako; Martens, Marilee A.

    2017-01-01

    Background: Past reports on the speech production of individuals with Williams syndrome (WS) suggest that their prosody is anomalous and may lead to challenges in spoken communication. While existing prosodic assessments confirm that individuals with WS fail to use prosodic emphasis to express contrast, those reports typically lack detailed…

  10. On Selected Phonological Patterns in Saudi Arabian Sign Language

    Science.gov (United States)

    Tomita, Nozomi; Kozak, Viola

    2012-01-01

    This paper focuses on two selected phonological patterns that appear unique to Saudi Arabian Sign Language (SASL). For both sections of this paper, the overall methodology is the same as that discussed in Stephen and Mathur (this volume), with some additional modifications tailored to the specific studies discussed here, which will be expanded…

  11. Practical low-cost visual communication using binary images for deaf sign language.

    Science.gov (United States)

    Manoranjan, M D; Robinson, J A

    2000-03-01

    Deaf sign language transmitted by video requires a temporal resolution of 8 to 10 frames/s for effective communication. Conventional videoconferencing applications, when operated over low bandwidth telephone lines, provide very low temporal resolution of pictures, of the order of less than a frame per second, resulting in jerky movement of objects. This paper presents a practical solution for sign language communication, offering adequate temporal resolution of images using moving binary sketches or cartoons, implemented on standard personal computer hardware with low-cost cameras and communicating over telephone lines. To extract cartoon points an efficient feature extraction algorithm adaptive to the global statistics of the image is proposed. To improve the subjective quality of the binary images, irreversible preprocessing techniques, such as isolated point removal and predictive filtering, are used. A simple, efficient and fast recursive temporal prefiltering scheme, using histograms of successive frames, reduces the additive and multiplicative noise from low-cost cameras. An efficient three-dimensional (3-D) compression scheme codes the binary sketches. Subjective tests performed on the system confirm that it can be used for sign language communication over telephone lines.

  12. Using the Hands to Represent Objects in Space: Gesture as a Substrate for Signed Language Acquisition.

    Science.gov (United States)

    Janke, Vikki; Marshall, Chloë R

    2017-01-01

    An ongoing issue of interest in second language research concerns what transfers from a speaker's first language to their second. For learners of a sign language, gesture is a potential substrate for transfer. Our study provides a novel test of gestural production by eliciting silent gesture from novices in a controlled environment. We focus on spatial relationships, which in sign languages are represented in a very iconic way using the hands, and which one might therefore predict to be easy for adult learners to acquire. However, a previous study by Marshall and Morgan (2015) revealed that this was only partly the case: in a task that required them to express the relative locations of objects, hearing adult learners of British Sign Language (BSL) could represent objects' locations and orientations correctly, but had difficulty selecting the correct handshapes to represent the objects themselves. If hearing adults are indeed drawing upon their gestural resources when learning sign languages, then their difficulties may have stemmed from their having in manual gesture only a limited repertoire of handshapes to draw upon, or, alternatively, from having too broad a repertoire. If the first hypothesis is correct, the challenge for learners is to extend their handshape repertoire, but if the second is correct, the challenge is instead to narrow down to the handshapes appropriate for that particular sign language. 30 sign-naïve hearing adults were tested on Marshall and Morgan's task. All used some handshapes that were different from those used by native BSL signers and learners, and the set of handshapes used by the group as a whole was larger than that employed by native signers and learners. Our findings suggest that a key challenge when learning to express locative relations might be reducing from a very large set of gestural resources, rather than supplementing a restricted one, in order to converge on the conventionalized classifier system that forms part of the

  13. Translation and interpretation of sign language in the postgraduate context: problematizing positions

    Directory of Open Access Journals (Sweden)

    Luiz Daniel Rodrigues Dinarte

    2015-12-01

    Full Text Available This article aims, drawing on sign language translation research while engaging with contemporary theories built around the concept of "deconstruction" (DERRIDA, 2004; DERRIDA & ROUDINESCO, 2004; ARROJO, 1993), to reflect on aspects of the definition of the role and duties of translators and interpreters. We take deconstruction to be not a method to be applied to linguistic and social phenomena, but a set of political strategies emerging from a speech community that translates texts and, in taking on the translational task, performs an act of reading that inserts sign language into academic linguistic multiplicity.

  14. Using the "Common European Framework of Reference for Languages" to Teach Sign Language to Parents of Deaf Children

    Science.gov (United States)

    Snoddon, Kristin

    2015-01-01

    No formal Canadian curriculum presently exists for teaching American Sign Language (ASL) as a second language to parents of deaf and hard of hearing children. However, this group of ASL learners is in need of more comprehensive, research-based support, given the rapid expansion in Canada of universal neonatal hearing screening and the…

  15. Gesture and Signing in Support of Expressive Language Development

    Science.gov (United States)

    Baker-Ramos, Leslie K.

    2017-01-01

    The purpose of this teacher inquiry is to explore the effects of signing and gesturing on the expressive language development of non-verbal children. The first phase of my inquiry begins with the observations of several non-verbal students with various etiologies in three different educational settings. The focus of these observations is to…

  16. From community training to university training (and vice-versa: new sign language translator and interpreter profile in the brazilian context

    Directory of Open Access Journals (Sweden)

    Vanessa Regina de Oliveira Martins

    2015-12-01

    Full Text Available This paper aims to discuss the new profile of sign language translators/interpreters that has been taking shape in Brazil since the implementation of policies stimulating the training of these professionals. We qualitatively analyzed answers to a semi-open questionnaire given by undergraduate students of a BA course in Brazilian Sign Language/Portuguese translation and interpretation. Our results show that those who seek out this area are no longer, as used to be the case, people who have some relationship with the deaf community and/or need certification for their existing activity as sign language interpreters. Instead, the students' choice of the course was determined by their score in a unified admission system (SISU). This contrasts with the profile of sign language interpreters in the 1980s, 1990s and 2000s. As Brazilian Sign Language has become more popular, people seeking a university degree have started to see sign language translation/interpreting as an interesting career option. We therefore discuss the need to provide students who cannot sign with the pedagogical means to learn the language, which will promote the accessibility of Brazilian deaf communities.

  18. Assessing Health Literacy in Deaf American Sign Language Users

    Science.gov (United States)

    McKee, Michael M.; Paasche-Orlow, Michael; Winters, Paul C.; Fiscella, Kevin; Zazove, Philip; Sen, Ananda; Pearson, Thomas

    2015-01-01

    Communication and language barriers isolate Deaf American Sign Language (ASL) users from mass media, healthcare messages, and health care communication, which, when coupled with social marginalization, places them at high risk for inadequate health literacy. Our objectives were to translate, adapt, and develop an accessible health literacy instrument in ASL and to assess the prevalence and correlates of inadequate health literacy among Deaf ASL users and hearing English speakers using a cross-sectional design. A total of 405 participants (166 Deaf and 239 hearing) were enrolled in the study. The Newest Vital Sign (NVS) was adapted, translated, and developed into an ASL version (ASL-NVS). Forty-eight percent of Deaf participants had inadequate health literacy, and Deaf individuals were 6.9 times more likely than hearing participants to have inadequate health literacy. The new ASL-NVS, available on a self-administered computer platform, demonstrated good correlation with reading literacy. The prevalence of Deaf ASL users with inadequate health literacy is substantial, warranting further interventions and research. PMID:26513036

  19. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    Directory of Open Access Journals (Sweden)

    M. Al-Rousan

    2005-08-01

    Full Text Available Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of the Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers: they do not require iterative training, and they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns; the reduction in the rate of misclassified patterns was very significant. In particular, we achieved a 36% reduction in misclassifications on the training data and 57% on the test data.
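    The abstract's key claim, that polynomial classifiers need no iterative training, follows from the fact that polynomially expanded features feed a linear model solvable in closed form via the normal equations. A minimal stdlib-only sketch of this idea (this is not the authors' ArSL system; the second-order expansion, the small ridge term `lam`, and the toy data in the usage below are illustrative assumptions):

    ```python
    import itertools

    def poly_expand(x):
        """Second-order polynomial expansion of a feature vector:
        bias, linear terms, and all pairwise products (incl. squares)."""
        feats = [1.0] + list(x)
        for i, j in itertools.combinations_with_replacement(range(len(x)), 2):
            feats.append(x[i] * x[j])
        return feats

    def solve(A, b):
        """Solve A w = b by Gaussian elimination with partial pivoting."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
        w = [0.0] * n
        for r in range(n - 1, -1, -1):
            w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
        return w

    def train_class(X, targets, lam=1e-6):
        """Closed-form least-squares weights for one class (no iteration).
        Normal equations: (P^T P + lam*I) w = P^T t."""
        P = [poly_expand(x) for x in X]
        d = len(P[0])
        A = [[sum(Pr[i] * Pr[j] for Pr in P) + (lam if i == j else 0.0)
              for j in range(d)] for i in range(d)]
        b = [sum(Pr[i] * t for Pr, t in zip(P, targets)) for i in range(d)]
        return solve(A, b)

    def classify(models, x):
        """Pick the class whose discriminant scores highest."""
        p = poly_expand(x)
        return max(models, key=lambda c: sum(w * f for w, f in zip(models[c], p)))
    ```

    One-vs-all training then amounts to calling `train_class` once per class with 0/1 targets; there is no epoch loop, which is the scalability property the abstract highlights.
    
    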

  20. Review of Data Preprocessing Methods for Sign Language Recognition Systems based on Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Zorins Aleksejs

    2016-12-01

    Full Text Available The article presents an introductory analysis of a research topic relevant to the Latvian deaf community: the development of a Latvian Sign Language recognition system. More specifically, the paper discusses data preprocessing methods and presents several approaches, with a focus on systems based on artificial neural networks, which are among the most successful solutions for the sign language recognition task.

  1. Development of Geography and Geology Terminology in British Sign Language

    Science.gov (United States)

    Meara, Rhian; Cameron, Audrey; Quinn, Gary; O'Neill, Rachel

    2016-04-01

    The BSL Glossary Project, run by the Scottish Sensory Centre at the University of Edinburgh, focuses on developing scientific terminology in British Sign Language for use in the primary, secondary and tertiary education of deaf and hard of hearing students within the UK. Thus far, the project has developed 850 new signs and definitions covering Chemistry, Physics, Biology, Astronomy and Mathematics, and has translated examinations into BSL for students across Scotland. The current phase of the project has focused on developing terminology for Geography and Geology. More than 189 new signs have been developed in these subjects, covering topics including weather, rivers, maps, natural hazards and Geographical Information Systems. The signs were developed by a focus group with expertise in Geography and Geology, Chemistry, Ecology, BSL Linguistics and Deaf Education, all of whom are deaf, fluent BSL users.

  2. A Barking Dog That Never Bites? The British Sign Language (Scotland) Bill

    Science.gov (United States)

    De Meulder, Maartje

    2015-01-01

    This article describes and analyses the pathway to the British Sign Language (Scotland) Bill and the strategies used to reach it. Data collection has been done by means of interviews with key players, analysis of official documents, and participant observation. The article discusses the bill in relation to the Gaelic Language (Scotland) Act 2005…

  3. The Link between Form and Meaning in British Sign Language: Effects of Iconicity for Phonological Decisions

    Science.gov (United States)

    Thompson, Robin L.; Vinson, David P.; Vigliocco, Gabriella

    2010-01-01

    Signed languages exploit the visual/gestural modality to create iconic expression across a wide range of basic conceptual structures in which the phonetic resources of the language are built up into an analogue of a mental image (Taub, 2001). Previously, we demonstrated a processing advantage when iconic properties of signs were made salient in a…

  4. Assessment of Sign Language Development: The Case of Deaf Children in the Netherlands

    NARCIS (Netherlands)

    Hermans, D.; Knoors, H.E.T.; Verhoeven, L.T.W.

    2009-01-01

    In this article, we will describe the development of an assessment instrument for Sign Language of the Netherlands (SLN) for deaf children in bilingual education programs. The assessment instrument consists of nine computerized tests in which the receptive and expressive language skills of deaf

  5. The Readiness of Typical Student in Communication By Using Sign Language in Hearing Impairment Integration Programe

    Directory of Open Access Journals (Sweden)

    Mohd Hanafi Mohd Yasin

    2018-05-01

    Full Text Available This research concerns the readiness of typical students to communicate using sign language in a Hearing Impairment Integration Programme. Sixty typical students from a Special Education Integration Programme secondary school in Malacca were chosen as respondents. The instrument was a questionnaire consisting of four parts: student demography (Part A), student knowledge (Part B), student ability to communicate (Part C) and student interest in communicating (Part D). The questionnaire was adapted from Asnul Dahar and Rabiah's study 'The Readiness of Students in Following Vocational Subjects at Jerantut District, Rural Secondary School in Pahang'. Descriptive analysis was used to analyze the data, and mean scores were used to determine the level of respondents' perception of each question. The findings showed a positive attitude among typical students towards communicating in sign language: they were interested in communicating in sign language and were willing to attend a sign language class if one were offered.

  6. Comparing visualization techniques for learning second language prosody

    DEFF Research Database (Denmark)

    Niebuhr, Oliver; Alm, Maria Helena; Schümchen, Nathalie

    2017-01-01

    We tested the usability of prosody visualization techniques for second language (L2) learners. Eighteen Danish learners realized target sentences in German based on different visualization techniques. The sentence realizations were annotated by means of the phonological Kiel Intonation Model and then analyzed in terms of (a) prosodic-pattern consistency and (b) correctness of the prosodic patterns. In addition, the participants rated the usability of the visualization techniques. The results from the phonological analysis converged with the usability ratings in showing that iconic techniques, in particular the stylized “hat pattern” visualization, performed better than symbolic techniques, and that marking prosodic information beyond intonation can be more confusing than instructive. In discussing our findings, we also provide a description of the new Danish-German learner corpus we created: DANGER…

  7. The “handedness” of language: Directional symmetry breaking of sign usage in words

    Science.gov (United States)

    2018-01-01

    Language, which allows complex ideas to be communicated through symbolic sequences, is a characteristic feature of our species and manifested in a multitude of forms. Using large written corpora for many different languages and scripts, we show that the occurrence probability distributions of signs at the left and right ends of words have a distinct heterogeneous nature. Characterizing this asymmetry using quantitative inequality measures, viz. information entropy and the Gini index, we show that the beginning of a word is less restrictive in sign usage than the end. This property is not simply attributable to the use of common affixes as it is seen even when only word roots are considered. We use the existence of this asymmetry to infer the direction of writing in undeciphered inscriptions that agrees with the archaeological evidence. Unlike traditional investigations of phonotactic constraints which focus on language-specific patterns, our study reveals a property valid across languages and writing systems. As both language and writing are unique aspects of our species, this universal signature may reflect an innate feature of the human cognitive phenomenon. PMID:29342176
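    The paper's two asymmetry measures, information entropy and the Gini index, can be reproduced on any word list by looking at sign frequencies at word edges. A stdlib-only sketch (the character-as-sign simplification and the usage data in the test are illustrative assumptions; the study worked over large corpora in many scripts):

    ```python
    import math
    from collections import Counter

    def sign_distribution(words, position):
        """Relative frequency of each sign at a word edge.
        position=0 -> first sign, position=-1 -> last sign."""
        counts = Counter(w[position] for w in words if w)
        total = sum(counts.values())
        return {s: c / total for s, c in counts.items()}

    def entropy(dist):
        """Shannon entropy in bits; higher = less restricted sign usage."""
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    def gini(dist):
        """Gini index of the probability masses; 0 = perfectly even usage,
        values near 1 = usage concentrated on few signs."""
        p = sorted(dist.values())
        n = len(p)
        return sum((2 * (i + 1) - n - 1) * pi for i, pi in enumerate(p)) / (n * sum(p))
    ```

    Comparing `entropy(sign_distribution(words, 0))` against `entropy(sign_distribution(words, -1))` over a real corpus is the kind of left-edge/right-edge comparison the paper reports.
    
    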

  8. How Deaf American Sign Language/English Bilingual Children Become Proficient Readers: An Emic Perspective

    Science.gov (United States)

    Mounty, Judith L.; Pucci, Concetta T.; Harmon, Kristen C.

    2014-01-01

    A primary tenet underlying American Sign Language/English bilingual education for deaf students is that early access to a visual language, developed in conjunction with language planning principles, provides a foundation for literacy in English. The goal of this study is to obtain an emic perspective on bilingual deaf readers transitioning from…

  9. Variation in handshape and orientation in British Sign Language: The case of the ‘1’ hand configuration

    Science.gov (United States)

    Fenlon, Jordan; Schembri, Adam; Rentelis, Ramas; Cormier, Kearsy

    2013-01-01

    This paper investigates phonological variation in British Sign Language (BSL) signs produced with a ‘1’ hand configuration in citation form. Multivariate analyses of 2084 tokens reveal that handshape variation in these signs is constrained by linguistic factors (e.g., the preceding and following phonological environment, grammatical category, indexicality, lexical frequency). The only significant social factor was region. For the subset of signs where orientation was also investigated, only grammatical function was important (the surrounding phonological environment and social factors were not significant). The implications for an understanding of pointing signs in signed languages are discussed. PMID:23805018

  10. Immediate integration of prosodic information from speech and visual information from pictures in the absence of focused attention: a mismatch negativity study.

    Science.gov (United States)

    Li, X; Yang, Y; Ren, G

    2009-06-16

    Language is often perceived together with visual information. Recent experimental evidence indicates that, during spoken language comprehension, the brain can immediately integrate visual information with semantic or syntactic information from speech. Here we used the mismatch negativity to further investigate whether prosodic information from speech can be immediately integrated into a visual scene context, and especially the time course and automaticity of this integration process. Sixteen Chinese native speakers participated in the study. The materials comprised pairs of Chinese spoken sentences and pictures. In the audiovisual situation, relative to the concomitant pictures, the spoken sentence was appropriately accented in the standard stimuli but inappropriately accented in the two kinds of deviant stimuli. In the purely auditory situation, the spoken sentences were presented without pictures. The deviants evoked mismatch responses in both the audiovisual and purely auditory situations; the mismatch negativity in the purely auditory situation peaked at the same time as, but was weaker than, that evoked by the same deviant speech sounds in the audiovisual situation. This pattern of results suggests immediate integration of prosodic information from speech and visual information from pictures in the absence of focused attention.

  11. Response bias reveals enhanced attention to inferior visual field in signers of American Sign Language.

    Science.gov (United States)

    Dye, Matthew W G; Seymour, Jenessa L; Hauser, Peter C

    2016-04-01

    Deafness results in cross-modal plasticity, whereby visual functions are altered as a consequence of a lack of hearing. Here, we present a reanalysis of data originally reported by Dye et al. (PLoS One 4(5):e5640, 2009) with the aim of testing additional hypotheses concerning the spatial redistribution of visual attention due to deafness and the use of a visuogestural language (American Sign Language). By looking at the spatial distribution of errors made by deaf and hearing participants performing a visuospatial selective attention task, we sought to determine whether there was evidence for (1) a shift in the hemispheric lateralization of visual selective function as a result of deafness, and (2) a shift toward attending to the inferior visual field in users of a signed language. While no evidence was found for or against a shift in lateralization of visual selective attention as a result of deafness, a shift in the allocation of attention from the superior toward the inferior visual field was inferred in native signers of American Sign Language, possibly reflecting an adaptation to the perceptual demands imposed by a visuogestural language.

  12. The Neural Correlates of Highly Iconic Structures and Topographic Discourse in French Sign Language as Observed in Six Hearing Native Signers

    Science.gov (United States)

    Courtin, C.; Herve, P. -Y.; Petit, L.; Zago, L.; Vigneau, M.; Beaucousin, V.; Jobard, G.; Mazoyer, B.; Mellet, E.; Tzourio-Mazoyer, N.

    2010-01-01

    "Highly iconic" structures in Sign Language enable a narrator to act, switch characters, describe objects, or report actions in four-dimensions. This group of linguistic structures has no real spoken-language equivalent. Topographical descriptions are also achieved in a sign-language specific manner via the use of signing-space and…

  13. Mexican sign language recognition using normalized moments and artificial neural networks

    Science.gov (United States)

    Solís-V., J.-Francisco; Toxqui-Quitl, Carina; Martínez-Martínez, David; H.-G., Margarita

    2014-09-01

    This work presents a framework designed for Mexican Sign Language (MSL) recognition. A data set of 24 static signs from the MSL was recorded, with five versions of each sign, captured with a digital camera under incoherent light conditions. Digital image processing was used to segment the hand gestures; a uniform background was chosen to avoid the need for gloved hands or special markers. Feature extraction was performed by calculating normalized geometric moments of the gray-scaled signs, and an artificial neural network then performed the recognition. Evaluated with 10-fold cross-validation in Weka, the best result achieved a 95.83% recognition rate.
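    The feature set named in the abstract, normalized geometric moments, is conventionally defined as central moments scaled by powers of the zeroth moment, making the features invariant to the hand's position and size in the frame. A stdlib-only sketch of that computation (the 5×5 toy images in the test are illustrative; the authors' segmentation pipeline and network are not reproduced here):

    ```python
    def raw_moment(img, p, q):
        """Raw geometric moment m_pq of a grayscale image (list of rows)."""
        return sum((x ** p) * (y ** q) * v
                   for y, row in enumerate(img) for x, v in enumerate(row))

    def normalized_moments(img, order=3):
        """Central moments normalized for translation and scale:
        eta_pq = mu_pq / m00 ** (1 + (p + q) / 2), for 2 <= p+q <= order."""
        m00 = raw_moment(img, 0, 0)
        cx = raw_moment(img, 1, 0) / m00  # centroid x
        cy = raw_moment(img, 0, 1) / m00  # centroid y

        def mu(p, q):  # central moment about the centroid
            return sum(((x - cx) ** p) * ((y - cy) ** q) * v
                       for y, row in enumerate(img) for x, v in enumerate(row))

        return {(p, q): mu(p, q) / m00 ** (1 + (p + q) / 2)
                for p in range(order + 1) for q in range(order + 1)
                if 2 <= p + q <= order}
    ```

    The resulting dictionary of eta values (seven features up to order 3) is the kind of fixed-length vector one would feed to a classifier such as the ANN described above.
    
    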

  14. Cerebral organization of oral and signed language responses: case study evidence from amytal and cortical stimulation studies.

    Science.gov (United States)

    Mateer, C A; Rapport, R L; Kettrick, C

    1984-01-01

    A normally hearing left-handed patient familiar with American Sign Language (ASL) was assessed under sodium amytal conditions and with left cortical stimulation in both oral speech and signed English. Lateralization was mixed but complementary in each language mode: the right hemisphere perfusion severely disrupted motoric aspects of both types of language expression, the left hemisphere perfusion specifically disrupted features of grammatical and semantic usage in each mode of expression. Both semantic and syntactic aspects of oral and signed responses were altered during left posterior temporal-parietal stimulation. Findings are discussed in terms of the neurological organization of ASL and linguistic organization in cases of early left hemisphere damage.

  15. The Effect of Sign Language Rehearsal on Deaf Subjects' Immediate and Delayed Recall of English Word Lists.

    Science.gov (United States)

    Bonvillian, John D.; And Others

    1987-01-01

    The relationship between sign language rehearsal and written free recall was examined by having deaf college students rehearse the sign language equivalents of printed English words. Studies of both immediate and delayed memory suggested that word recall increased as a function of total rehearsal frequency and frequency of appearance in rehearsal…

  16. Areas Recruited during Action Understanding Are Not Modulated by Auditory or Sign Language Experience.

    Science.gov (United States)

    Fang, Yuxing; Chen, Quanjing; Lingnau, Angelika; Han, Zaizhu; Bi, Yanchao

    2016-01-01

    The observation of other people's actions recruits a network of areas including the inferior frontal gyrus (IFG), the inferior parietal lobule (IPL), and posterior middle temporal gyrus (pMTG). These regions have been shown to be activated through both visual and auditory inputs. Intriguingly, previous studies found no engagement of IFG and IPL for deaf participants during non-linguistic action observation, leading to the proposal that auditory experience or sign language usage might shape the functionality of these areas. To understand which variables induce plastic changes in areas recruited during the processing of other people's actions, we examined the effects of tasks (action understanding and passive viewing) and effectors (arm actions vs. leg actions), as well as sign language experience in a group of 12 congenitally deaf signers and 13 hearing participants. In Experiment 1, we found a stronger activation during an action recognition task in comparison to a low-level visual control task in IFG, IPL and pMTG in both deaf signers and hearing individuals, but no effect of auditory or sign language experience. In Experiment 2, we replicated the results of the first experiment using a passive viewing task. Together, our results provide robust evidence demonstrating that the response obtained in IFG, IPL, and pMTG during action recognition and passive viewing is not affected by auditory or sign language experience, adding further support for the supra-modal nature of these regions.

  17. An Investigation into the Relationship of Foreign Language Learning Motivation and Sign Language Use among Deaf and Hard of Hearing Hungarians

    Science.gov (United States)

    Kontra, Edit H.; Csizer, Kata

    2013-01-01

    The aim of this study is to point out the relationship between foreign language learning motivation and sign language use among hearing impaired Hungarians. In the article we concentrate on two main issues: first, to what extent hearing impaired people are motivated to learn foreign languages in a European context; second, to what extent sign…

  18. Emergency Department utilization among Deaf American Sign Language users.

    Science.gov (United States)

    McKee, Michael M; Winters, Paul C; Sen, Ananda; Zazove, Philip; Fiscella, Kevin

    2015-10-01

    Deaf American Sign Language (ASL) users comprise a linguistic minority population with poor health care access due to communication barriers and low health literacy. Potentially, these health care barriers could increase Emergency Department (ED) use. To compare ED use between deaf and non-deaf patients. A retrospective cohort from medical records. The sample was derived from 400 randomly selected charts (200 deaf ASL users and 200 hearing English speakers) from an outpatient primary care health center with a high volume of deaf patients. Abstracted data included patient demographics, insurance, health behavior, and ED use in the past 36 months. Deaf patients were more likely to be never smokers and be insured through Medicaid. In an adjusted analysis, deaf individuals were significantly more likely to use the ED (odds ratio [OR], 1.97; 95% confidence interval [CI], 1.11-3.51) over the prior 36 months. Deaf American Sign Language users appear to be at greater odds for elevated ED utilization when compared to the general hearing population. Efforts to further understand the drivers for increased ED utilization among deaf ASL users are much needed. Copyright © 2015 Elsevier Inc. All rights reserved.
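    For readers unfamiliar with the statistic reported above (OR 1.97; 95% CI 1.11–3.51), an unadjusted odds ratio with a Wald confidence interval can be sketched as follows. Note this is a generic illustration, not the paper's analysis: their figure came from an adjusted regression model, and the counts in the test below are invented:

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and Wald 95% CI from a 2x2 table:
        a = exposed with outcome,   b = exposed without outcome,
        c = unexposed with outcome, d = unexposed without outcome."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, lo, hi
    ```

    A CI whose lower bound stays above 1, as in the paper, is what licenses the conclusion that deaf patients had significantly greater odds of ED use.
    
    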

  19. Neural organization of linguistic short-term memory is sensory modality-dependent: evidence from signed and spoken language.

    Science.gov (United States)

    Pa, Judy; Wilson, Stephen M; Pickell, Herbert; Bellugi, Ursula; Hickok, Gregory

    2008-12-01

    Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory-motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.

  20. Lexical Variation and Change in British Sign Language

    Science.gov (United States)

    Stamp, Rose; Schembri, Adam; Fenlon, Jordan; Rentelis, Ramas; Woll, Bencie; Cormier, Kearsy

    2014-01-01

    This paper presents results from a corpus-based study investigating lexical variation in BSL. An earlier study investigating variation in BSL numeral signs found that younger signers were using a decreasing variety of regionally distinct variants, suggesting that levelling may be taking place. Here, we report findings from a larger investigation looking at regional lexical variants for colours, countries, numbers and UK placenames elicited as part of the BSL Corpus Project. Age, school location and language background were significant predictors of lexical variation, with younger signers using a more levelled variety. This change appears to be happening faster in particular sub-groups of the deaf community (e.g., signers from hearing families). Also, we find that for the names of some UK cities, signers from outside the region use a different sign than those who live in the region. PMID:24759673

  1. Prosodic Transfer in Learner and Contact Varieties: Speech Rhythm and Intonation of Buenos Aires Spanish and L2 Castilian Spanish Produced by Italian Native Speakers

    Science.gov (United States)

    Gabriel, Christoph; Kireva, Elena

    2014-01-01

    A remarkable example of Spanish-Italian contact is the Spanish variety spoken in Buenos Aires (Porteño), which is said to be prosodically "Italianized" due to migration-induced contact. The change in Porteño prosody has been interpreted as a result of transfer from the first language (L1) that occurred when Italian immigrants learned…

  2. Face Recognition Is Shaped by the Use of Sign Language

    Science.gov (United States)

    Stoll, Chloé; Palluel-Germain, Richard; Caldara, Roberto; Lao, Junpeng; Dye, Matthew W. G.; Aptel, Florent; Pascalis, Olivier

    2018-01-01

    Previous research has suggested that early deaf signers differ in face processing. Which aspects of face processing are changed and the role that sign language may have played in that change are however unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing…

  3. Attention-getting skills of deaf children using American Sign Language in a preschool classroom.

    Science.gov (United States)

    Lieberman, Amy M

    2015-07-01

    Visual attention is a necessary prerequisite to successful communication in sign language. The current study investigated the development of attention-getting skills in deaf native-signing children during interactions with peers and teachers. Seven deaf children (aged 21-39 months) and five adults were videotaped during classroom activities for approximately 30 hr. Interactions were analyzed in depth to determine how children obtained and maintained attention. Contrary to previous reports, children were found to possess a high level of communicative competence from an early age. Analysis of peer interactions revealed that children used a range of behaviors to obtain attention with peers, including taps, waves, objects, and signs. Initiations were successful approximately 65% of the time. Children followed up failed initiation attempts by repeating the initiation, using a new initiation, or terminating the interaction. Older children engaged in longer and more complex interactions than younger children. Children's early exposure to and proficiency in American Sign Language is proposed as a likely mechanism that facilitated their communicative competence.

  4. Cognitive status, lexical learning and memory in deaf adults using sign language

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2013-05-01

    Full Text Available Background and Aim: Learning and memory are two high-level cognitive abilities in humans that are influenced by hearing loss. In our study, the mini-mental state examination (MMSE) and the Rey auditory-verbal learning test (RAVLT) were administered to study cognitive status, lexical learning and memory in deaf adults who use sign language. Methods: This cross-sectional comparative study was conducted on an available sample of 30 congenitally deaf adults who use Persian Sign Language and 46 normal-hearing adults, aged 19 to 27 years, of both sexes, with a minimum education level of a diploma. After the mini-mental state examination, the Rey auditory-verbal learning test was run on computers to evaluate lexical learning and memory with visual presentation. Results: Mean scores on the mini-mental state examination and the Rey auditory-verbal learning test were significantly lower in congenitally deaf adults than in normal individuals on all measures (p=0.018) except two parts of the Rey test. A significant correlation between the results of the two tests was found only in the normal group (p=0.043). Gender had no effect on test results. Conclusion: Cognitive status and lexical memory and learning are weaker in congenitally deaf individuals than in normal subjects. It seems that using sign language as the main means of communication in deaf people leads to poorer lexical memory and learning.

  5. Content validation: clarity/relevance, reliability and internal consistency of enunciative signs of language acquisition.

    Science.gov (United States)

    Crestani, Anelise Henrich; Moraes, Anaelena Bragança de; Souza, Ana Paula Ramos de

    2017-08-10

    To analyze the results of the validation of enunciative signs of language acquisition built for children aged 3 to 12 months. The signs were built based on mechanisms of language acquisition in an enunciative perspective and on clinical experience with language disorders. The signs were submitted to judgments of clarity and relevance by a sample of six experts with doctorates in linguistics and knowledge of psycholinguistics and the language clinic. For the reliability validation, two judges/evaluators applied the instruments to videos of 20% of the total sample of mother-infant dyads, using the inter-evaluator method. The internal consistency method was applied to the total sample, which consisted of 94 mother-infant dyads for the contents of Phase 1 (3 to 6 months) and 61 mother-infant dyads for the contents of Phase 2 (7 to 12 months). The data were collected through the analysis of mother-infant interaction based on filming of the dyads and application of the parameters to be validated according to the child's age. Data were organized in a spreadsheet and then transferred to statistical software for analysis. The judgments of clarity/relevance indicated no modifications to be made to the instruments. The reliability test showed almost perfect agreement between judges (0.8 ≤ kappa ≤ 1.0); only item 2 of Phase 1 showed substantial agreement (0.6 ≤ kappa ≤ 0.79). The internal consistency for Phase 1 was alpha = 0.84, and for Phase 2, alpha = 0.74. This demonstrates the reliability of the instruments. The results suggest adequacy of the content validity of the instruments created for both age groups, demonstrating the relevance of the content of enunciative signs of language acquisition.
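The two statistics this record reports, inter-rater agreement (Cohen's kappa) and internal consistency (Cronbach's alpha), reduce to a few lines of arithmetic. A minimal sketch with illustrative data, not the study's own:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

def cronbachs_alpha(items):
    """items: one list of scores per instrument item, same respondents."""
    k = len(items)
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

print(cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0]))  # 0.5
```

Kappa in the 0.8-1.0 band reported above is conventionally read as "almost perfect" agreement, 0.6-0.79 as "substantial".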

  6. A Proposed Pedagogical Mobile Application for Learning Sign Language

    Directory of Open Access Journals (Sweden)

    Samir Abou El-Seoud

    2013-01-01

    Full Text Available A handheld device, such as a cellular phone or a PDA, can be used in acquiring Sign Language (SL). The developed system uses graphic applications. The user uses the graphical system to view and to acquire knowledge about sign grammar and syntax based on the local vernacular particular to the country. This paper explores and exploits the possibility of developing a mobile system to help the deaf and other people communicate and learn using handheld devices. The pedagogical assessment of the prototype application, which uses a recognition-based interface (e.g., images and videos), gave evidence that the mobile application is memorable and learnable. Additionally, considering primacy and recency effects in the interface design will improve memorability and learnability.

  7. Linguo-cognitive and pragmatic features of the prosodic organization of English parables

    Directory of Open Access Journals (Sweden)

    Musiienko Yulia

    2017-06-01

    Full Text Available This paper highlights the results of an investigation of the cognitive and pragmatic features of prosodic loading in English parables. The prosodic organization of directive intentions in the parable was analysed by systematizing and classifying its pragmatic, structural-semantic, and functional-pragmatic specifics in order to examine the process of producing and understanding the texts of English parables.

  8. The Subsystem of Numerals in Catalan Sign Language: Description and Examples from a Psycholinguistic Study

    Science.gov (United States)

    Fuentes, Mariana; Tolchinsky, Liliana

    2004-01-01

    Linguistic descriptions of sign languages are important to the recognition of their linguistic status. These languages are an essential part of the cultural heritage of the communities that create and use them and vital in the education of deaf children. They are also the reference point in language acquisition studies. Ours is exploratory…

  9. The Influence of Prosodic Input in the Second Language Classroom: Does It Stimulate Child Acquisition of Word Order and Function Words?

    Science.gov (United States)

    Campfield, Dorota E.; Murphy, Victoria A.

    2017-01-01

    This paper reports on an intervention study with young Polish beginners (mean age: 8 years, 3 months) learning English at school. It seeks to identify whether exposure to rhythmic input improves knowledge of word order and function words. The "prosodic bootstrapping hypothesis", relevant in developmental psycholinguistics, provided the…

  10. Functional changes in people with different hearing status and experiences of using Chinese sign language: an fMRI study.

    Science.gov (United States)

    Li, Qiang; Xia, Shuang; Zhao, Fei; Qi, Ji

    2014-01-01

    The purpose of this study was to assess functional changes in the cerebral cortex in people with different sign language experience and hearing status whilst observing and imitating Chinese Sign Language (CSL) using functional magnetic resonance imaging (fMRI). 50 participants took part in the study, and were divided into four groups according to their hearing status and experience of using sign language: prelingual deafness signer group (PDS), normal hearing non-signer group (HnS), native signer group with normal hearing (HNS), and acquired signer group with normal hearing (HLS). fMRI images were scanned from all subjects when they performed block-designed tasks that involved observing and imitating sign language stimuli. Nine activation areas were found in response to undertaking either observation or imitation CSL tasks and three activated areas were found only when undertaking the imitation task. Of those, the PDS group had significantly greater activation areas in terms of the cluster size of the activated voxels in the bilateral superior parietal lobule, cuneate lobe and lingual gyrus in response to undertaking either the observation or the imitation CSL task than the HnS, HNS and HLS groups. The PDS group also showed significantly greater activation in the bilateral inferior frontal gyrus which was also found in the HNS or the HLS groups but not in the HnS group. This indicates that deaf signers have better sign language proficiency, because they engage more actively with the phonetic and semantic elements. In addition, the activations of the bilateral superior temporal gyrus and inferior parietal lobule were only found in the PDS group and HNS group, and not in the other two groups, which indicates that the area for sign language processing appears to be sensitive to the age of language acquisition. After reading this article, readers will be able to: discuss the relationship between sign language and its neural mechanisms. Copyright © 2014 Elsevier Inc

  11. Evidence for Website Claims about the Benefits of Teaching Sign Language to Infants and Toddlers with Normal Hearing

    Science.gov (United States)

    Nelson, Lauri H.; White, Karl R.; Grewe, Jennifer

    2012-01-01

    The development of proficient communication skills in infants and toddlers is an important component to child development. A popular trend gaining national media attention is teaching sign language to babies with normal hearing whose parents also have normal hearing. Thirty-three websites were identified that advocate sign language for hearing…

  12. Eye gaze during comprehension of American Sign Language by native and beginning signers.

    Science.gov (United States)

    Emmorey, Karen; Thompson, Robin; Colvin, Rachael

    2009-01-01

    An eye-tracking experiment investigated where deaf native signers (N = 9) and hearing beginning signers (N = 10) look while comprehending a short narrative and a spatial description in American Sign Language produced live by a fluent signer. Both groups fixated primarily on the signer's face (more than 80% of the time) but differed with respect to fixation location. Beginning signers fixated on or near the signer's mouth, perhaps to better perceive English mouthing, whereas native signers tended to fixate on or near the eyes. Beginning signers shifted gaze away from the signer's face more frequently than native signers, but the pattern of gaze shifts was similar for both groups. When a shift in gaze occurred, the sign narrator was almost always looking at his or her hands and was most often producing a classifier construction. We conclude that joint visual attention and attention to mouthing (for beginning signers), rather than linguistic complexity or processing load, affect gaze fixation patterns during sign language comprehension.

  13. Intonational Division of a Speech Flow in the Kazakh Language

    Science.gov (United States)

    Bazarbayeva, Zeynep M.; Zhalalova, Akshay M.; Ormakhanova, Yenlik N.; Ospangaziyeva, Nazgul B.; Karbozova, Bulbul D.

    2016-01-01

    The purpose of this research is to analyze the speech intonation of the French, Kazakh, English and Russian languages. The study considers the functions of the intonation components (melodics, duration, and intensity) in poetry and spoken language. It is found that a set of prosodic means is used in order to convey the intonational specifics of sounding…

  14. Prosodic Abilities in Spanish and English Children with Williams Syndrome: A Cross-Linguistic Study

    Science.gov (United States)

    Martinez-Castilla, Pastora; Stojanovik, Vesna; Setter, Jane; Sotillo, Maria

    2012-01-01

    The aim of this study was to compare the prosodic profiles of English- and Spanish-speaking children with Williams syndrome (WS), examining cross-linguistic differences. Two groups of children with WS, English and Spanish, of similar chronological and nonverbal mental age, were compared on performance in expressive and receptive prosodic tasks…

  15. A tour in sign language

    CERN Document Server

    François Briard

    2016-01-01

    In early May, CERN welcomed a group of deaf children for a tour of Microcosm and a Fun with Physics demonstration.   On 4 May, around ten children from the Centre pour enfants sourds de Montbrillant (Montbrillant Centre for Deaf Children), a public school funded by the Office médico-pédagogique du canton de Genève, took a guided tour of the Microcosm exhibition and were treated to a Fun with Physics demonstration. The tour guides’ explanations were interpreted into sign language in real time by a professional interpreter who accompanied the children, and the pace and content were adapted to maximise the interaction with the children. This visit demonstrates CERN’s commitment to remaining as widely accessible as possible. To this end, most of CERN’s visit sites offer reduced-mobility access. In the past few months, CERN has also welcomed children suffering from xeroderma pigmentosum (a genetic disorder causing extreme sensiti...

  16. The Impact of Input Quality on Early Sign Development in Native and Non-Native Language Learners

    Science.gov (United States)

    Lu, Jenny; Jones, Anna; Morgan, Gary

    2016-01-01

    There is debate about how input variation influences child language. Most deaf children are exposed to a sign language from their non-fluent hearing parents and experience a delay in exposure to accessible language. A small number of children receive language input from their deaf parents who are fluent signers. Thus it is possible to document the…

  17. A qualitative exploration of trial-related terminology in a study involving Deaf British Sign Language users.

    Science.gov (United States)

    Young, Alys; Oram, Rosemary; Dodds, Claire; Nassimi-Green, Catherine; Belk, Rachel; Rogers, Katherine; Davies, Linda; Lovell, Karina

    2016-04-27

    Internationally, few clinical trials have involved Deaf people who use a signed language, and none have involved BSL (British Sign Language) users. Appropriate terminology in BSL for key concepts in clinical trials that are relevant to recruitment and participant information materials, to support informed consent, does not exist. Barriers to conceptual understanding of trial participation and sources of misunderstanding relevant to the Deaf community are undocumented. A qualitative, community-participatory exploration of trial terminology, including conceptual understanding of 'randomisation', 'trial', 'informed choice' and 'consent', was facilitated in BSL with 19 participants in five focus groups. Data were video-recorded and analysed in the source language (BSL) using a phenomenological approach. Six necessary conditions for developing trial information to support comprehension were identified. These included: developing appropriate expressions and terminology from a community basis, rather than testing out translations previously derived from a different language; paying attention to language-specific features which support the best means of expression (in the case of BSL: expectations of specificity, verb directionality, handshape); bilingual influences on comprehension; deliberate orientation of information to avoid misunderstanding, not just to promote accessibility; sensitivity to barriers to discussion about the intelligibility of information that are cultural and social in origin, rather than linguistic; and the importance of using contemporary language-in-use, rather than jargon-free or plain language, to support meaningful understanding. The study reinforces the ethical imperative to ensure trial participants who are Deaf are provided with optimum resources to understand the implications of participation and to make an informed choice. Results are relevant to the development of trial information in other signed languages as well as in spoken/written languages when

  18. The neural substrates of impaired prosodic detection in schizophrenia and its sensorial antecedents.

    Science.gov (United States)

    Leitman, David I; Hoptman, Matthew J; Foxe, John J; Saccente, Erica; Wylie, Glenn R; Nierenberg, Jay; Jalbrzikowski, Maria; Lim, Kelvin O; Javitt, Daniel C

    2007-03-01

    Individuals with schizophrenia show severe deficits in their ability to decode emotions based upon vocal inflection (affective prosody). This study examined neural substrates of prosodic dysfunction in schizophrenia with voxelwise analysis of diffusion tensor magnetic resonance imaging (MRI). Affective prosodic performance was assessed in 19 patients with schizophrenia and 19 comparison subjects with the Voice Emotion Identification Task (VOICEID), along with measures of basic pitch perception and executive processing (Wisconsin Card Sorting Test). Diffusion tensor MRI fractional anisotropy values were used for voxelwise correlation analyses. In a follow-up experiment, performance on a nonaffective prosodic perception task was assessed in an additional cohort of 24 patients and 17 comparison subjects. Patients showed significant deficits in VOICEID and Distorted Tunes Task performance. Impaired VOICEID performance correlated significantly with lower fractional anisotropy values within primary and secondary auditory pathways, orbitofrontal cortex, corpus callosum, and peri-amygdala white matter. Impaired Distorted Tunes Task performance also correlated with lower fractional anisotropy in auditory and amygdalar pathways but not prefrontal cortex. Wisconsin Card Sorting Test performance in schizophrenia correlated primarily with prefrontal fractional anisotropy. In the follow-up study, significant deficits were observed as well in nonaffective prosodic performance, along with significant intercorrelations among sensory, affective prosodic, and nonaffective measures. Schizophrenia is associated with both structural and functional disturbances at the level of primary auditory cortex. Such deficits contribute significantly to patients' inability to decode both emotional and semantic aspects of speech, highlighting the importance of sensorial abnormalities in social communicatory dysfunction in schizophrenia.

  19. Real-Time Processing of ASL Signs: Delayed First Language Acquisition Affects Organization of the Mental Lexicon

    Science.gov (United States)

    Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.

    2015-01-01

    Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age-onset of first language acquisition and the quality and quantity of…

  20. Phonetics of intonation in South African Bantu languages

    CSIR Research Space (South Africa)

    Zerbian, S

    2008-01-01

    Full Text Available Much is already known about the prosodic systems of the indigenous South African languages from descriptions and analyses in the existing literature. All of the existing work has been carried out in the field of African studies or formal linguistics...

  1. Educational Resources and Implementation of a Greek Sign Language Synthesis Architecture

    Science.gov (United States)

    Karpouzis, K.; Caridakis, G.; Fotinea, S.-E.; Efthimiou, E.

    2007-01-01

    In this paper, we present how creation and dynamic synthesis of linguistic resources of Greek Sign Language (GSL) may serve to support development and provide content to an educational multitask platform for the teaching of GSL in early elementary school classes. The presented system utilizes standard virtual character (VC) animation technologies…

  2. Multimodal semantic quantity representations: further evidence from Korean Sign Language

    Directory of Open Access Journals (Sweden)

    Frank eDomahs

    2012-01-01

    Full Text Available Korean deaf signers performed a number comparison task on pairs of Arabic digits. In their RT profiles, the expected magnitude effect was systematically modified by properties of number signs in Korean Sign Language in a culture-specific way (not observed in hearing and deaf Germans or hearing Chinese). We conclude that finger-based quantity representations are automatically activated even in simple tasks with symbolic input, although this may be irrelevant and even detrimental for task performance. These finger-based numerical representations are accessed in addition to another, more basic quantity system, which is evidenced by the magnitude effect. In sum, these results are inconsistent with models assuming only one single amodal representation of numerical quantity.

  3. Thinking through ethics : the processes of ethical decision-making by novice and expert American sign language interpreters

    OpenAIRE

    Mendoza, Mary Elizabeth

    2010-01-01

    In the course of their work, sign language interpreters are faced with ethical dilemmas that require prioritizing competing moral beliefs and views on professional practice. Although several decision-making models exist, little research has been done on how sign language interpreters learn to identify and make ethical decisions. Through surveys and interviews on ethical decision-making, this study investigates how expert and novice interpreters discuss their ethical decision-making proces...

  4. Vocabulary Instruction through Books Read in American Sign Language for English-Language Learners with Hearing Loss

    Science.gov (United States)

    Cannon, Joanna E.; Fredrick, Laura D.; Easterbrooks, Susan R.

    2010-01-01

    Reading to children improves vocabulary acquisition through incidental exposure, and it is a best practice for parents and teachers of children who can hear. Children who are deaf or hard of hearing are at risk for not learning vocabulary as such. This article describes a procedure for using books read on DVD in American Sign Language with…

  5. Teaching sign language in gaucho schools for deaf people: a study of curricula

    Directory of Open Access Journals (Sweden)

    Carolina Hessel Silveira

    2013-06-01

    Full Text Available The paper, which provides partial results of a master's dissertation, seeks to contribute to the Sign Language curriculum in deaf schooling. We start from the importance of sign languages for deaf people's development and from the fact that a large part of the deaf have hearing parents, which emphasises the significance of teaching LIBRAS (Brazilian Sign Language) in schools for the deaf. We should also consider the importance of this study for building deaf identities and strengthening deaf culture. The theoretical basis is drawn from the so-called Deaf Studies and from experts in curriculum theory. The main objective of this study has been to analyse the LIBRAS curricula at work in schools for the deaf in Rio Grande do Sul, Brazil. The curriculum analysis has shown a degree of diversity: in some curricula, content from one year is repeated in the next with no articulation. In others, one finds concern for issues of deaf identity and culture, but some include contents that are related not to LIBRAS or deaf culture but to discipline for the deaf in general. By presenting positive and negative aspects, the analysis may help in discussions about difficulties, progress and problems in LIBRAS teacher education for deaf students.

  6. First language acquisition differs from second language acquisition in prelingually deaf signers: evidence from sensitivity to grammaticality judgement in British Sign Language.

    Science.gov (United States)

    Cormier, Kearsy; Schembri, Adam; Vinson, David; Orfanidou, Eleni

    2012-07-01

    Age of acquisition (AoA) effects have been used to support the notion of a critical period for first language acquisition. In this study, we examine AoA effects in deaf British Sign Language (BSL) users via a grammaticality judgment task. When English reading performance and nonverbal IQ are factored out, results show that accuracy of grammaticality judgement decreases as AoA increases, until around age 8, thus showing the unique effect of AoA on grammatical judgement in early learners. No such effects were found in those who acquired BSL after age 8. These late learners appear to have first language proficiency in English instead, which may have been used to scaffold learning of BSL as a second language later in life. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. A Case of Specific Language Impairment in a Deaf Signer of American Sign Language.

    Science.gov (United States)

    Quinto-Pozos, David; Singleton, Jenny L; Hauser, Peter C

    2017-04-01

    This article describes the case of a deaf native signer of American Sign Language (ASL) with a specific language impairment (SLI). School records documented normal cognitive development but atypical language development. Data include school records; interviews with the child, his mother, and school professionals; ASL and English evaluations; and a comprehensive neuropsychological and psychoeducational evaluation, and they span an approximate period of 7.5 years (11;10-19;6) including scores from school records (11;10-16;5) and a 3.5-year period (15;10-19;6) during which we collected linguistic and neuropsychological data. Results revealed that this student has average intelligence, intact visual perceptual skills, visuospatial skills, and motor skills but demonstrates challenges with some memory and sequential processing tasks. Scores from ASL testing signaled language impairment and marked difficulty with fingerspelling. The student also had significant deficits in English vocabulary, spelling, reading comprehension, reading fluency, and writing. Accepted SLI diagnostic criteria exclude deaf individuals from an SLI diagnosis, but the authors propose modified criteria in this work. The results of this study have practical implications for professionals including school psychologists, speech language pathologists, and ASL specialists. The results also support the theoretical argument that SLI can be evident regardless of the modality in which it is communicated. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Expansion of Prosodic Abilities at the Transition From Babble to Words: A Comparison Between Children With Cochlear Implants and Normally Hearing Children.

    Science.gov (United States)

    Pettinato, Michèle; Clerck, Ilke De; Verhoeven, Jo; Gillis, Steven

    This longitudinal study examined the effect of emerging vocabulary production on the ability to produce the phonetic cues to prosodic prominence in babbled and lexical disyllables of infants with cochlear implants (CI) and normally hearing (NH) infants. Current research on typical language acquisition emphasizes the importance of vocabulary development for phonological and phonetic acquisition. Children with CI experience significant difficulties with the perception and production of prosody, and the role of possible top-down effects is, therefore, particularly relevant for this population. Isolated disyllabic babble and first words were identified and segmented in longitudinal audio-video recordings and transcriptions for nine NH infants and nine infants with CI interacting with their parents. Monthly recordings were included from the onset of babbling until children had reached a cumulative vocabulary of 200 words. Three cues to prosodic prominence, fundamental frequency (f0), intensity, and duration, were measured in the vocalic portions of stand-alone disyllables. To represent the degree of prosodic differentiation between two syllables in an utterance, the raw values for intensity and duration were transformed to ratios, and for f0, a measure of the perceptual distance in semitones was derived. The degree of prosodic differentiation for disyllabic babble and words for each cue was compared between groups. In addition, group and individual tendencies on the types of stress patterns for babble and words were also examined. The CI group had overall smaller pitch and intensity distances than the NH group. For the NH group, words had greater pitch and intensity distances than babbled disyllables. Especially for pitch distance, this was accompanied by a shift toward a more clearly expressed stress pattern that reflected the influence of the ambient language. For the CI group, the same expansion in words did not take place for pitch. For intensity, the CI group gave
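The prominence measures this record describes follow standard formulas: a semitone distance for f0 and simple syllable-to-syllable ratios for intensity and duration. A minimal sketch (values illustrative, not the study's data):

```python
import math

def semitone_distance(f0_first, f0_second):
    """Perceptual pitch distance between two syllable nuclei, in semitones."""
    return abs(12 * math.log2(f0_first / f0_second))

def prominence_ratio(stressed, unstressed):
    """Degree of differentiation for intensity or duration (dimensionless)."""
    return stressed / unstressed

print(semitone_distance(440.0, 220.0))  # 12.0 (one octave)
print(prominence_ratio(300, 200))       # 1.5 (e.g. durations in ms)
```

The semitone transform makes pitch distances comparable across infants with different baseline f0, which is why it is preferred over raw Hz differences here.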

  9. Electrophysiology of Sentence Processing in Aphasia: Prosodic Cues and Thematic Fit

    Directory of Open Access Journals (Sweden)

    Shannon M. Sheppard

    2015-05-01

    Methods: Twenty-four healthy college-age control participants (YNCs) and ten adults with Broca's aphasia (PWA) participated in this study. Each sentence was presented aurally to the participants over headphones. ERP Data Recording & Analysis: ERPs were recorded from 32 electrode sites across the scalp according to the 10-20 system. ERPs were averaged (100 ms prestimulus baseline) from artifact-free trials, time-locked to critical words (i.e., the point of disambiguation "pleased" in the prosodic comparison, and the NP "the song"/"the beer" in the semantic comparison). Mean amplitudes were calculated in two windows: 300-500 ms for the N400 effects and 500-1000 ms for the P600 effects. Results: The data from our YNCs revealed a biphasic N400-P600 complex in the prosody comparison (Figure 1A). We also found an N400 effect immediately at the NP in the incongruent relative to congruent thematic fit comparison. For the prosodic comparison in the PWA group, a delayed N400 effect was found one word downstream relative to the YNC data (Figure 1B). Additionally, an N400 effect was observed in the thematic fit comparison. Discussion: The results suggest that PWA possess a delayed sensitivity to prosodic cues, which may then affect their ability to recover from misanalysis after an incorrect parse. The results also indicate that PWA are sensitive to thematic fit information and have the capacity to process this information similarly to YNCs.
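The windowed mean-amplitude measure used in ERP work like this (300-500 ms for the N400, 500-1000 ms for the P600) is just the average of the epoch samples whose latencies fall inside the window. A toy single-channel sketch with a fabricated epoch, not the study's data or pipeline:

```python
import numpy as np

def mean_amplitude(epoch, times, window):
    """Mean ERP amplitude within a latency window (seconds)."""
    lo, hi = window
    mask = (times >= lo) & (times < hi)
    return float(epoch[mask].mean())

fs = 500                             # assumed sampling rate, Hz
times = (np.arange(550) - 50) / fs   # -100 ms baseline to ~1000 ms
# fabricated epoch: a negative-going "N400-like" deflection at 300-500 ms
epoch = np.where((times >= 0.3) & (times < 0.5), -4.0, 0.0)

n400 = mean_amplitude(epoch, times, (0.3, 0.5))   # N400 window
p600 = mean_amplitude(epoch, times, (0.5, 1.0))   # P600 window
print(n400, p600)
```

In practice these means are computed per condition and electrode site after baseline correction, then compared statistically across conditions.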

  10. Recognition of sign language with an inertial sensor-based data glove.

    Science.gov (United States)

    Kim, Kyung-Won; Lee, Mi-So; Soon, Bo-Ram; Ryu, Mun-Ho; Kim, Je-Nam

    2015-01-01

    Communication between people with normal hearing and people with hearing impairment is difficult. Recently, a variety of studies on sign language recognition have presented benefits from the development of information technology. This study presents a sign language recognition system using a data glove composed of 3-axis accelerometers, magnetometers, and gyroscopes. The data obtained by the glove are transmitted to a host application (implemented as a Windows program on a PC). The data are then converted into angle data, and the angle information is displayed in the host application and verified by rendering three-dimensional models on the display. An experiment was performed with five subjects, three female and two male, in which a performance set comprising the numbers one to nine was repeated five times. The system achieves a 99.26% movement detection rate and approximately a 98% recognition rate for each finger's state. The proposed system is expected to be more portable and useful when this algorithm is applied to smartphone applications for use in situations such as emergencies.
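The abstract does not spell out how the glove's raw sensor data are converted into angles. One common approach for an accelerometer-plus-gyroscope unit is a complementary filter; the sketch below illustrates that general technique under stated assumptions (the filter constant, sampling interval, and single-axis setup are illustrative, not taken from the paper):

```python
import math

def complementary_filter(accel, gyro, dt, alpha=0.98):
    """Fuse gravity-based tilt (accelerometer) with integrated angular
    rate (gyroscope) into one pitch angle per sample, in degrees.
    accel: (ax, ay, az) tuples in g; gyro: angular rates in deg/s."""
    angle, history = 0.0, []
    for (ax, ay, az), rate in zip(accel, gyro):
        tilt = math.degrees(math.atan2(ax, az))  # angle implied by gravity
        # gyro term tracks fast motion; accel term corrects slow drift
        angle = alpha * (angle + rate * dt) + (1 - alpha) * tilt
        history.append(angle)
    return history
```

A magnetometer, also present in this glove, would be fused the same way to stabilize heading, where gravity gives no information.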

  11. Constructed Action, the Clause and the Nature of Syntax in Finnish Sign Language

    Directory of Open Access Journals (Sweden)

    Jantunen Tommi

    2017-01-01

    Full Text Available This paper investigates the interplay of constructed action and the clause in Finnish Sign Language (FinSL. Constructed action is a form of gestural enactment in which the signers use their hands, face and other parts of the body to represent the actions, thoughts or feelings of someone they are referring to in the discourse. With the help of frequencies calculated from corpus data, this article shows firstly that when FinSL signers are narrating a story, there are differences in how they use constructed action. Then the paper argues that there are differences also in the prototypical structure, linkage type and non-manual activity of clauses, depending on the presence or non-presence of constructed action. Finally, taking the view that gesturality is an integral part of language, the paper discusses the nature of syntax in sign languages and proposes a conceptualization in which syntax is seen as a set of norms distributed on a continuum between a categorial-conventional end and a gradient-unconventional end.

  12. Language lateralization of hearing native signers: A functional transcranial Doppler sonography (fTCD) study of speech and sign production.

    Science.gov (United States)

    Gutierrez-Sigut, Eva; Daws, Richard; Payne, Heather; Blott, Jonathan; Marshall, Chloë; MacSweeney, Mairéad

    2015-12-01

    Neuroimaging studies suggest greater involvement of the left parietal lobe in sign language compared to speech production. This stronger activation might be linked to the specific demands of sign encoding and proprioceptive monitoring. In Experiment 1 we investigate hemispheric lateralization during sign and speech generation in hearing native users of English and British Sign Language (BSL). Participants exhibited stronger lateralization during BSL than English production. In Experiment 2 we investigated whether this increased lateralization index could be due exclusively to the higher motoric demands of sign production. Sign naïve participants performed a phonological fluency task in English and a non-sign repetition task. Participants were left lateralized in the phonological fluency task but there was no consistent pattern of lateralization for the non-sign repetition in these hearing non-signers. The current data demonstrate stronger left hemisphere lateralization for producing signs than speech, which was not primarily driven by motoric articulatory demands. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
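In fTCD studies of this kind, the lateralization index is typically derived from the left-minus-right difference in relative blood-flow velocity, averaged over a short window around the peak difference. A simplified sketch of that idea (the window length, sampling rate, and data below are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

def lateralization_index(left, right, fs, window_s=2.0):
    """Mean L-R velocity difference in a window centred on the peak
    absolute difference; positive values indicate left lateralization."""
    d = np.asarray(left, float) - np.asarray(right, float)
    peak = int(np.argmax(np.abs(d)))
    half = int(window_s * fs / 2)
    lo, hi = max(0, peak - half), min(len(d), peak + half)
    return float(d[lo:hi].mean())

# illustrative: left-channel velocity rises 3 units above right mid-trial
left = np.zeros(250)
left[100:150] = 3.0
li = lateralization_index(left, np.zeros(250), fs=25)
```

Stronger lateralization during BSL than English production, as reported above, would correspond to a larger positive index in the signing condition.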

  13. The neural correlates of highly iconic structures and topographic discourse in French Sign Language as observed in six hearing native signers.

    Science.gov (United States)

    Courtin, C; Hervé, P-Y; Petit, L; Zago, L; Vigneau, M; Beaucousin, V; Jobard, G; Mazoyer, B; Mellet, E; Tzourio-Mazoyer, N

    2010-09-01

    "Highly iconic" structures in sign language enable a narrator to act, switch characters, describe objects, or report actions in four dimensions. This group of linguistic structures has no real spoken-language equivalent. Topographical descriptions are also achieved in a sign-language-specific manner via the use of signing space and spatial-classifier signs. We used functional magnetic resonance imaging (fMRI) to compare the neural correlates of topographic discourse and highly iconic structures in French Sign Language (LSF) in six hearing native signers, children of deaf adults (CODAs), and six LSF-naïve monolinguals. LSF materials consisted of videos of a lecture excerpt signed without spatially organized discourse or highly iconic structures (Lect LSF), a tale signed using highly iconic structures (Tale LSF), and a topographical description using a diagrammatic format and spatial-classifier signs (Topo LSF). We also presented texts in spoken French (Lect French, Tale French, Topo French) to all participants. With both languages, the Topo texts activated several different regions that are involved in mental navigation and spatial working memory. No specific correlate of LSF spatial discourse was evidenced. The same regions were more activated during Tale LSF than Lect LSF in CODAs, but not in monolinguals, in line with the presence of signing-space structure in both conditions. Motion processing areas and parts of the fusiform gyrus and precuneus were more active during Tale LSF in CODAs; no such effect was observed with French or in LSF-naïve monolinguals. These effects may be associated with perspective-taking and acting during personal transfers. Copyright © 2010 Elsevier Inc. All rights reserved.

  14. Prosodic Markers of Saliency in Humorous Narratives

    Science.gov (United States)

    Pickering, Lucy; Corduas, Marcella; Eisterhold, Jodi; Seifried, Brenna; Eggleston, Alyson; Attardo, Salvatore

    2009-01-01

    Much of what we think we know about the performance of humor relies on our intuitions about prosody (e.g., "it's all about timing"); however, this has never been empirically tested. Thus, the central question addressed in this article is whether speakers mark punch lines in jokes prosodically and, if so, how. To answer this question,…

  15. Metaphors in Chilean Sign Language (Metáforas en Lengua de Señas Chilena)

    Directory of Open Access Journals (Sweden)

    Carolina Becerra

    2008-05-01

    This study examined the characteristics of Chilean deaf people's metaphoric language and its relevance to linguistic comprehension. The question is motivated by the scarcity of research on the topic, particularly in Chile. A qualitative study was conducted based on the analysis of videos of Chilean deaf people's spontaneous sign language. A list of conceptual and non-conceptual metaphors in Chilean Sign Language was compiled, and their comprehension was then evaluated in a group of deaf subjects educated using sign language communication. The results show the existence of metaphors specific to deaf culture. These metaphors are coherent with the particular experiences of deaf subjects and do not necessarily agree with spoken language.

  16. The Beneficial Role of L1 Spoken Language Skills on Initial L2 Sign Language Learning: Cognitive and Linguistic Predictors of M2L2 Acquisition

    Science.gov (United States)

    Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.

    2017-01-01

    Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…

  17. The development and psychometric properties of the American sign language proficiency assessment (ASL-PA).

    Science.gov (United States)

    Maller, S; Singleton, J; Supalla, S; Wix, T

    1999-01-01

    We describe the procedures for constructing an instrument designed to evaluate children's proficiency in American Sign Language (ASL). The American Sign Language Proficiency Assessment (ASL-PA) is a much-needed tool that potentially could be used by researchers, language specialists, and qualified school personnel. A half-hour ASL sample is collected on video from a target child (between ages 6 and 12) across three separate discourse settings and is later analyzed and scored by an assessor who is highly proficient in ASL. After the child's language sample is scored, he or she can be assigned an ASL proficiency rating of Level 1, 2, or 3. At this phase in its development, substantial evidence of reliability and validity has been obtained for the ASL-PA using a sample of 80 profoundly deaf children (ages 6-12) of varying ASL skill levels. The article first explains the item development and administration of the ASL-PA instrument, then describes the empirical item analysis, standard setting procedures, and evidence of reliability and validity. The ASL-PA is a promising instrument for assessing elementary school-age children's ASL proficiency. Plans for further development are also discussed.

  18. Music and Sign Language to Promote Infant and Toddler Communication and Enhance Parent-Child Interaction

    Science.gov (United States)

    Colwell, Cynthia; Memmott, Jenny; Meeker-Miller, Anne

    2014-01-01

    The purpose of this study was to determine the efficacy of using music and/or sign language to promote early communication in infants and toddlers (6-20 months) and to enhance parent-child interactions. Three groups used for this study were pairs of participants (care-giver(s) and child) assigned to each group: 1) Music Alone 2) Sign Language…

  19. Benefits of augmentative signs in word learning: Evidence from children who are deaf/hard of hearing and children with specific language impairment.

    Science.gov (United States)

    van Berkel-van Hoof, Lian; Hermans, Daan; Knoors, Harry; Verhoeven, Ludo

    2016-12-01

    Augmentative signs may facilitate word learning in children with vocabulary difficulties, for example, children who are Deaf/Hard of Hearing (DHH) and children with Specific Language Impairment (SLI). While augmentative signs have also been suggested to aid second language learning in populations with typical language development, empirical evidence in favor of this claim is lacking. We aim to investigate whether augmentative signs facilitate word learning for DHH children, children with SLI, and typically developing (TD) children. Whereas previous studies taught children new labels for familiar objects, the present study taught new labels for new objects. In our word learning experiment children were presented with pictures of imaginary creatures and pseudo words. Half of the words were accompanied by an augmentative pseudo sign. The children were tested for their receptive word knowledge. The DHH children benefitted significantly from augmentative signs, but the children with SLI and TD age-matched peers did not score significantly differently on words from the sign versus no-sign condition. These results suggest that using Sign-Supported Speech in classrooms of bimodal bilingual DHH children may support their spoken language development. The difference between earlier research findings and the present results may be caused by a difference in methodology. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Cognitive Metaphors Used in Colombian Sign Language in Five Autobiographical Stories and the Image Schemata They Are Related to

    Directory of Open Access Journals (Sweden)

    Yenny Rodríguez Hernández

    2016-06-01

    This paper reports the results of an exploratory study whose purpose was to identify and characterize the metaphors in a sample of five videos in Colombian Sign Language (LSC, from its Spanish initials). The data were analyzed using theoretical contributions from Lakoff and Johnson (1980) on cognitive metaphors and image schemata, and from Wilcox (2000) and Taub (2001) on double mapping in sign language. The results present a frequency analysis of the image schemata and metaphors found in metaphorical expressions in five autobiographical narratives by congenitally deaf adults. The study concludes that sign language has cognitive metaphors that let deaf people map from a concrete domain to an abstract one in order to build concepts.

  1. American Sign Language

    Science.gov (United States)

    ... combined with facial expressions and postures of the body. It is the primary language of many North Americans who are deaf and ... their eyebrows, widening their eyes, and tilting their bodies forward. Just as with other languages, specific ways of expressing ideas in ASL vary ...

  2. Explaining Phonology and Reading in Adult Learners: Introducing Prosodic Awareness and Executive Functions to Reading Ability

    Science.gov (United States)

    Chan, Jessica S.; Wade-Woolley, Lesly

    2018-01-01

    Background: This study was designed to extend our understanding of phonology and reading to include suprasegmental awareness using measures of prosodic awareness, which are complex tasks that tap into the rhythmic aspects of phonology. By requiring participants to access, reflect on and manipulate word stress, the prosodic awareness measures used…

  3. DAISY, the best way to author sign language publications

    CSIR Research Space (South Africa)

    Olivrin, G

    2009-09-01

    …are further discussed that will influence the design of future DAISY standards. 2.1 Creation of Sign Language Content: To create a full-text/full-audio and full-text/full-video DAISY test book, the original content of “Laws of the Game 2008/2009” (FIFA…

  4. Functional connectivity in task-negative network of the Deaf: effects of sign language experience

    Directory of Open Access Journals (Sweden)

    Evie Malaia

    2014-06-01

    Prior studies investigating cortical processing in Deaf signers suggest that life-long experience with sign language and/or auditory deprivation may alter the brain’s anatomical structure and the function of brain regions typically recruited for auditory processing (Emmorey et al., 2010; Pénicaud et al., 2013, inter alia). We report the first investigation of the task-negative network in Deaf signers and its functional connectivity—the temporal correlations among spatially remote neurophysiological events. We show that Deaf signers manifest increased functional connectivity between the posterior cingulate/precuneus and left medial temporal gyrus (MTG), as well as between the inferior parietal lobe and medial temporal gyrus in the right hemisphere—areas that have been found to show functional recruitment specifically during sign language processing. These findings suggest that the organization of the brain at the level of inter-network connectivity is likely affected by experience with processing visual language, although sensory deprivation could be another source of the difference. We hypothesize that connectivity alterations in the task-negative network reflect predictive/automatized processing of the visual signal.

  5. A Kinematic Study of Prosodic Structure in Articulatory and Manual Gestures: Results from a Novel Method of Data Collection

    Directory of Open Access Journals (Sweden)

    Jelena Krivokapić

    2017-03-01

    The primary goal of this work is to examine prosodic structure as expressed concurrently through articulatory and manual gestures. Specifically, we investigated the effects of phrase-level prominence (Experiment 1) and of prosodic boundaries (Experiments 2 and 3) on the kinematic properties of oral constriction and manual gestures. The hypothesis guiding this work is that prosodic structure will be similarly expressed in both modalities. To test this, we have developed a novel method of data collection that simultaneously records speech audio, vocal tract gestures (using electromagnetic articulometry) and manual gestures (using motion capture). This method allows us, for the first time, to investigate kinematic properties of body movement and vocal tract gestures simultaneously, which in turn allows us to examine the relationship between speech and body gestures with great precision. A second goal of the paper is thus to establish the validity of this method. Results from two speakers show that manual and oral gestures lengthen under prominence and at prosodic boundaries, indicating that the effects of prosodic structure extend beyond the vocal tract to include body movement.

  6. Functional and anatomical correlates of word-, sentence-, and discourse-level integration in sign language

    Directory of Open Access Journals (Sweden)

    Tomoo eInubushi

    2013-10-01

    In both vocal and sign languages, we can distinguish word-, sentence-, and discourse-level integration in terms of hierarchical processes, which integrate various elements into another, higher level of constructs. In the present study, we used magnetic resonance imaging and voxel-based morphometry to test three language tasks in Japanese Sign Language (JSL): word-level (Word), sentence-level (Sent), and discourse-level (Disc) decision tasks. We analyzed cortical activity and gray matter volumes of Deaf signers, and clarified three major points. First, we found that the activated regions in the frontal language areas gradually expanded along the dorso-ventral axis, corresponding to the difference in linguistic units across the three tasks. Moreover, the activations in each region of the frontal language areas were incrementally modulated with the level of linguistic integration. These dual mechanisms of the frontal language areas may reflect a basic organizational principle of hierarchically integrating linguistic information. Secondly, activations in the lateral premotor cortex and inferior frontal gyrus were left-lateralized. Direct comparisons among the language tasks exhibited more focal activation in these regions, suggesting their functional localization. Thirdly, we found significantly positive correlations between individual task performances and gray matter volumes in localized regions, even when the ages of acquisition of JSL and Japanese were factored out. More specifically, correlations with performance on the Word and Sent tasks were found in the left precentral/postcentral gyrus and insula, respectively, while correlations with performance on the Disc task were found in the left ventral inferior frontal gyrus and precuneus. The unification of functional and anatomical studies would thus be fruitful for understanding human language systems from the aspects of both universality and individuality.

  7. Variation in prosodic planning among individuals and across languages

    OpenAIRE

    Swets, Benjamin; Petrone, Caterina; Fuchs, Susanne; Krivokapić, Jelena

    2016-01-01

    Previous research (Swets et al., 2007) found that working memory (WM) was associated with the manner in which silent readers in multiple languages package linguistic material together. Specifically, those with high WM were more likely to create larger linguistic packages than those with low WM, which in turn influenced the manner in which they interpreted syntactic ambiguities. One possibility that gained support in subsequent research on language production (Petrone e…

  8. Acquisition of Prosodic Focus Marking by English, French, and German Three-, Four-, Five- and Six-Year-Olds

    Science.gov (United States)

    Szendroi, Kriszta; Bernard, Carline; Berger, Frauke; Gervain, Judit; Hohle, Barbara

    2018-01-01

    Previous research on young children's knowledge of prosodic focus marking has revealed an apparent paradox, with comprehension appearing to lag behind production. Comprehension of prosodic focus is difficult to study experimentally due to its subtle and ambiguous contribution to pragmatic meaning. We designed a novel comprehension task, which…

  9. Criteria for Labelling Prosodic Aspects of English Speech.

    Science.gov (United States)

    Bagshaw, Paul C.; Williams, Briony J.

    A study reports a set of labelling criteria developed to label prosodic events in clear, continuous speech, and proposes a scheme whereby this information can be transcribed in a machine-readable format. Prosody was annotated in a syllabic domain synchronized with a phonemic segmentation. A procedural definition of…

  10. Longitudinal Receptive American Sign Language Skills across a Diverse Deaf Student Body

    Science.gov (United States)

    Beal-Alvarez, Jennifer S.

    2016-01-01

    This article presents results of a longitudinal study of receptive American Sign Language (ASL) skills for a large portion of the student body at a residential school for the deaf across four consecutive years. Scores were analyzed by age, gender, parental hearing status, years attending the residential school, and presence of a disability (i.e.,…

  11. Role of sign language in intellectual and social development of deaf children: Review of foreign publications

    Directory of Open Access Journals (Sweden)

    Khokhlova A. Yu.

    2014-12-01

    The article provides an overview of foreign psychological publications concerning sign language as a means of communication for deaf people. It addresses sign language's impact on cognitive development, on effective and positive interaction with parents, and on academic achievement in deaf children.

  12. The duplication of the number of hands in Sign Language, and its semantic effects

    Directory of Open Access Journals (Sweden)

    André Nogueira Xavier

    2015-07-01

    According to Xavier (2006), there are signs in Brazilian Sign Language (Libras) that are typically produced with one hand, while others are made with both hands. However, recent studies document the production, with both hands, of signs which usually use only one hand, and vice-versa (Xavier, 2011; Xavier, 2013; Barbosa, 2013). This study discusses 27 Libras signs which are typically made with one hand and which, when articulated with both hands, present changes in their meanings. The data discussed here, although originally collected from observations of spontaneous signing by different Libras users, were elicited from two deaf participants in separate sessions. After being presented with the two forms of the selected signs (made with one and with two hands), the participants were asked to create examples of use for each of the signs. The results show that the duplication of the hands, at least for some of these signs, may occur due to different factors (such as plurality, aspect and intensity).

  13. Implicit co-activation of American Sign Language in deaf readers: An ERP study.

    Science.gov (United States)

    Meade, Gabriela; Midgley, Katherine J; Sevcikova Sehyr, Zed; Holcomb, Phillip J; Emmorey, Karen

    2017-07-01

    In an implicit phonological priming paradigm, deaf bimodal bilinguals made semantic relatedness decisions for pairs of English words. Half of the semantically unrelated pairs had phonologically related translations in American Sign Language (ASL). As in previous studies with unimodal bilinguals, targets in pairs with phonologically related translations elicited smaller negativities than targets in pairs with phonologically unrelated translations within the N400 window. This suggests that the same lexicosemantic mechanism underlies implicit co-activation of a non-target language, irrespective of language modality. In contrast to unimodal bilingual studies that find no behavioral effects, we observed phonological interference, indicating that bimodal bilinguals may not suppress the non-target language as robustly. Further, a subset of bilinguals who were aware of the ASL manipulation (determined by debriefing) exhibited an effect of ASL phonology in a later time window (700-900 ms). Overall, these results indicate modality-independent language co-activation that persists longer for bimodal bilinguals. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Information and Signs: The Language of Images

    Directory of Open Access Journals (Sweden)

    Inna Semetsky

    2010-03-01

    Since time immemorial, philosophers and scientists have been searching for a “machine code” of the so-called Mentalese language capable of processing information at the pre-verbal, pre-expressive level. In this paper I suggest that human languages are only secondary to a system of primitive extra-linguistic signs which are hardwired in humans and serve as tools for understanding selves and others, and for creating meanings for the multiplicity of experiences. The combinatorial semantics of Mentalese may find its unorthodox expression in the semiotic system of Tarot images, the latter serving as the “keys” to the encoded proto-mental information. The paper uses some works in systems theory by Erich Jantsch and Ervin Laszlo and relates Tarot images to the archetypes of the field of the collective unconscious posited by Carl Jung. Our subconscious beliefs, hopes, fears and desires, of which we may be unaware at the subjective level, do have an objective compositional structure that may be laid down in front of our eyes in the format of pictorial semiotics representing the universe of affects, thoughts, and actions. Constructing imaginative narratives based on the expressive “language” of Tarot images enables us to anticipate possible consequences and consider a range of future options. The thesis advanced in this paper is also supported by the concept of the informational universe of contemporary cosmology.

  15. Deaf leaders’ strategies for working with signed language interpreters: An examination across seven countries.

    NARCIS (Netherlands)

    Haug, T.; Bontempo, K.; Leeson, L.; Napier, J.; Nicodemus, B.; Van den Bogaerde, B.; Vermeerbergen, M.

    In this paper, we report interview data from 14 Deaf leaders across seven countries (Australia, Belgium, Ireland, the Netherlands, Switzerland, the United Kingdom, and the United States) regarding their perspectives on signed language interpreters. Using a semistructured survey questionnaire, seven

  16. Creating a Digital Jamaican Sign Language Dictionary: A R2D2 Approach

    Science.gov (United States)

    MacKinnon, Gregory; Soutar, Iris

    2015-01-01

    The Jamaican Association for the Deaf, in their responsibilities to oversee education for individuals who are deaf in Jamaica, has demonstrated an urgent need for a dictionary that assists students, educators, and parents with the practical use of "Jamaican Sign Language." While paper versions of a preliminary resource have been explored…

  17. Stress 'deafness' in a language with fixed word stress: an ERP study on Polish

    Directory of Open Access Journals (Sweden)

    Ulrike eDomahs

    2012-11-01

    The aim of the present contribution was to examine the factors influencing prosodic processing in a language with predictable word stress. For Polish, a language with fixed penultimate stress but several well-defined exceptions, difficulties in the processing and representation of prosodic information have been reported (e.g., Peperkamp & Dupoux, 2002). The present study used event-related potentials (ERPs) to investigate two such factors: (i) the predictability of stress and (ii) the prosodic structure in terms of metrical feet. Polish native speakers were presented with correctly and incorrectly stressed Polish words and instructed to judge the correctness of the perceived stress patterns. For each stress violation an early negativity was found, which was interpreted as the reflection of an error-detection mechanism; in addition, exceptional (antepenultimate) and post-lexical (initial) stress patterns evoked a task-related positivity effect (P300) whose amplitude and latency are correlated with the degree of anomaly and deviation from an expectation. Violations involving the default (penultimate) stress, in contrast, did not produce such an effect. This asymmetrical result is interpreted to reflect that Polish native speakers are less sensitive to the default pattern than to the exceptional or post-lexical patterns. The behavioral results are orthogonal to the electrophysiological results, showing that Polish speakers had difficulties rejecting any kind of stress violation. Thus, on a meta-linguistic level Polish speakers appeared to be stress-‘deaf’ to any kind of stress manipulation, whereas the neural reactions differentiate between the default and lexicalized patterns.

  18. Conversational quality is affected by and reflected in prosodic entrainment

    DEFF Research Database (Denmark)

    Michalsky, Jan; Niebuhr, Oliver; Schoormann, Heike

    2018-01-01

    Prosodic entrainment is connected to various forms of communicative success. One possibility to assess successful communication in non-task-oriented everyday conversations is through the participants’ perception of conversational quality. In this study we investigate whether a speaker’s degree of...

  19. How Do Typically Developing Deaf Children and Deaf Children with Autism Spectrum Disorder Use the Face When Comprehending Emotional Facial Expressions in British Sign Language?

    Science.gov (United States)

    Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John

    2014-01-01

    Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their…

  20. Analysis of engagement behavior in children during dyadic interactions using prosodic cues.

    Science.gov (United States)

    Gupta, Rahul; Bone, Daniel; Lee, Sungbok; Narayanan, Shrikanth

    2016-05-01

    Child engagement is defined as the interaction of a child with his/her environment in a contextually appropriate manner. Engagement behavior in children is linked to socio-emotional and cognitive state assessment with enhanced engagement identified with improved skills. A vast majority of studies however rely solely, and often implicitly, on subjective perceptual measures of engagement. Access to automatic quantification could assist researchers/clinicians to objectively interpret engagement with respect to a target behavior or condition, and furthermore inform mechanisms for improving engagement in various settings. In this paper, we present an engagement prediction system based exclusively on vocal cues observed during structured interaction between a child and a psychologist involving several tasks. Specifically, we derive prosodic cues that capture engagement levels across the various tasks. Our experiments suggest that a child's engagement is reflected not only in the vocalizations, but also in the speech of the interacting psychologist. Moreover, we show that prosodic cues are informative of the engagement phenomena not only as characterized over the entire task (i.e., global cues), but also in short term patterns (i.e., local cues). We perform a classification experiment assigning the engagement of a child into three discrete levels achieving an unweighted average recall of 55.8% (chance is 33.3%). While the systems using global cues and local level cues are each statistically significant in predicting engagement, we obtain the best results after fusing these two components. We perform further analysis of the cues at local and global levels to achieve insights linking specific prosodic patterns to the engagement phenomenon. We observe that while the performance of our model varies with task setting and interacting psychologist, there exist universal prosodic patterns reflective of engagement.

  1. The Nature of Hemispheric Specialization for Linguistic and Emotional Prosodic Perception: A Meta-Analysis of the Lesion Literature

    Science.gov (United States)

    Witteman, Jurriaan; van IJzendoorn, Marinus H.; van de Velde, Daan; van Heuven, Vincent J. J. P.; Schiller, Niels O.

    2011-01-01

    It is unclear whether there is hemispheric specialization for prosodic perception and, if so, what the nature of this hemispheric asymmetry is. Using the lesion-approach, many studies have attempted to test whether there is hemispheric specialization for emotional and linguistic prosodic perception by examining the impact of left vs. right…

  2. Access to New Zealand Sign Language interpreters and quality of life for the deaf: a pilot study.

    Science.gov (United States)

    Henning, Marcus A; Krägeloh, Christian U; Sameshima, Shizue; Shepherd, Daniel; Shepherd, Gregory; Billington, Rex

    2011-01-01

    This paper aims to: (1) explore the usage and accessibility of sign language interpreters, (2) appraise the levels of quality of life (QOL) of deaf adults residing in New Zealand, and (3) consider the impact of access to and usage of sign language interpreters on QOL. Sixty-eight deaf adults living in New Zealand participated in this study. Two questionnaires were employed: a 12-item instrument about access to and use of New Zealand Sign Language interpreters, and the abbreviated version of the World Health Organization Quality of Life questionnaire (WHOQOL-BREF). The results showed that 39% of this sample felt that they were unable to adequately access interpreting services. Moreover, this group scored significantly lower than a comparable hearing sample on all four WHOQOL-BREF domains. Finally, the findings revealed that access to good-quality interpreters was associated with access to health services, transport, engagement in leisure activities, gaining more information, mobility and living in a healthy environment. These findings have consequences for policy makers and agencies interested in ensuring an equitable distribution of essential services for all groups within New Zealand, which inevitably has an impact on the health of the individual.

  3. Random Forest-Based Recognition of Isolated Sign Language Subwords Using Data from Accelerometers and Surface Electromyographic Sensors.

    Science.gov (United States)

    Su, Ruiliang; Chen, Xiang; Cao, Shuai; Zhang, Xu

    2016-01-14

    Sign language recognition (SLR) has been widely used for communication amongst the hearing-impaired and non-verbal community. This paper proposes an accurate and robust SLR framework using an improved decision tree as the base classifier of random forests. This framework was used to recognize Chinese sign language subwords using recordings from a pair of portable devices worn on both arms consisting of accelerometers (ACC) and surface electromyography (sEMG) sensors. The experimental results demonstrated the validity of the proposed random forest-based method for recognition of Chinese sign language (CSL) subwords. With the proposed method, 98.25% average accuracy was obtained for the classification of a list of 121 frequently used CSL subwords. Moreover, the random forests method demonstrated a superior performance in resisting the impact of bad training samples. When the proportion of bad samples in the training set reached 50%, the recognition error rate of the random forest-based method was only 10.67%, while that of a single decision tree adopted in our previous work was almost 27.5%. Our study offers a practical way of realizing a robust and wearable EMG-ACC-based SLR system.
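    The classification pipeline the abstract describes can be sketched as follows. This is an illustrative stand-in, not the authors' code: the feature vectors are synthetic placeholders for windowed ACC/sEMG statistics, and scikit-learn's stock random forest replaces the paper's improved decision trees.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical features: each sign subword sample is a vector of per-channel
# statistics (e.g., means and variances) from ACC and sEMG windows.
n_classes, n_per_class, n_features = 4, 40, 12
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# A random forest averages votes over many trees, which is what gives it the
# robustness to mislabeled ("bad") training samples reported in the abstract.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))
```

    In practice one would replace the synthetic `X` with real windowed sensor features and evaluate on a held-out test split rather than the training set.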

  4. Early vocabulary development in deaf native signers: a British Sign Language adaptation of the communicative development inventories.

    Science.gov (United States)

    Woolfe, Tyron; Herman, Rosalind; Roy, Penny; Woll, Bencie

    2010-03-01

    There is a dearth of assessments of sign language development in young deaf children. This study gathered age-related scores from a sample of deaf native signing children using an adapted version of the MacArthur-Bates CDI (Fenson et al., 1994). Parental reports on children's receptive and expressive signing were collected longitudinally on 29 deaf native British Sign Language (BSL) users, aged 8-36 months, yielding 146 datasets. A smooth upward growth curve was obtained for early vocabulary development and percentile scores were derived. In the main, receptive scores were in advance of expressive scores. No gender bias was observed. Correlational analysis identified factors associated with vocabulary development, including parental education and mothers' training in BSL. Individual children's profiles showed a range of development and some evidence of a growth spurt. Clinical and research issues relating to the measure are discussed. The study has developed a valid, reliable measure of vocabulary development in BSL. Further research is needed to investigate the relationship between vocabulary acquisition in native and non-native signers.

  5. Prosodic characteristics of read speech before and after treadmill running

    NARCIS (Netherlands)

    Trouvain, Jürgen; Truong, Khiet Phuong

    Physical activity leads to a respiratory behaviour that is very different to a resting state and that influences speech production. How speech parameters are exactly affected by physical activity remains largely unknown. Hence, we investigated how several prosodic parameters change under influence

  6. Prosodic Function Row in Persian Poetry

    Directory of Open Access Journals (Sweden)

    Majid Mansouri

    2017-04-01

    The main reason for the emergence of the row in Persian poetry is its prosodic function, which has so far received little attention. The only comparable treatment the author has found is in the book Ghosn al-ban, whose author held a view of the row similar to the one presented here. In this study we attempt, by means of a new approach, to demonstrate this further reason for the entry and spread of the row in Persian poetry, avoiding as far as possible the repetitive and stereotyped points usually made about the row.

  7. The Phonological-Distributional Coherence Hypothesis: Cross-Linguistic Evidence in Language Acquisition

    Science.gov (United States)

    Monaghan, Padraic; Christiansen, Morten H.; Chater, Nick

    2007-01-01

    Several phonological and prosodic properties of words have been shown to relate to differences between grammatical categories. Distributional information about grammatical categories is also a rich source in the child's language environment. In this paper we hypothesise that such cues operate in tandem for developing the child's knowledge about…

  8. Identification of four class emotion from Indonesian spoken language using acoustic and lexical features

    Science.gov (United States)

    Kasyidi, Fatan; Puji Lestari, Dessi

    2018-03-01

    One of the important aspects of human-to-human communication is understanding the emotion of each party. Interactions between humans and computers also continue to develop, especially affective interaction, of which emotion recognition is an important component. This paper presents our extended work on emotion recognition in spoken Indonesian, identifying four main classes of emotion: Happy, Sad, Angry, and Contentment, using a combination of acoustic/prosodic features and lexical features. We constructed an emotion speech corpus from Indonesian television talk shows, where the situations are as close as possible to natural ones. After constructing the emotion speech corpus, the acoustic/prosodic and lexical features were extracted to train the emotion model. We employed several machine learning algorithms, such as Support Vector Machine (SVM), Naive Bayes, and Random Forest, to obtain the best model. The experimental results on the test data show that the best model achieves an F-measure of 0.447 using only the acoustic/prosodic features, and an F-measure of 0.488 using both acoustic/prosodic and lexical features, to recognize the four emotion classes with an SVM with RBF kernel.
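    The feature-fusion setup described above can be sketched as below. All names and numbers here are illustrative assumptions, not the authors' corpus or code: synthetic vectors stand in for acoustic/prosodic and lexical features, which are concatenated and fed to an RBF-kernel SVM, with macro-averaged F-measure as the score.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
labels = ["happy", "sad", "angry", "contentment"]

# Placeholder features: e.g., pitch/energy statistics (acoustic/prosodic)
# and word-based scores (lexical), 30 utterances per emotion class.
acoustic = np.vstack([rng.normal(i, 1.0, size=(30, 6)) for i in range(4)])
lexical = np.vstack([rng.normal(i, 1.0, size=(30, 4)) for i in range(4)])
y = np.repeat(labels, 30)

X = np.hstack([acoustic, lexical])  # feature-level fusion by concatenation
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(f1_score(y, clf.predict(X), average="macro"))
```

    The macro average weights each of the four emotion classes equally, which matters when class frequencies in a talk-show corpus are unbalanced.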

  9. Sign Language Interpreting in Theatre: Using the Human Body to Create Pictures of the Human Soul

    Directory of Open Access Journals (Sweden)

    Michael Richardson

    2017-06-01

    Full Text Available This paper explores theatrical interpreting for Deaf spectators, a specialism that both blurs the separation between translation and interpreting, and replaces these potentials with a paradigm in which the translator's body is central to the production of the target text. Meaningful written translations of dramatic texts into sign language are not currently possible. For Deaf people to access Shakespeare or Moliere in their own language usually means attending a sign language interpreted performance, a typically disappointing experience that fails to provide accessibility or to fulfil the potential of a dynamically equivalent theatrical translation. I argue that when such interpreting events fail, significant contributory factors are the challenges involved in producing such a target text and the insufficient embodiment of that text. The second of these factors suggests that the existing conference and community models of interpreting are insufficient for describing theatrical interpreting. I propose that a model drawn from Theatre Studies, namely psychophysical acting, might be more effective for conceptualising theatrical interpreting. I also draw on theories from neurological research into the Mirror Neuron System to suggest that a highly visual and physical approach to performance (be that by actors or interpreters) is more effective in building a strong actor-spectator interaction than a performance in which meaning is conveyed by spoken words. Arguably this difference in language impact between signed and spoken is irrelevant to hearing audiences attending spoken language plays, but I suggest that for all theatre translators the implications are significant: it is not enough to create a literary translation as the target text; it is also essential to produce a text that suggests physicality. The aim should be the creation of a text which demands full expression through the body, the best picture of the human soul and the fundamental medium

  10. Usability of American Sign Language Videos for Presenting Mathematics Assessment Content.

    Science.gov (United States)

    Hansen, Eric G; Loew, Ruth C; Laitusis, Cara C; Kushalnagar, Poorna; Pagliaro, Claudia M; Kurz, Christopher

    2018-04-12

    There is considerable interest in determining whether high-quality American Sign Language videos can be used as an accommodation in tests of mathematics at both K-12 and postsecondary levels; and in learning more about the usability (e.g., comprehensibility) of ASL videos with two different types of signers - avatar (animated figure) and human. The researchers describe the results of administering each of nine pre-college mathematics items in both avatar and human versions to each of 31 Deaf participants with high school and post-high school backgrounds. This study differed from earlier studies by obliging the participants to rely on the ASL videos to answer the items. While participants preferred the human version over the avatar version (apparently due largely to the better expressiveness and fluency of the human), there was no discernible relationship between mathematics performance and signed version.

  11. [Instruments in Brazilian Sign Language for assessing the quality of life of the deaf population].

    Science.gov (United States)

    Chaveiro, Neuma; Duarte, Soraya Bianca Reis; Freitas, Adriana Ribeiro de; Barbosa, Maria Alves; Porto, Celmo Celeno; Fleck, Marcelo Pio de Almeida

    2013-06-01

    To construct versions of the WHOQOL-BREF and WHOQOL-DIS instruments in Brazilian sign language to evaluate the Brazilian deaf population's quality of life. The methodology proposed by the World Health Organization (WHOQOL-BREF and WHOQOL-DIS) was used to construct instruments adapted to the deaf community using Brazilian Sign Language (Libras). The research for constructing the instrument took place in 13 phases: 1) creating the QUALITY OF LIFE sign; 2) developing the answer scales in Libras; 3) translation by a bilingual group; 4) synthesized version; 5) first back translation; 6) production of the version in Libras to be provided to the focal groups; 7) carrying out the focal groups; 8) review by a monolingual group; 9) revision by the bilingual group; 10) semantic/syntactic analysis and second back translation; 11) re-evaluation of the back translation by the bilingual group; 12) recording the version into the software; 13) developing the WHOQOL-BREF and WHOQOL-DIS software in Libras. Characteristics peculiar to the culture of the deaf population indicated the necessity of adapting the application methodology of focal groups composed of deaf people. The writing conventions of sign languages have not yet been consolidated, leading to difficulties in graphically registering the translation phases. Linguistic structures that caused major problems in translation were those that included idiomatic Portuguese expressions, for many of which there are no equivalent concepts between Portuguese and Libras. In the end, it was possible to create WHOQOL-BREF and WHOQOL-DIS software in Libras. The WHOQOL-BREF and the WHOQOL-DIS in Libras will allow the deaf to express themselves about their quality of life in an autonomous way, making it possible to investigate these issues more accurately.

  12. Sign Language Translator Application Using OpenCV

    Science.gov (United States)

    Triyono, L.; Pratisto, E. H.; Bawono, S. A. T.; Purnomo, F. A.; Yudhanto, Y.; Raharjo, B.

    2018-03-01

    This research focuses on the development of an Android-based sign language translator application using OpenCV; recognition is based on colour differences. The authors also utilize a support vector machine to predict the label. The results showed that a fingertip-coordinate search method can be used to recognize gestures made with an open hand, while gestures made with a clenched hand are recognized using Hu Moments values. The fingertip method proved more resilient in gesture recognition, with a success rate of 95% at distance variations of 35 cm and 55 cm, light intensities of approximately 90 lux and 100 lux, and a plain green background, compared with a 40% success rate for the Hu Moments method under the same parameters. Against an outdoor background, however, the application still cannot be used, with only 6 recognitions succeeding and the rest failing.
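    The Hu Moments mentioned above are standard shape descriptors computed from image moments (OpenCV exposes them as `cv2.HuMoments`). The from-scratch sketch below, which is illustrative and not the application's code, computes the first four invariants for a binary hand-silhouette mask and shows that they are unchanged when the shape is translated, which is why they suit gesture matching.

```python
import numpy as np

def hu_moments(img):
    """First four Hu invariants of a binary (or grayscale) image."""
    img = np.asarray(img, dtype=float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xbar, ybar = (xs * img).sum() / m00, (ys * img).sum() / m00

    def mu(p, q):  # central moment: translation-invariant
        return ((xs - xbar) ** p * (ys - ybar) ** q * img).sum()

    def eta(p, q):  # normalized central moment: also scale-invariant
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
    ])

mask = np.zeros((32, 32))
mask[8:20, 10:25] = 1.0                                # hand-silhouette stand-in
shifted = np.roll(np.roll(mask, 5, axis=0), 3, axis=1)  # same shape, translated
print(np.allclose(hu_moments(mask), hu_moments(shifted)))
```

    Because central moments are taken about the shape's own centroid, translating the silhouette leaves the invariants unchanged, so a clenched-hand template can be matched regardless of where the hand sits in the frame.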

  13. A Real-time Face/Hand Tracking Method for Chinese Sign Language Recognition

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper introduces a new Chinese Sign Language recognition (CSLR) system and a method for real-time tracking of the face and hands used in the system. In this method, an improved agent algorithm is used to extract the face and hand regions and track them. A Kalman filter is introduced to predict the position and the search rectangle, and a self-adapting target-colour model is designed to counteract the effect of illumination.
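    A minimal sketch of the Kalman prediction step used for tracking, under assumed parameters (the state model, noise levels, and motion below are illustrative, not the paper's implementation): a constant-velocity filter with state [x, y, vx, vy] predicts where to centre the next search rectangle, then corrects with the detected hand position.

```python
import numpy as np

dt = 1.0  # one frame
F = np.array([[1, 0, dt, 0],   # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1.0]])
H = np.array([[1, 0, 0, 0],    # only position is measured
              [0, 1, 0, 0.0]])
Q = 0.01 * np.eye(4)           # process noise (assumed)
R = 1.0 * np.eye(2)            # measurement noise (assumed)

x = np.zeros(4)                # state estimate [x, y, vx, vy]
P = 10.0 * np.eye(4)           # initial uncertainty

def step(x, P, z):
    # Predict: this predicted position would centre the next search window.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured hand position z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Simulated hand moving 2 px/frame right and 1 px/frame down, noisy detections.
rng = np.random.default_rng(2)
for t in range(30):
    z = np.array([2.0 * t, 1.0 * t]) + rng.normal(0, 0.5, 2)
    x, P = step(x, P, z)

print(x[2:])  # estimated velocity, close to [2, 1]
```

    Restricting the colour-based search to the predicted rectangle is what makes the tracker real-time: only a small region needs to be scanned each frame.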

  14. Suspending the next turn as a form of repair initiation: evidence from Argentine Sign Language

    Directory of Open Access Journals (Sweden)

    Elizabeth eManrique

    2015-09-01

    Full Text Available Practices of other-initiated repair deal with problems of hearing or understanding what another person has said in the fast-moving turn-by-turn flow of conversation. As such, other-initiated repair plays a fundamental role in the maintenance of intersubjectivity in social interaction. This study finds and analyses a special type of other-initiated repair that is used in turn-by-turn conversation in a sign language: Argentine Sign Language (Lengua de Señas Argentina, or LSA). We describe a type of response termed a ‘freeze-look’, which occurs when a person has just been asked a direct question: instead of answering the question in the next turn position, the person holds still while looking directly at the questioner. In these cases it is clear that the person is aware of having just been addressed and is not otherwise accounting for their delay in responding (e.g., by displaying a ‘thinking’ face or hesitation, etc.). We find that this behavior functions as a way for an addressee to initiate repair by the person who asked the question. The ‘freeze-look’ results in the questioner ‘re-doing’ their action of asking a question, for example by repeating or rephrasing it. Thus we argue that the ‘freeze-look’ is a practice for other-initiation of repair. In addition, we argue that it is an ‘off-record’ practice, thus contrasting with known on-record practices such as saying ‘Huh?’ or equivalents. The findings aim to contribute to research on human understanding in everyday turn-by-turn conversation by looking at an understudied sign language, with possible implications for our understanding of visual bodily communication in spoken languages as well.

  15. Evidence of an association between sign language phonological awareness and word reading in deaf and hard-of-hearing children.

    Science.gov (United States)

    Holmer, Emil; Heimann, Mikael; Rudner, Mary

    2016-01-01

    Children with good phonological awareness (PA) are often good word readers. Here, we asked whether Swedish deaf and hard-of-hearing (DHH) children who are more aware of the phonology of Swedish Sign Language, a language with no orthography, are better at reading words in Swedish. We developed the Cross-modal Phonological Awareness Test (C-PhAT) that can be used to assess PA in both Swedish Sign Language (C-PhAT-SSL) and Swedish (C-PhAT-Swed), and investigated how C-PhAT performance was related to word reading as well as linguistic and cognitive skills. We validated C-PhAT-Swed and administered C-PhAT-Swed and C-PhAT-SSL to DHH children who attended Swedish deaf schools with a bilingual curriculum and were at an early stage of reading. C-PhAT-SSL correlated significantly with word reading for DHH children. They performed poorly on C-PhAT-Swed and their scores did not correlate significantly either with C-PhAT-SSL or word reading, although they did correlate significantly with cognitive measures. These results provide preliminary evidence that DHH children with good sign language PA are better at reading words and show that measures of spoken language PA in DHH children may be confounded by individual differences in cognitive skills. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. Znaky pro barvy v českém znakovém jazyce a jejich etymologie / Colour Terms in the Czech Sign Language and Their Etymology

    Directory of Open Access Journals (Sweden)

    Lenka Okrouhlíková

    2016-06-01

    Full Text Available The text deals with the signs in the Czech Sign Language for the basic colours — white, black, red, green, yellow, blue, brown and grey — from a diachronic point of view. On the basis of historical written descriptions of these signs from 1834–1907, the motivation of the signs is analysed (the signs were derived from typical objects of the particular colour), along with their gradual lexicalization and form (especially the components of the signs: the place of articulation, handshape and movement). At the same time, the historical signs are compared to the current signs, and the text analyses trends in the changes of the phonological/morphological structure of the signs (changes in place of articulation, moving down from the centre to the periphery of the face; shortening of the movement; changes in handshape; etc.). In addition, the text examines the possible relationship of these signs to the signs for colours in Austrian, German and French Sign Language (the languages that, according to preserved records, had been used in deaf education at the end of the 18th and in the 19th century). For the historical signs, motivation and form were compared, together with a detailed look at the contemporary signs. This is the first etymological examination of the Czech Sign Language.

  17. Phrase-Final Words in Greek Storytelling Speech: A Study on the Effect of a Culturally-Specific Prosodic Feature on Short-Term Memory.

    Science.gov (United States)

    Loutrari, Ariadne; Tselekidou, Freideriki; Proios, Hariklia

    2018-02-27

    Prosodic patterns of speech appear to make a critical contribution to memory-related processing. We considered the case of a previously unexplored prosodic feature of Greek storytelling and its effect on free recall in thirty typically developing children between the ages of 10 and 12 years, using short ecologically valid auditory stimuli. The combination of a falling pitch contour and, more notably, extensive final-syllable vowel lengthening, which gives rise to the prosodic feature in question, led to statistically significantly higher performance in comparison to neutral phrase-final prosody. The number of syllables in target words did not reveal a substantial difference in performance. The current study presents a previously undocumented culturally-specific prosodic pattern and its effect on short-term memory.

  18. The English-Language and Reading Achievement of a Cohort of Deaf Students Speaking and Signing Standard English: A Preliminary Study.

    Science.gov (United States)

    Nielsen, Diane Corcoran; Luetke, Barbara; McLean, Meigan; Stryker, Deborah

    2016-01-01

    Research suggests that English-language proficiency is critical if students who are deaf or hard of hearing (D/HH) are to read as their hearing peers. One explanation for the traditionally reported reading achievement plateau when students are D/HH is the inability to hear insalient English morphology. Signing Exact English can provide visual access to these features. The authors investigated the English morphological and syntactic abilities and reading achievement of elementary and middle school students at a school using simultaneously spoken and signed Standard American English facilitated by intentional listening, speech, and language strategies. A developmental trend (and no plateau) in language and reading achievement was detected; most participants demonstrated average or above-average English. Morphological awareness was prerequisite to high test scores; speech was not significantly correlated with achievement; language proficiency, measured by the Clinical Evaluation of Language Fundamentals-4 (Semel, Wiig, & Secord, 2003), predicted reading achievement.

  19. Comparing the Picture Exchange Communication System and Sign Language Training for Children with Autism

    Science.gov (United States)

    Tincani, Matt

    2004-01-01

    This study compared the effects of Picture Exchange Communication System (PECS) and sign language training on the acquisition of mands (requests for preferred items) of students with autism. The study also examined the differential effects of each modality on students' acquisition of vocal behavior. Participants were two elementary school students…

  20. Where to Look for American Sign Language (ASL) Sublexical Structure in the Visual World: Reply to Salverda (2016)

    Science.gov (United States)

    Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.

    2016-01-01

    In this reply to Salverda (2016), we address a critique of the claims made in our recent study of real-time processing of American Sign Language (ASL) signs using a novel visual world eye-tracking paradigm (Lieberman, Borovsky, Hatrak, & Mayberry, 2015). Salverda asserts that our data do not support our conclusion that native signers and…

  1. Processing of prosodic changes in natural speech stimuli in school-age children.

    Science.gov (United States)

    Lindström, R; Lepistö, T; Makkonen, T; Kujala, T

    2012-12-01

    Speech prosody conveys information about important aspects of communication: the meaning of the sentence and the emotional state or intention of the speaker. The present study addressed processing of emotional prosodic changes in natural speech stimuli in school-age children (mean age 10 years) by recording the electroencephalogram, facial electromyography, and behavioral responses. The stimulus was a semantically neutral Finnish word uttered with four different emotional connotations: neutral, commanding, sad, and scornful. In the behavioral sound-discrimination task the reaction times were fastest for the commanding stimulus and longest for the scornful stimulus, and faster for the neutral than for the sad stimulus. EEG and EMG responses were measured during a non-attentive oddball paradigm. Prosodic changes elicited a negative-going, fronto-centrally distributed neural response peaking at about 500 ms from the onset of the stimulus, followed by a fronto-central positive deflection, peaking at about 740 ms. For the commanding stimulus a rapid negative deflection peaking at about 290 ms from stimulus onset was also elicited. No reliable stimulus-type-specific rapid facial reactions were found. The results show that prosodic changes in natural speech stimuli activate pre-attentive neural change-detection mechanisms in school-age children. However, the results do not support the suggestion of automaticity of emotion-specific facial muscle responses to non-attended emotional speech stimuli in children. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Sign Language as Medium of Instruction in Botswana Primary Schools: Voices from the Field

    Science.gov (United States)

    Mpuang, Kerileng D.; Mukhopadhyay, Sourav; Malatsi, Nelly

    2015-01-01

    This descriptive phenomenological study investigates teachers' experiences of using sign language for learners who are deaf in the primary schools in Botswana. Eight in-service teachers who have had more than ten years of teaching deaf or hard of hearing (DHH) learners were purposively selected for this study. Data were collected using multiple…

  3. Does incongruence of lexicosemantic and prosodic information cause discernible cognitive conflict?

    Science.gov (United States)

    Mitchell, Rachel L C

    2006-12-01

    We are often required to interpret discordant emotional signals. Whereas equivalent cognitive paradigms cause noticeable conflict via their behavioral and psychophysiological effects, the same may not necessarily be true for discordant emotions. Skin conductance responses (SCRs) and heart rates (HRs) were measured during a classic Stroop task and one in which the emotions conveyed by lexicosemantic content and prosody were congruent or incongruent. The participants' task was to identify the emotion conveyed by lexicosemantic content or prosody. No relationship was observed between HR and congruence. SCR was higher during incongruent than during congruent conditions of the experimental task (as well as in the classic Stroop task), but no difference in SCR was observed in a comparison between congruence effects during lexicosemantic emotion identification and those during prosodic emotion identification. It is concluded that incongruence between lexicosemantic and prosodic emotion does cause notable cognitive conflict. Functional neuroanatomic implications are discussed.

  4. COMPARATIVE ANALYSIS OF THE STRUCTURE OF THE AMERICAN AND MACEDONIAN SIGN LANGUAGE

    OpenAIRE

    Aleksandra KAROVSKA RISTOVSKA

    2014-01-01

    Aleksandra Karovska Ristovska, M.A. in special education and rehabilitation sciences, defended her doctoral thesis on 9 March 2014 at the Institute of Special Education and Rehabilitation, Faculty of Philosophy, University “Ss. Cyril and Methodius”, Skopje, in front of a commission composed of: Prof. Zora Jachova, PhD; Prof. Jasmina Kovachevikj, PhD; Prof. Ljudmil Spasov, PhD; Prof. Goran Ajdinski, PhD; Prof. Daniela Dimitrova Radojicikj, PhD. The Macedonian Sign Language is a natural ...

  5. Concise Lexicon for Sign Linguistics

    NARCIS (Netherlands)

    dr. Jan Nijen Twilhaar; Dr. Beppie van den Bogaerde

    2016-01-01

    This extensive, well-researched and clearly formatted lexicon of a wide variety of linguistic terms is long overdue. It is an extremely welcome addition to the bookshelves of sign language teachers, interpreters, linguists, learners and other sign language users, and of course of the Deaf

  6. The cost and utilisation patterns of a pilot sign language interpreter service for primary health care services in South Africa.

    Directory of Open Access Journals (Sweden)

    Tryphine Zulu

    Full Text Available The World Health Organisation estimates disabling hearing loss to be around 5.3%, while a study of hearing impairment and auditory pathology in Limpopo, South Africa found a prevalence of nearly 9%. Although Sign Language Interpreters (SLIs) improve the communication challenges in health care, they are unaffordable for many signing Deaf people and people with disabling hearing loss. On the other hand, there are no legal provisions in place to ensure the provision of SLIs in the health sector in most countries including South Africa. To advocate for funding of such initiatives, reliable cost estimates are essential and such data is scarce. To bridge this gap, this study estimated the costs of providing such a service within a South African District health service based on estimates obtained from a pilot project that initiated the first South African Sign Language Interpreter (SASLI) service in health care. The ingredients method was used to calculate the unit cost per SASLI-assisted visit from a provider perspective. The unit costs per SASLI-assisted visit were then used in estimating the costs of scaling up this service to the District Health Services. The average annual SASLI utilisation rate per person was calculated on Stata v.12 using the project's registry from 2008-2013. Sensitivity analyses were carried out to determine the effect of changing the discount rate and personnel costs. Average Sign Language Interpreter services' utilisation rates increased from 1.66 to 3.58 per person per year, with a median of 2 visits, from 2008-2013. The cost per visit was US$189.38 in 2013 whilst the estimated costs of scaling up this service ranged from US$14.2million to US$76.5million in the Cape Metropole District. These cost estimates represented 2.3%-12.2% of the budget for the Western Cape District Health Services for 2013. In the presence of Sign Language Interpreters, Deaf Sign language users utilise health care service to a similar extent as the

  7. The cost and utilisation patterns of a pilot sign language interpreter service for primary health care services in South Africa.

    Science.gov (United States)

    Zulu, Tryphine; Heap, Marion; Sinanovic, Edina

    2017-01-01

    The World Health Organisation estimates disabling hearing loss to be around 5.3%, while a study of hearing impairment and auditory pathology in Limpopo, South Africa found a prevalence of nearly 9%. Although Sign Language Interpreters (SLIs) improve the communication challenges in health care, they are unaffordable for many signing Deaf people and people with disabling hearing loss. On the other hand, there are no legal provisions in place to ensure the provision of SLIs in the health sector in most countries including South Africa. To advocate for funding of such initiatives, reliable cost estimates are essential and such data is scarce. To bridge this gap, this study estimated the costs of providing such a service within a South African District health service based on estimates obtained from a pilot-project that initiated the first South African Sign Language Interpreter (SASLI) service in health-care. The ingredients method was used to calculate the unit cost per SASLI-assisted visit from a provider perspective. The unit costs per SASLI-assisted visit were then used in estimating the costs of scaling up this service to the District Health Services. The average annual SASLI utilisation rate per person was calculated on Stata v.12 using the projects' registry from 2008-2013. Sensitivity analyses were carried out to determine the effect of changing the discount rate and personnel costs. Average Sign Language Interpreter services' utilisation rates increased from 1.66 to 3.58 per person per year, with a median of 2 visits, from 2008-2013. The cost per visit was US$189.38 in 2013 whilst the estimated costs of scaling up this service ranged from US$14.2million to US$76.5million in the Cape Metropole District. These cost estimates represented 2.3%-12.2% of the budget for the Western Cape District Health Services for 2013. In the presence of Sign Language Interpreters, Deaf Sign language users utilise health care service to a similar extent as the hearing population
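    The scaling-up arithmetic in this record is simple to reproduce. The sketch below uses the published 2013 unit cost (US$189.38) and utilisation rate (3.58 visits/person/year); the population sizes are hypothetical placeholders chosen only to bracket figures of the order the study reports, not the study's actual district estimates.

```python
# Published figures from the abstract (2013):
unit_cost_per_visit = 189.38       # US$ per SASLI-assisted visit
visits_per_person_per_year = 3.58  # average annual utilisation rate

def annual_cost(n_deaf_signers):
    """Annual cost of scaling the interpreter service to a population."""
    return n_deaf_signers * visits_per_person_per_year * unit_cost_per_visit

# Hypothetical population sizes (illustrative assumptions, not study data):
for n in (21_000, 113_000):
    print(f"{n:>7} users -> US${annual_cost(n) / 1e6:.1f} million / year")
```

    The sensitivity analyses mentioned in the abstract would vary the cost inputs (e.g., personnel costs and the discount rate) and recompute this total.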

  8. Designing an American Sign Language Avatar for Learning Computer Science Concepts for Deaf or Hard-of-Hearing Students and Deaf Interpreters

    Science.gov (United States)

    Andrei, Stefan; Osborne, Lawrence; Smith, Zanthia

    2013-01-01

    The current learning process of Deaf or Hard of Hearing (D/HH) students taking Science, Technology, Engineering, and Mathematics (STEM) courses needs, in general, a sign interpreter for the translation of English text into American Sign Language (ASL) signs. This method is at best impractical due to the lack of availability of a specialized sign…

  9. Electrophysiology of prosodic and lexical-semantic processing during sentence comprehension in aphasia.

    Science.gov (United States)

    Sheppard, Shannon M; Love, Tracy; Midgley, Katherine J; Holcomb, Phillip J; Shapiro, Lewis P

    2017-12-01

    Event-related potentials (ERPs) were used to examine how individuals with aphasia and a group of age-matched controls use prosody and thematic fit information in sentences containing temporary syntactic ambiguities. Two groups of individuals with aphasia were investigated: those demonstrating relatively good sentence comprehension whose primary language difficulty is anomia (Individuals with Anomic Aphasia (IWAA)), and those who demonstrate impaired sentence comprehension whose primary diagnosis is Broca's aphasia (Individuals with Broca's Aphasia (IWBA)). The stimuli had early closure syntactic structure and contained a temporary early closure (correct)/late closure (incorrect) syntactic ambiguity. The prosody was manipulated to be either congruent or incongruent, and the temporarily ambiguous NP was also manipulated to be either a plausible or an implausible continuation of the subordinate verb (e.g., "While the band played the song/the beer pleased all the customers."). It was hypothesized that an implausible NP in sentences with incongruent prosody may provide the parser with a plausibility cue that could be used to predict syntactic structure. The results revealed that incongruent prosody paired with a plausibility cue resulted in an N400-P600 complex at the implausible NP (the beer) in both the controls and the IWAAs, yet incongruent prosody without a plausibility cue resulted in an N400-P600 at the critical verb (pleased) only in healthy controls. IWBAs did not show evidence of N400 or P600 effects at the ambiguous NP or critical verb, although they did show evidence of a delayed N400 effect at the sentence-final word in sentences with incongruent prosody. These results suggest that IWAAs have difficulty integrating prosodic cues with underlying syntactic structure when lexical-semantic information is not available to aid their parse. IWBAs have difficulty integrating both prosodic and lexical-semantic cues with syntactic structure, likely due to a

  10. Rhythm in language acquisition.

    Science.gov (United States)

    Langus, Alan; Mehler, Jacques; Nespor, Marina

    2017-10-01

    Spoken language is governed by rhythm. Linguistic rhythm is hierarchical and the rhythmic hierarchy partially mimics the prosodic as well as the morpho-syntactic hierarchy of spoken language. It can thus provide learners with cues about the structure of the language they are acquiring. We identify three universal levels of linguistic rhythm - the segmental level, the level of the metrical feet and the phonological phrase level - and discuss why primary lexical stress is not rhythmic. We survey experimental evidence on rhythm perception in young infants and native speakers of various languages to determine the properties of linguistic rhythm that are present at birth, those that mature during the first year of life and those that are shaped by the linguistic environment of language learners. We conclude with a discussion of the major gaps in current knowledge on linguistic rhythm and highlight areas of interest for future research that are most likely to yield significant insights into the nature, the perception, and the usefulness of linguistic rhythm. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Toward the Ideal Signing Avatar

    Directory of Open Access Journals (Sweden)

    Nicoletta Adamo-Villani

    2016-06-01

Full Text Available The paper discusses ongoing research on the effects of a signing avatar's modeling/rendering features on the perception of sign language animation. It reports a recent study that aimed to determine whether a character's visual style has an effect on how signing animated characters are perceived by viewers. The stimuli of the study were two polygonal characters presenting two different visual styles: stylized and realistic. Each character signed four sentences. Forty-seven participants with experience in American Sign Language (ASL) viewed the animated signing clips in random order via web survey. They (1) identified the signed sentences (if recognizable), (2) rated their legibility, and (3) rated the appeal of the signing avatar. Findings show that while a character's visual style does not have an effect on subjects' perceived legibility of the signs or on sign recognition, it has an effect on subjects' interest in the character. The stylized signing avatar was perceived as more appealing than the realistic one.

  12. Application of Demand-Control Theory to Sign Language Interpreting: Implications for Stress and Interpreter Training.

    Science.gov (United States)

    Dean, Robyn K.; Pollard, Robert Q., Jr.

    2001-01-01

    This article uses the framework of demand-control theory to examine the occupation of sign language interpreting. It discusses the environmental, interpersonal, and intrapersonal demands that impinge on the interpreter's decision latitude and notes the prevalence of cumulative trauma disorders, turnover, and burnout in the interpreting profession.…

  13. Arabic sign language recognition based on HOG descriptor

    Science.gov (United States)

    Ben Jmaa, Ahmed; Mahdi, Walid; Ben Jemaa, Yousra; Ben Hamadou, Abdelmajid

    2017-02-01

We present in this paper a new approach for Arabic sign language (ArSL) alphabet recognition using hand gesture analysis. This analysis consists in extracting histogram of oriented gradient (HOG) features from a hand image and using them to train an SVM model, which is then used to recognize the ArSL alphabet in real time from hand gestures captured by a Microsoft Kinect camera. Our approach involves three steps: (i) hand detection and localization using a Microsoft Kinect camera, (ii) hand segmentation, and (iii) feature extraction and Arabic alphabet recognition. For each input image obtained from the depth sensor, we apply a method based on hand anatomy to segment the hand and eliminate erroneous pixels. This approach is invariant to scale, rotation, and translation of the hand. Experimental results show the effectiveness of the new approach: the proposed ArSL system is able to recognize the ArSL alphabet with an accuracy of 90.12%.
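As a rough illustration of the HOG-descriptor stage described in this abstract, the sketch below computes per-cell orientation histograms with plain NumPy. The cell size, bin count, and toy image are illustrative choices, not the authors' parameters; a production system would typically use a library implementation such as `skimage.feature.hog` and feed the descriptor to an SVM.

```python
import numpy as np

def hog_descriptor(image, cell=8, bins=9):
    """Simplified HOG: per-cell histograms of gradient orientations.

    A sketch of the descriptor family used in the paper; block
    normalization and the exact parameters are omitted/illustrative.
    """
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = image.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            # Magnitude-weighted orientation histogram for this cell.
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-9))  # L2 normalize
    return np.concatenate(feats)

# A 64x64 "hand" stand-in with one vertical edge: (64/8)^2 cells x 9 bins.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
desc = hog_descriptor(img)
print(desc.shape)  # (576,)
```

The resulting fixed-length vector is what a classifier such as an SVM would consume, one vector per segmented hand image.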

  14. Facial and prosodic emotion recognition in social anxiety disorder.

    Science.gov (United States)

    Tseng, Huai-Hsuan; Huang, Yu-Lien; Chen, Jian-Ting; Liang, Kuei-Yu; Lin, Chao-Cheng; Chen, Sue-Huei

    2017-07-01

    Patients with social anxiety disorder (SAD) have a cognitive preference to negatively evaluate emotional information. In particular, the preferential biases in prosodic emotion recognition in SAD have been much less explored. The present study aims to investigate whether SAD patients retain negative evaluation biases across visual and auditory modalities when given sufficient response time to recognise emotions. Thirty-one SAD patients and 31 age- and gender-matched healthy participants completed a culturally suitable non-verbal emotion recognition task and received clinical assessments for social anxiety and depressive symptoms. A repeated measures analysis of variance was conducted to examine group differences in emotion recognition. Compared to healthy participants, SAD patients were significantly less accurate at recognising facial and prosodic emotions, and spent more time on emotion recognition. The differences were mainly driven by the lower accuracy and longer reaction times for recognising fearful emotions in SAD patients. Within the SAD patients, lower accuracy of sad face recognition was associated with higher severity of depressive and social anxiety symptoms, particularly with avoidance symptoms. These findings may represent a cross-modality pattern of avoidance in the later stage of identifying negative emotions in SAD. This pattern may be linked to clinical symptom severity.

  15. Handling movement epenthesis and hand segmentation ambiguities in continuous sign language recognition using nested dynamic programming.

    Science.gov (United States)

    Yang, Ruiduo; Sarkar, Sudeep; Loeding, Barbara

    2010-03-01

    We consider two crucial problems in continuous sign language recognition from unaided video sequences. At the sentence level, we consider the movement epenthesis (me) problem and at the feature level, we consider the problem of hand segmentation and grouping. We construct a framework that can handle both of these problems based on an enhanced, nested version of the dynamic programming approach. To address movement epenthesis, a dynamic programming (DP) process employs a virtual me option that does not need explicit models. We call this the enhanced level building (eLB) algorithm. This formulation also allows the incorporation of grammar models. Nested within this eLB is another DP that handles the problem of selecting among multiple hand candidates. We demonstrate our ideas on four American Sign Language data sets with simple background, with the signer wearing short sleeves, with complex background, and across signers. We compared the performance with Conditional Random Fields (CRF) and Latent Dynamic-CRF-based approaches. The experiments show more than 40 percent improvement over CRF or LDCRF approaches in terms of the frame labeling rate. We show the flexibility of our approach when handling a changing context. We also find a 70 percent improvement in sign recognition rate over the unenhanced DP matching algorithm that does not accommodate the me effect.
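The core idea of a virtual, model-free movement-epenthesis (me) label inside a dynamic-programming search can be caricatured in a few lines. Everything below (the two-sign vocabulary, the flat ME cost, the bigram penalties, the segment costs) is invented for illustration and is far simpler than the paper's enhanced level building algorithm.

```python
# Toy continuous-recognition decoder: per-segment model costs for two signs
# plus a flat-cost virtual "ME" label that needs no explicit model.
SIGNS = ["HELLO", "THANKS", "ME"]
ME_COST = 1.5
BIGRAM = {("HELLO", "ME"): 0.0, ("ME", "THANKS"): 0.0}  # favored transitions

def seg_cost(costs, label):
    # The ME option carries a fixed penalty instead of a learned model score.
    return ME_COST if label == "ME" else costs[label]

def decode(segment_costs):
    """Viterbi-style DP over segments; unlisted bigrams get penalty 1.0."""
    best = {lab: (seg_cost(segment_costs[0], lab), [lab]) for lab in SIGNS}
    for costs in segment_costs[1:]:
        nxt = {}
        for lab in SIGNS:
            nxt[lab] = min(
                (best[p][0] + BIGRAM.get((p, lab), 1.0) + seg_cost(costs, lab),
                 best[p][1] + [lab])
                for p in SIGNS)
        best = nxt
    return min(best.values())

cost, path = decode([
    {"HELLO": 0.2, "THANKS": 3.0},   # clearly HELLO
    {"HELLO": 2.5, "THANKS": 2.8},   # transition: no sign model fits well
    {"HELLO": 3.0, "THANKS": 0.3},   # clearly THANKS
])
print(path)  # expect ['HELLO', 'ME', 'THANKS']
```

The middle segment is absorbed by the ME label because both explicit sign models fit it worse than the flat penalty, which is the intuition behind handling movement epenthesis without explicit models.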

  16. Prosodic Abilities of Spanish-Speaking Adolescents and Adults with Williams Syndrome

    Science.gov (United States)

    Martinez-Castilla, Pastora; Sotillo, Maria; Campos, Ruth

    2011-01-01

In spite of the relevant role of prosody in communication, and in contrast with other linguistic components, there is a paucity of research in this field for Williams syndrome (WS). Therefore, this study performed a systematic assessment of prosodic abilities in WS. The Spanish version of the Profiling Elements of Prosody in Speech-Communication…

  17. Communication Access for Deaf People in Healthcare Settings: Understanding the Work of American Sign Language Interpreters.

    Science.gov (United States)

    Olson, Andrea M; Swabey, Laurie

Despite federal laws that mandate equal access and communication in all healthcare settings for deaf people, consistent provision of quality interpreting in healthcare settings is still not a reality, as recognized by deaf people and American Sign Language (ASL)-English interpreters. The purpose of this study was to better understand the work of ASL interpreters employed in healthcare settings, which can then inform the training and credentialing of interpreters, with the ultimate aim of improving the quality of healthcare and communication access for deaf people. Based on a job analysis, researchers designed an online survey with 167 task statements representing 44 categories. American Sign Language interpreters (N = 339) rated the importance of, and frequency with which they performed, each of the 167 tasks. Categories with the highest average importance ratings included language and interpreting, situation assessment, ethical and professional decision making, managing the discourse, and monitoring, managing, and/or coordinating appointments. Categories with the highest average frequency ratings included the following: dress appropriately, adapt to a variety of physical settings and locations, adapt to working with a variety of providers in a variety of roles, deal with uncertain and unpredictable work situations, and demonstrate cultural adaptability. To achieve health equity for the deaf community, the training and credentialing of interpreters need to be systematically addressed.

  18. Balance Toward Language Mastery

    Directory of Open Access Journals (Sweden)

    Virginia R. Heslinga

    2017-01-01

Full Text Available Problems in attaining language mastery with students from diverse language backgrounds and levels of ability confront educators around the world. Experiments, research, and experience show positive effects of adding sign language to communication methods in pre-school and K-12 education. Augmentative, alternative, interactive, accommodating, and enriching strategies using sign language aid learners in balancing the skills needed for mastery of one language or multiple languages. Theories of learning that embrace play, drama, motion, repetition, socializing, and self-efficacy connect to the options for using sign language with learners in inclusive and mainstream classes. The methodical use of sign language by this researcher-educator over two and a half decades showed that signing builds thinking skills, adds enjoyment, stimulates communication, expands comprehension, increases vocabulary acquisition, encourages collaboration, and builds appreciation for cultural diversity.

  19. Prosody-Syntax Integration in a Second Language: Contrasting Event-Related Potentials from German and Chinese Learners of English Using Linear Mixed Effect Models

    Science.gov (United States)

    Nickels, Stefanie; Steinhauer, Karsten

    2018-01-01

    The role of prosodic information in sentence processing is not usually addressed in second language (L2) instruction, and neurocognitive studies on prosody-syntax interactions are rare. Here we compare event-related potentials (ERP) of Chinese and German learners of English L2 to those of native English speakers and show how first language (L1)…

  20. British Sign Name Customs

    Science.gov (United States)

    Day, Linda; Sutton-Spence, Rachel

    2010-01-01

    Research presented here describes the sign names and the customs of name allocation within the British Deaf community. While some aspects of British Sign Language sign names and British Deaf naming customs differ from those in most Western societies, there are many similarities. There are also similarities with other societies outside the more…

  1. A preliminary look at negative constructions in South African Sign ...

    African Journals Online (AJOL)

    How negation is expressed by means of manual and/or non-manual markers has been described in a wide range of sign languages. This work has suggested a split between sign languages requiring a manual negative element in negative clauses (manual dominant sign languages) and those where a non-manual marker ...

  2. Prosody as a Tool for Assessing Reading Fluency of Adult ESL Students

    Directory of Open Access Journals (Sweden)

    Seftirina Evina Sinambela

    2017-12-01

Full Text Available The prosodic features of a read-aloud assignment have been associated with students' decoding skill. The goal of the present study is to determine the reliability of prosody for assessing the reading fluency of adult ESL students in the Indonesian context. The participants were all Indonesian natives: undergraduate students, adult females and males who have learned English in school (at the very least twice a week) for more than 12 years. Text reading prosody was assessed with a read-aloud task; the students' speech was recorded and rated using the Multidimensional Fluency Scale, while text comprehension was assessed with a standardized test. The current study found that prosody is a reliable indicator of reading fluency and also of reading comprehension: a student who did not read the text prosodically (with appropriate expression) also failed to comprehend it. The study further revealed that struggling readers also had low comprehension of spoken texts in listening. The ESL students' most common obstacles to acquiring prosodic reading skill are low exposure to the target language and the lack of a good model of prosodic reading to imitate.

  3. American Sign Language Alphabet Recognition Using a Neuromorphic Sensor and an Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Miguel Rivera-Acosta

    2017-09-01

Full Text Available This paper reports the design and analysis of an American Sign Language (ASL) alphabet translation system implemented in hardware using a Field-Programmable Gate Array. The system process consists of three stages, the first being communication with the neuromorphic camera (also called a Dynamic Vision Sensor, DVS) using the Universal Serial Bus protocol. The feature extraction of the events generated by the DVS is the second part of the process, consisting of the digital image processing algorithms developed in software, which aim to reduce redundant information and prepare the data for the third stage. The last stage of the system process is the classification of the ASL alphabet, achieved with a single artificial neural network implemented in digital hardware for higher speed. The overall result is the development of a classification system based on the contours of the ASL signs, fully implemented in a reconfigurable device. The experimental results consist of a comparative analysis of the recognition rate among the alphabet signs using the neuromorphic camera, in order to verify the proper operation of the digital image processing algorithms. In the experiments performed with 720 samples of 24 signs, a recognition accuracy of 79.58% was obtained.

  4. A contrastive analysis of the sound structure of Sotho-Tswana for ...

    African Journals Online (AJOL)

    The paper addresses second language teaching of phonetic, phonological and prosodic features in the Sotho-Tswana languages (Southern Bantu) from a linguistic perspective. It motivates the inclusion of phonetic, phonological and prosodic background knowledge in second language teaching, and singles out potential ...

  5. Using American sign language interpreters to facilitate research among deaf adults: lessons learned.

    Science.gov (United States)

    Sheppard, Kate

    2011-04-01

Health care providers commonly discuss depressive symptoms with clients, enabling earlier intervention. Such discussions rarely occur between providers and Deaf clients. Most culturally Deaf adults experience early-onset hearing loss, self-identify as part of a unique culture, and communicate in the visual language of American Sign Language (ASL). Communication barriers abound, and depression screening instruments may be unreliable. The objective was to train and use ASL interpreters for a qualitative study describing depressive symptoms among Deaf adults. Training included research versus community interpreting. During data collection, interpreters translated to and from voiced English and ASL. Training eliminated potential problems during data collection. Unexpected issues included participants asking for "my interpreter" and worrying about confidentiality or friendship in a small community. Lessons learned included the value of careful training of interpreters prior to initiating data collection, including resolution of possible role conflicts and ensuring conceptual equivalence in real-time interpreting.

  6. Phonological abilities in literacy-impaired children: Brain potentials reveal deficient phoneme discrimination, but intact prosodic processing

    Directory of Open Access Journals (Sweden)

    Claudia Männel

    2017-02-01

Full Text Available Intact phonological processing is crucial for successful literacy acquisition. While individuals with difficulties in reading and spelling (i.e., developmental dyslexia) are known to experience deficient phoneme discrimination (i.e., segmental phonology), findings concerning their prosodic processing (i.e., suprasegmental phonology) are controversial. Because there are no behavior-independent studies on the underlying neural correlates of prosodic processing in dyslexia, these controversial findings might be explained by different task demands. To provide an objective, behavior-independent picture of segmental and suprasegmental phonological processing in impaired literacy acquisition, we investigated event-related brain potentials during passive listening in typically spelling and poor-spelling German school children. For segmental phonology, we analyzed the Mismatch Negativity (MMN) during vowel length discrimination, capturing automatic auditory deviancy detection in repetitive contexts. For suprasegmental phonology, we analyzed the Closure Positive Shift (CPS) that automatically occurs in response to prosodic boundaries. Our results revealed spelling-group differences for the MMN, but not for the CPS, indicating deficient segmental but intact suprasegmental phonological processing in poor spellers. The present findings point towards a differential role of segmental and suprasegmental phonology in literacy disorders and call for interventions that invigorate impaired literacy by utilizing intact prosody in addition to training deficient phonemic awareness.

  7. The signer and the sign: cortical correlates of person identity and language processing from point-light displays.

    Science.gov (United States)

    Campbell, Ruth; Capek, Cheryl M; Gazarian, Karine; MacSweeney, Mairéad; Woll, Bencie; David, Anthony S; McGuire, Philip K; Brammer, Michael J

    2011-09-01

    In this study, the first to explore the cortical correlates of signed language (SL) processing under point-light display conditions, the observer identified either a signer or a lexical sign from a display in which different signers were seen producing a number of different individual signs. Many of the regions activated by point-light under these conditions replicated those previously reported for full-image displays, including regions within the inferior temporal cortex that are specialised for face and body-part identification, although such body parts were invisible in the display. Right frontal regions were also recruited - a pattern not usually seen in full-image SL processing. This activation may reflect the recruitment of information about person identity from the reduced display. A direct comparison of identify-signer and identify-sign conditions showed these tasks relied to a different extent on the posterior inferior regions. Signer identification elicited greater activation than sign identification in (bilateral) inferior temporal gyri (BA 37/19), fusiform gyri (BA 37), middle and posterior portions of the middle temporal gyri (BAs 37 and 19), and superior temporal gyri (BA 22 and 42). Right inferior frontal cortex was a further focus of differential activation (signer>sign). These findings suggest that the neural systems supporting point-light displays for the processing of SL rely on a cortical network including areas of the inferior temporal cortex specialized for face and body identification. While this might be predicted from other studies of whole body point-light actions (Vaina, Solomon, Chowdhury, Sinha, & Belliveau, 2001) it is not predicted from the perspective of spoken language processing, where voice characteristics and speech content recruit distinct cortical regions (Stevens, 2004) in addition to a common network. In this respect, our findings contrast with studies of voice/speech recognition (Von Kriegstein, Kleinschmidt, Sterzer

  8. The Effectiveness of the Game-Based Learning System for the Improvement of American Sign Language Using Kinect

    Science.gov (United States)

    Kamnardsiri, Teerawat; Hongsit, Ler-on; Khuwuthyakorn, Pattaraporn; Wongta, Noppon

    2017-01-01

    This paper investigated students' achievement for learning American Sign Language (ASL), using two different methods. There were two groups of samples. The first experimental group (Group A) was the game-based learning for ASL, using Kinect. The second control learning group (Group B) was the traditional face-to-face learning method, generally…

  9. Visualizing Patient Journals by Combining Vital Signs Monitoring and Natural Language Processing

    DEFF Research Database (Denmark)

    Vilic, Adnan; Petersen, John Asger; Hoppe, Karsten

    2016-01-01

This paper presents a data-driven approach to graphically presenting text-based patient journals while still maintaining all textual information. The system first creates a timeline representation of a patient's physiological condition during an admission, which is assessed by electronically monitoring vital signs and then combining these into Early Warning Scores (EWS). Hereafter, techniques from Natural Language Processing (NLP) are applied to the existing patient journal to extract all entries. Finally, the two methods are combined into an interactive timeline featuring the ability to see drastic changes in the patient's health, thereby enabling staff to see where in the journal critical events have taken place.
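For readers unfamiliar with Early Warning Scores, the aggregation works roughly as below: each vital sign is mapped to a sub-score by band, and the sub-scores are summed. The bands here are illustrative values loosely patterned on published EWS charts, not the chart used in the paper.

```python
# Illustrative Early Warning Score aggregation (bands are examples only).
def band(value, bands):
    """bands: list of (upper_bound_exclusive, score); last bound is inf."""
    for upper, score in bands:
        if value < upper:
            return score
    return bands[-1][1]

def ews(resp_rate, heart_rate, systolic_bp, temp_c):
    score = 0
    score += band(resp_rate,   [(9, 3), (12, 1), (21, 0), (25, 2), (float("inf"), 3)])
    score += band(heart_rate,  [(41, 3), (51, 1), (91, 0), (111, 1), (131, 2), (float("inf"), 3)])
    score += band(systolic_bp, [(91, 3), (101, 2), (111, 1), (220, 0), (float("inf"), 3)])
    score += band(temp_c,      [(35.1, 3), (36.1, 1), (38.1, 0), (39.1, 1), (float("inf"), 2)])
    return score

print(ews(resp_rate=18, heart_rate=80, systolic_bp=120, temp_c=37.0))  # 0 (normal)
print(ews(resp_rate=26, heart_rate=120, systolic_bp=95, temp_c=38.5))  # 8 (deteriorating)
```

Plotting this score per measurement time yields exactly the kind of admission timeline onto which NLP-extracted journal entries can be anchored.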

  10. Limits of visual communication: the effect of signal-to-noise ratio on the intelligibility of American Sign Language.

    Science.gov (United States)

    Pavel, M; Sperling, G; Riedl, T; Vanderbeek, A

    1987-12-01

    To determine the limits of human observers' ability to identify visually presented American Sign Language (ASL), the contrast s and the amount of additive noise n in dynamic ASL images were varied independently. Contrast was tested over a 4:1 range; the rms signal-to-noise ratios (s/n) investigated were s/n = 1/4, 1/2, 1, and infinity (which is used to designate the original, uncontaminated images). Fourteen deaf subjects were tested with an intelligibility test composed of 85 isolated ASL signs, each 2-3 sec in length. For these ASL signs (64 x 96 pixels, 30 frames/sec), subjects' performance asymptotes between s/n = 0.5 and 1.0; further increases in s/n do not improve intelligibility. Intelligibility was found to depend only on s/n and not on contrast. A formulation in terms of logistic functions was proposed to derive intelligibility of ASL signs from s/n, sign familiarity, and sign difficulty. Familiarity (ignorance) is represented by additive signal-correlated noise; it represents the likelihood of a subject's knowing a particular ASL sign, and it adds to s/n. Difficulty is represented by a multiplicative difficulty coefficient; it represents the perceptual vulnerability of an ASL sign to noise and it adds to log(s/n).
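The logistic formulation described in this abstract can be sketched as follows. Per the description, familiarity is modeled as a term that adds to s/n, and difficulty as a multiplicative coefficient that shifts log(s/n); the slope and midpoint constants below are invented for illustration, since the fitted parameters are not reproduced in the abstract.

```python
import math

def intelligibility(sn, familiarity=0.0, difficulty=1.0, k=4.0, x0=-0.6):
    """Predicted probability of identifying a sign.

    sn          rms signal-to-noise ratio of the display
    familiarity signal-correlated 'noise' term that adds to s/n
    difficulty  multiplicative coefficient (shifts log(s/n))
    k, x0       illustrative logistic slope and midpoint
    """
    x = math.log10(difficulty * (sn + familiarity))
    return 1.0 / (1.0 + math.exp(-k * (x - x0)))

# Intelligibility rises with s/n and flattens toward 1 at high s/n.
for sn in (0.25, 0.5, 1.0):
    print(round(intelligibility(sn), 3))
```

With these toy constants the curve saturates as s/n grows past 1, mirroring the reported asymptote in performance above s/n = 0.5-1.0.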

  11. Deaf New Zealand Sign Language users' access to healthcare.

    Science.gov (United States)

    Witko, Joanne; Boyles, Pauline; Smiler, Kirsten; McKee, Rachel

    2017-12-01

The research described was undertaken as part of a Sub-Regional Disability Strategy 2017-2022 across the Wairarapa, Hutt Valley and Capital and Coast District Health Boards (DHBs). The aim was to investigate deaf New Zealand Sign Language (NZSL) users' quality of access to health services. Findings have formed the basis for developing an 'NZSL plan' for DHBs in the Wellington sub-region. Qualitative data was collected from 56 deaf participants and family members about their experiences of healthcare services via focus groups, individual interviews and an online survey, and was thematically analysed. Contextual perspective was gained from 57 healthcare professionals at five meetings. Two professionals were interviewed, and 65 staff responded to an online survey. A deaf steering group co-designed the framework and methods, and validated findings. Key issues reported across the health system include: inconsistent interpreter provision; lack of informed consent for treatment via communication in NZSL; limited access to general health information in NZSL; and the reduced ability of deaf patients to understand and comply with treatment options. This problematic communication with NZSL users echoes international evidence and other documented local evidence for patients with limited English proficiency. Deaf NZSL users face multiple barriers to equitable healthcare, stemming from linguistic and educational factors and inaccessible service delivery. These need to be addressed through policy and training for healthcare personnel that enable effective systemic responses to NZSL users. Deaf participants emphasise that recognition of their identity as members of a language community is central to improving their healthcare experiences.

  12. Preservice Teacher and Interpreter American Sign Language Abilities: Self-Evaluations and Evaluations of Deaf Students' Narrative Renditions

    Science.gov (United States)

    Beal-Alvarez, Jennifer S.; Scheetz, Nanci A.

    2015-01-01

    In deaf education, the sign language skills of teacher and interpreter candidates are infrequently assessed; when they are, formal measures are commonly used upon preparation program completion, as opposed to informal measures related to instructional tasks. Using an informal picture storybook task, the authors investigated the receptive and…

  13. Do Persian Native Speakers Prosodically Mark Wh-in-situ Questions?

    Science.gov (United States)

    Shiamizadeh, Zohreh; Caspers, Johanneke; Schiller, Niels O

    2018-02-01

    It has been shown that prosody contributes to the contrast between declarativity and interrogativity, notably in interrogative utterances lacking lexico-syntactic features of interrogativity. Accordingly, it may be proposed that prosody plays a role in marking wh-in-situ questions in which the interrogativity feature (the wh-phrase) does not move to sentence-initial position, as, for example, in Persian. This paper examines whether prosody distinguishes Persian wh-in-situ questions from declaratives in the absence of the interrogativity feature in the sentence-initial position. To answer this question, a production experiment was designed in which wh-questions and declaratives were elicited from Persian native speakers. On the basis of the results of previous studies, we hypothesize that prosodic features mark wh-in-situ questions as opposed to declaratives at both the local (pre- and post-wh part) and global level (complete sentence). The results of the current study confirm our hypothesis that prosodic correlates mark the pre-wh part as well as the complete sentence in wh-in-situ questions. The results support theoretical concepts such as the frequency code, the universal dichotomous association between relaxation and declarativity on the one hand and tension and interrogativity on the other, the relation between prosody and pragmatics, and the relation between prosody and encoding and decoding of sentence type.

  14. Parametric Representation of the Speaker's Lips for Multimodal Sign Language and Speech Recognition

    Science.gov (United States)

    Ryumin, D.; Karpov, A. A.

    2017-05-01

In this article, we propose a new method for parametric representation of the human lips region. The functional diagram of the method is described, and implementation details with an explanation of its key stages and features are given. The results of automatic detection of the regions of interest are illustrated. The speed of the method on several computers with different performance levels is reported. This universal method allows applying a parametric representation of the speaker's lips to the tasks of biometrics, computer vision, machine learning, and automatic recognition of faces, elements of sign languages, and audio-visual speech, including lip-reading.

  15. Acquisition of stress and pitch accent in English-Spanish bilingual children

    Science.gov (United States)

    Kim, Sahyang; Andruski, Jean; Nathan, Geoffrey S.; Casielles, Eugenia; Work, Richard

    2005-09-01

    Although understanding of prosodic development is considered crucial for understanding of language acquisition in general, few studies have focused on how children develop native-like prosody in their speech production. This study will examine the acquisition of lexical stress and postlexical pitch accent in two English-Spanish bilingual children. Prosodic characteristics of English and Spanish are different in terms of frequent stress patterns (trochaic versus penultimate), phonetic realization of stress (reduced unstressed vowel versus full unstressed vowel), and frequent pitch accent types (H* versus L*+H), among others. Thus, English-Spanish bilingual children's prosodic development may provide evidence of their awareness of language differences relatively early during language development, and illustrate the influence of markedness or input frequency in prosodic acquisition. For this study, recordings from the children's one-word stage are used. Durations of stressed and unstressed syllables and F0 peak alignment are measured, and pitch accent types in different accentual positions (nuclear versus prenuclear) are transcribed using American English ToBI and Spanish ToBI. Prosodic development is compared across ages within each language and across languages at each age. Furthermore, the bilingual children's productions are compared with monolingual English and Spanish parents' productions.

  16. Non parametric, self organizing, scalable modeling of spatiotemporal inputs: the sign language paradigm.

    Science.gov (United States)

    Caridakis, G; Karpouzis, K; Drosopoulos, A; Kollias, S

    2012-12-01

Modeling and recognizing spatiotemporal, as opposed to static, input is a challenging task since it incorporates input dynamics as part of the problem. The vast majority of existing methods tackle the problem as an extension of the static counterpart, using dynamics, such as input derivatives, at the feature level and adopting artificial intelligence and machine learning techniques originally designed for solving problems that do not specifically address the temporal aspect. The proposed approach deals with the temporal and spatial aspects of the spatiotemporal domain in a discriminative as well as coupled manner. Self Organizing Maps (SOM) model the spatial aspect of the problem and Markov models capture its temporal counterpart. Incorporation of adjacency, both in training and classification, enhances the overall architecture with robustness and adaptability. The proposed scheme is validated both theoretically, through an error propagation study, and experimentally, on the recognition of individual signs performed by different, native Greek Sign Language users. Results illustrate the architecture's superiority when compared to Hidden Markov Model techniques and variations, both in terms of classification performance and computational cost. Copyright © 2012 Elsevier Ltd. All rights reserved.
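A toy rendering of the two-part architecture (a SOM quantizing the spatial input, Markov transition statistics capturing the temporal side) might look like this. The grid size, learning rate, and hand trajectory are invented, and the SOM update is simplified to winner-only (a full SOM also updates the winner's neighbors with a decaying neighborhood function).

```python
import numpy as np

rng = np.random.default_rng(0)
grid = rng.random((4, 2))  # 4-node SOM over 2-D hand positions (x, y)

def bmu(x):
    """Index of the best matching unit (nearest SOM node)."""
    return int(np.argmin(((grid - x) ** 2).sum(axis=1)))

def train(samples, lr=0.3, epochs=20):
    for _ in range(epochs):
        for x in samples:
            w = bmu(x)
            grid[w] += lr * (x - grid[w])  # pull winner toward the sample

# Toy trajectory of a hand moving left to right at constant height.
traj = np.array([[0.1, 0.5], [0.4, 0.5], [0.7, 0.5], [0.9, 0.5]])
train(traj)
states = [bmu(x) for x in traj]  # spatial quantization of the trajectory

# Transition counts between consecutive BMUs: the Markov-model statistics.
T = np.zeros((4, 4))
for a, b in zip(states, states[1:]):
    T[a, b] += 1
print(states, int(T.sum()))  # 3 transitions for a 4-sample trajectory
```

Recognition would then score a candidate sign by how well its observed BMU sequence fits the transition statistics learned for that sign.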

  17. Tell-Tale Signs: Reflection towards the Acquisition of Academic ...

    African Journals Online (AJOL)

    Tell-Tale Signs: Reflection towards the Acquisition of Academic Discourses as Second Languages. ... Stellenbosch Papers in Linguistics ... After enrolling in a sign language course, we – lecturers teaching academic discourses – decided to explore this phenomenon and determine the implications for pedagogical practice.

  18. Type of iconicity matters in the vocabulary development of signing children

    NARCIS (Netherlands)

    Ortega, G.; Sümer, B.; Özyürek, A.

    2017-01-01

    Recent research on signed as well as spoken language shows that the iconic features of the target language might play a role in language development. Here, we ask further whether different types of iconic depictions modulate children's preferences for certain types of sign-referent links during

  19. Quantifying the effect of disruptions to temporal coherence on the intelligibility of compressed American Sign Language video

    Science.gov (United States)

    Ciaramello, Frank M.; Hemami, Sheila S.

    2009-02-01

    Communication of American Sign Language (ASL) over mobile phones would be very beneficial to the Deaf community. ASL video encoded to achieve the rates provided by current cellular networks must be heavily compressed, and appropriate assessment techniques are required to analyze the intelligibility of the compressed video. As an extension to a purely spatial measure of intelligibility, this paper quantifies the effect of temporal compression artifacts on sign language intelligibility. These artifacts can be the result of motion-compensation errors that distract the observer or of frame rate reductions. They reduce the perception of smooth motion and disrupt the temporal coherence of the video. Motion-compensation errors that affect temporal coherence are identified by measuring the block-level correlation between co-located macroblocks in adjacent frames. The impact of frame rate reductions was quantified through experimental testing. A subjective study was performed in which fluent ASL participants rated the intelligibility of sequences encoded at five different frame rates and with three different levels of distortion. The subjective data are used to parameterize an objective intelligibility measure which is highly correlated with subjective ratings at multiple frame rates.
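
    The block-level coherence measure described above can be sketched as follows. The block size, the normalized-correlation formula, and the zero-variance fallback are assumptions for illustration, not the authors' exact parameterization.

```python
import numpy as np

def block_correlations(prev, curr, block=16):
    """Normalized correlation between co-located blocks of adjacent frames.

    Low values flag blocks whose content changed abruptly, e.g. through
    motion-compensation errors that disrupt temporal coherence."""
    h, w = prev.shape
    corrs = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            a = prev[y:y + block, x:x + block].ravel().astype(float)
            b = curr[y:y + block, x:x + block].ravel().astype(float)
            a -= a.mean()
            b -= b.mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            # Flat (zero-variance) blocks have undefined correlation; report 0.
            corrs.append(float(a @ b / denom) if denom > 0 else 0.0)
    return np.array(corrs)

# A frame compared with itself is perfectly coherent; replacing one
# macroblock with unrelated noise drops that block's correlation sharply.
rng = np.random.default_rng(1)
frame = rng.random((48, 48))
corrupted = frame.copy()
corrupted[16:32, 16:32] = rng.random((16, 16))
```

    Thresholding the per-block correlations then yields a map of where temporal coherence broke down in the encoded video.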

  20. 'And' or 'or': General use coordination in ASL

    Directory of Open Access Journals (Sweden)

    Kathryn Davidson

    2013-08-01

    Full Text Available In American Sign Language (ASL), conjunction (‘and’) and disjunction (‘or’) are often conveyed by the same general use coordinator (transcribed as “COORD”). So the sequence of signs MARY WANT TEA COORD COFFEE can be interpreted as ‘Mary wants tea or coffee’ or ‘Mary wants tea and coffee’ depending on contextual, prosodic, or other lexical cues. This paper takes the first steps in describing the syntax and semantics of two general use coordinators in ASL, finding that they have a similar syntactic distribution to the English coordinators ‘and’ and ‘or’. Semantically, arguments are made against an ambiguity approach to accounting for the conjunctive and disjunctive readings; instead, I propose a Hamblin-style alternative semantics where the disjunctive and conjunctive force comes from external quantification over a set of alternatives. The pragmatic consequences of using only a prosodic distinction between disjunction and conjunction are examined via a felicity judgement study of scalar implicatures. Results indicate decreased scalar implicatures when COORD is used as disjunction, supporting the semantic analysis and suggesting that the contrast of lexical items in the scale plays an important role in its pragmatics. Extensions to other languages with potential general use coordination are discussed. http://dx.doi.org/10.3765/sp.6.4
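
    The core idea, that COORD contributes only a set of alternatives while conjunctive or disjunctive force comes from an external quantifier, can be mimicked in a few lines. This toy model (the predicates, context, and names are invented for illustration) is not Davidson's formalism, just its shape:

```python
def coord(*alternatives):
    """General use coordinator: denotes a Hamblin-style set of alternatives,
    carrying no conjunctive or disjunctive force of its own."""
    return frozenset(alternatives)

def exists(alts, pred):
    """External existential closure over alternatives -> disjunctive ('or') reading."""
    return any(pred(a) for a in alts)

def forall(alts, pred):
    """External universal closure over alternatives -> conjunctive ('and') reading."""
    return all(pred(a) for a in alts)

# MARY WANT TEA COORD COFFEE: one denotation, two closures.
mary_wants = {"tea"}                      # toy context: Mary wants only tea
alts = coord("tea", "coffee")
or_reading = exists(alts, lambda d: d in mary_wants)    # 'Mary wants tea or coffee'
and_reading = forall(alts, lambda d: d in mary_wants)   # 'Mary wants tea and coffee'
```

    Because the coordinator's denotation is the same in both cases, the two readings differ only in which closure applies, matching the claim that no lexical ambiguity is needed.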

  1. Experience with a second language affects the use of fundamental frequency in speech segmentation

    Science.gov (United States)

    Broersma, Mirjam; Cho, Taehong; Kim, Sahyang; Martínez-García, Maria Teresa; Connell, Katrina

    2017-01-01

    This study investigates whether listeners’ experience with a second language learned later in life affects their use of fundamental frequency (F0) as a cue to word boundaries in the segmentation of an artificial language (AL), particularly when the cues to word boundaries conflict between the first language (L1) and second language (L2). F0 signals phrase-final (and thus word-final) boundaries in French but word-initial boundaries in English. Participants were functionally monolingual French listeners, functionally monolingual English listeners, bilingual L1-English L2-French listeners, and bilingual L1-French L2-English listeners. They completed the AL-segmentation task with F0 signaling word-final boundaries or without prosodic cues to word boundaries (monolingual groups only). After listening to the AL, participants completed a forced-choice word-identification task in which the foils were either non-words or part-words. The results show that the monolingual French listeners, but not the monolingual English listeners, performed better in the presence of F0 cues than in the absence of such cues. Moreover, bilingual status modulated listeners’ use of F0 cues to word-final boundaries, with bilingual French listeners performing less accurately than monolingual French listeners on both word types but with bilingual English listeners performing more accurately than monolingual English listeners on non-words. These findings not only confirm that speech segmentation is modulated by the L1, but also newly demonstrate that listeners’ experience with the L2 (French or English) affects their use of F0 cues in speech segmentation. This suggests that listeners’ use of prosodic cues to word boundaries is adaptive and non-selective, and can change as a function of language experience. PMID:28738093

  2. Schooling in American Sign Language: A Paradigm Shift from a Deficit Model to a Bilingual Model in Deaf Education

    Science.gov (United States)

    Humphries, Tom

    2013-01-01

    Deaf people have long held the belief that American Sign Language (ASL) plays a significant role in the academic development of deaf children. Despite this, the education of deaf children has historically been exclusive of ASL and constructed as an English-only, deficit-based pedagogy. Newer research, however, finds a strong correlation between…

  3. Reading books with young deaf children: strategies for mediating between American Sign Language and English.

    Science.gov (United States)

    Berke, Michele

    2013-01-01

    Research on shared reading has shown positive results on children's literacy development in general and for deaf children specifically; however, reading techniques might differ between these two populations. Families with deaf children, especially those with deaf parents, often capitalize on their children's visual attributes rather than primarily auditory cues. These techniques are believed to provide a foundation for their deaf children's literacy skills. This study examined 10 deaf mother/deaf child dyads with children between 3 and 5 years of age. Dyads were videotaped in their homes on at least two occasions reading books that were provided by the researcher. Descriptive analysis showed specifically how deaf mothers mediate between the two languages, American Sign Language (ASL) and English, while reading. These techniques can be replicated and taught to all parents of deaf children so that they can engage in more effective shared reading activities. Research has shown that shared reading, or the interaction of a parent and child with a book, is an effective way to promote language and literacy, vocabulary, grammatical knowledge, and metalinguistic awareness (Snow, 1983), making it critical for educators to promote shared reading activities at home between parent and child. Not all parents read to their children in the same way. For example, parents of deaf children may present the information in the book differently due to the fact that signed languages are visual rather than spoken. In this vein, we can learn more about what specific connections deaf parents make to the English print. Exploring strategies deaf mothers may use to link the English print through the use of ASL will provide educators with additional tools when working with all parents of deaf children. This article will include a review of the literature on the benefits of shared reading activities for all children, the relationship between ASL and English skill development, and the techniques

  4. Language choice in bimodal bilingual development

    Directory of Open Access Journals (Sweden)

    Diane eLillo-Martin

    2014-10-01

    Full Text Available Bilingual children develop sensitivity to the language used by their interlocutors at an early age, reflected in differential use of each language by the child depending on their interlocutor. Factors such as discourse context and relative language dominance in the community may mediate the degree of language differentiation in preschool age children. Bimodal bilingual children, acquiring both a sign language and a spoken language, have an even more complex situation. Their Deaf parents vary considerably in access to the spoken language. Furthermore, in addition to code-mixing and code-switching, they use code-blending – expressions in both speech and sign simultaneously – an option uniquely available to bimodal bilinguals. Code-blending is analogous to code-switching sociolinguistically, but is also a way to communicate without suppressing one language. For adult bimodal bilinguals, complete suppression of the non-selected language is cognitively demanding. We expect that bimodal bilingual children also find suppression difficult, and use blending rather than suppression in some contexts. We also expect relative community language dominance to be a factor in children’s language choices. This study analyzes longitudinal spontaneous production data from four bimodal bilingual children and their Deaf and hearing interlocutors. Even at the earliest observations, the children produced more signed utterances with Deaf interlocutors and more speech with hearing interlocutors. However, while three of the four children produced >75% speech alone in speech target sessions, they produced <25% sign alone in sign target sessions. All four produced bimodal utterances in both, but more frequently in the sign sessions, potentially because they find suppression of the dominant language more difficult. Our results indicate that these children are sensitive to the language used by their interlocutors, while showing considerable influence from the dominant

  5. A Comparison of Discrete Trial Teaching with and without Gestures/Signs in Teaching Receptive Language Skills to Children with Autism

    Science.gov (United States)

    Kurt, Onur

    2011-01-01

    The present study was designed to compare the effectiveness and efficiency of two discrete trial teaching procedures for teaching receptive language skills to children with autism. While verbal instructions were delivered alone during the first procedure, all verbal instructions were combined with simple gestures and/or signs during the second…

  6. Comparative analysis of rhythmic abilities in children with and without speech and language disorders

    OpenAIRE

    Frangež, Mateja

    2014-01-01

    Music and speech consist of sound, acoustic waves, and contain structural patterns of pitch, duration and intensity. Processing of both music and prosodic patterns takes place in some shared neural areas, but there are also significant differences in the representation of speech and music in the brain. Music and language are markedly different in their form, function and use of syntactic structures. The complex structure and functioning of the brain enable music to expand its impact on other areas ...

  7. Reproducing American Sign Language Sentences: Cognitive Scaffolding in Working Memory

    Directory of Open Access Journals (Sweden)

    Ted eSupalla

    2014-08-01

    Full Text Available The American Sign Language Sentence Reproduction Test (ASL-SRT) requires the precise reproduction of a series of ASL sentences increasing in complexity and length. Error analyses of such tasks provide insight into working memory and scaffolding processes. Data were collected from three groups expected to differ in fluency: deaf children, deaf adults and hearing adults, all users of ASL. Quantitative (correct/incorrect recall) and qualitative error analyses were performed. Percent correct on the reproduction task supports its sensitivity to fluency, as test performance clearly differed across the three groups studied. A linguistic analysis of errors further documented differing strategies and biases across groups. Subjects’ recall projected the affordances and constraints of deep linguistic representations to differing degrees, with subjects resorting to alternate processing strategies in the absence of linguistic knowledge. A qualitative error analysis allows us to capture generalizations about the relationship between error patterns and the cognitive scaffolding which governs the sentence reproduction process. Highly fluent signers and less-fluent signers share common chokepoints on particular words in sentences. However, they diverge in heuristic strategy. Fluent signers, when they make an error, tend to preserve semantic details while altering morpho-syntactic domains. They produce syntactically correct sentences with meaning equivalent to the to-be-reproduced one, but these are not verbatim reproductions of the original sentence. In contrast, less-fluent signers tend to use a more linear strategy, preserving lexical status and word ordering while omitting local inflections, and occasionally resorting to visuo-motoric imitation. Thus, whereas fluent signers readily use top-down scaffolding in their working memory, less fluent signers fail to do so. Implications for current models of working memory across spoken and signed modalities are

  8. Cultural transmission through infant signs: Objects and actions in U.S. and Taiwan.

    Science.gov (United States)

    Wang, Wen; Vallotton, Claire

    2016-08-01

    Infant signs are intentionally taught/learned symbolic gestures which can be used to represent objects, actions, requests, and mental states. Through infant signs, parents and infants begin to communicate specific concepts earlier than children's first spoken words. This study examines whether cultural differences in language are reflected in children's and parents' use of infant signs. Parents speaking East Asian languages with their children utilize verbs more often than do English-speaking mothers; and compared to their English-learning peers, Chinese children are more likely to learn verbs as they first acquire spoken words. By comparing parents' and infants' use of infant signs in the U.S. and Taiwan, we investigate cultural differences in noun/object versus verb/action bias before children's first language. Parents reported their own and their children's use of first infant signs retrospectively. Results show that cultural differences in parents' and children's infant sign use were consistent with research on early words, reflecting cultural differences in communication functions (referential versus regulatory) and child-rearing goals (independent versus interdependent). The current study provides evidence that the intergenerational transmission of culture through symbols begins prior to oral language. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Words Recognized as Units: Systematic Signs.

    Science.gov (United States)

    Carlin, John

    1997-01-01

    This historical article proposes that students with deafness in the early grades should be taught easy and familiar words by appropriate sign-language gestures on the fingers and by writing, and that the simple rules of grammar should be explained in the signs in the order of the words. (CR)

  10. Conséquences cognitives des transferts en langue des signes

    OpenAIRE

    Courtin, Cyril; Sallandre, Marie-Anne

    2015-01-01

    The aim of this study is to bring new elements to the debate on the relationship between language and thought through the study of a sign language. This paper combines in a pioneering way some key concepts of cognitive psychology and linguistics of French Sign Language. We tried to determine how personal transfers (also called role taking) influence cognitive development in two specific areas: theories of mind and cognitive flexibility. In order to do this, an experimental ...

  11. L’unité intonative dans les textes oralisés // Intonation unit in read speech

    Directory of Open Access Journals (Sweden)

    Lea Tylečková

    2015-12-01

    Full Text Available Prosodic phrasing, i.e. the division of speech into intonation units, is a phenomenon central to language comprehension. Incorrect prosodic boundary marking may lead to serious misunderstandings and ambiguous interpretations of utterances. The present paper investigates the prosodic competencies of Czech students of French in the domain of prosodic phrasing in French read speech. Two texts of different lengths are examined through a perceptual method to observe how Czech speakers of French (B1–B2 level of the CEFR) divide read speech into prosodic units compared to French native speakers.

  12. History of the College of the Holy Cross American Sign Language Program and Its Collaborative Partnerships with the Worcester Deaf Community

    Science.gov (United States)

    Fisher, Jami N.

    2014-01-01

    Most postsecondary American Sign Language programs have an inherent connection to their local Deaf communities and rely on the community's events to provide authentic linguistic and cultural experiences for their students. While this type of activity benefits students, there is often little effort toward meaningful engagement or attention to…

  13. A Low-Cost Open Source 3D-Printable Dexterous Anthropomorphic Robotic Hand with a Parallel Spherical Joint Wrist for Sign Languages Reproduction

    Directory of Open Access Journals (Sweden)

    Andrea Bulgarelli

    2016-06-01

    Full Text Available We present a novel open-source 3D-printable dexterous anthropomorphic robotic hand specifically designed to reproduce Sign Languages’ hand poses for deaf and deaf-blind users. We improved the InMoov hand, enhancing dexterity by adding abduction/adduction degrees of freedom to three fingers (thumb, index and middle fingers) and a three-degrees-of-freedom parallel spherical joint wrist. A systematic kinematic analysis is provided. The proposed robotic hand is validated in the framework of the PARLOMA project. PARLOMA aims at developing a telecommunication system for deaf-blind people, enabling remote transmission of signs from tactile Sign Languages. Both hardware and software are provided online to promote further improvements from the community.
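
    A three-degrees-of-freedom spherical joint composes three elementary rotations about a common center. The sketch below uses a generic Z-Y-X composition with invented angle names; it illustrates the orientation computation for such a wrist, not the paper's specific kinematic model:

```python
import numpy as np

def rot_x(t):
    """Rotation about the x-axis by angle t (radians)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(t):
    """Rotation about the y-axis by angle t (radians)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(t):
    """Rotation about the z-axis by angle t (radians)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def wrist_orientation(yaw, pitch, roll):
    """Hand-frame orientation for a 3-DOF spherical wrist (Z-Y-X composition)."""
    return rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)
```

    Any output of `wrist_orientation` is a proper rotation matrix (orthonormal, determinant +1), so the three servo angles fully determine the hand's orientation about the wrist center.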

  14. Data from Russian Help to Determine in Which Languages the Possible Word Constraint Applies.

    Science.gov (United States)

    Alexeeva, Svetlana; Frolova, Anastasia; Slioussar, Natalia

    2017-06-01

    The Possible Word Constraint, or PWC, is a speech segmentation principle that prohibits postulating word boundaries if a remaining segment contains only consonants. The PWC was initially formulated for English, where all words contain a vowel, and was claimed to hold universally after being confirmed for various other languages. However, it is crucial to look at languages that allow words without vowels. Two such languages have been tested: data from Slovak were compatible with the PWC, while data from Tarifiyt Berber did not support it. We hypothesize that fixed word stress could have influenced the results in Slovak, and report two word-spotting experiments on Russian, which has similar one-consonant words but flexible word stress. The results contradict the PWC, so we suggest that it does not operate in languages where words without vowels are possible, while the results from Slovak might be explained by its prosodic properties.
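
    The constraint itself is easy to state procedurally. In this sketch, the vowel inventory and the examples are illustrative (drawn from the classic English 'fapple'-style word-spotting paradigm rather than the Russian stimuli): a candidate parse is rejected whenever a leftover residue contains no vowel.

```python
VOWELS = set("aeiou")   # toy inventory; real studies use the language's phonology

def pwc_viable(carrier, start, end, vowels=VOWELS):
    """Possible Word Constraint check: treating carrier[start:end] as a word
    is rejected if any leftover residue consists of consonants only."""
    residues = (carrier[:start], carrier[end:])
    return all(any(ch in vowels for ch in r) for r in residues if r)

# Spotting 'apple': the 'f' residue has no vowel, but 'vuff' does.
hard = pwc_viable("fapple", 1, 6)      # rejected under the PWC
easy = pwc_viable("vuffapple", 4, 9)   # allowed under the PWC
```

    Russian one-consonant prepositions such as 'k' or 'v' are real words that this check would wrongly rule out as residues, which is the kind of mismatch the study exploits.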

  15. Basic Color Terms in Estonian Sign Language

    Science.gov (United States)

    Hollman, Liivi; Sutrop, Urmas

    2011-01-01

    The article is written in the tradition of Brent Berlin and Paul Kay's theory of basic color terms. According to this theory there is a universal inventory of eleven basic color categories from which the basic color terms of any given language are always drawn. The number of basic color terms varies from 2 to 11 and in a language having a fully…

  16. Towards the Development of a Mexican Speech-to-Sign-Language Translator for the Deaf Community

    Directory of Open Access Journals (Sweden)

    Santiago-Omar Caballero-Morales

    2012-03-01

    Full Text Available A significant part of the Mexican population is deaf. This disability restricts their social interaction with people who do not have it, and vice versa. In this paper we present our advances towards the development of a Mexican Speech-to-Sign-Language translator to assist people without this disability in interacting with deaf people. The proposed design methodology considers limited resources for (1) the development of the Mexican Automatic Speech Recognition (ASR) system, which is the main module in the translator, and (2) the Mexican Sign Language (MSL) vocabulary available to represent the recognized sentences. Speech-to-Sign-Language translation was accomplished with an accuracy level over 97% for test speakers different from those selected for ASR training.
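
    After ASR decoding, a limited sign vocabulary forces a fallback strategy for out-of-vocabulary words. The gloss-lookup sketch below illustrates that pipeline stage; the lexicon entries and the fingerspelling convention are invented for illustration and are not the project's actual resources.

```python
# Toy Spanish-word -> sign-gloss lexicon (invented entries).
MSL_LEXICON = {"hola": "HELLO", "gracias": "THANK-YOU", "casa": "HOUSE"}

def to_sign_glosses(asr_words, lexicon=MSL_LEXICON):
    """Map each recognized word to a sign gloss; words missing from the
    sign vocabulary fall back to letter-by-letter fingerspelling."""
    glosses = []
    for w in asr_words:
        if w in lexicon:
            glosses.append(lexicon[w])
        else:
            glosses.append("FS:" + "-".join(w.upper()))  # fingerspelling fallback
    return glosses
```

    For example, a decoded utterance containing a proper name not in the lexicon is rendered as a fingerspelled gloss rather than dropped, keeping the translation output complete.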

  17. Prosody and alignment: a sequential perspective

    Science.gov (United States)

    Szczepek Reed, Beatrice

    2010-12-01

    In their analysis of a corpus of classroom interactions in an inner city high school, Roth and Tobin describe how teachers and students accomplish interactional alignment by prosodically matching each other's turns. Prosodic matching and other specific prosodic patterns are interpreted as signs of, and contributions to, successful interactional outcomes and positive emotions. A lack of prosodic matching, and other specific prosodic patterns, is interpreted as a feature of unsuccessful interactions and negative emotions. This forum focuses on the article's analysis of the relation between interpersonal alignment, emotion and prosody. It argues that prosodic matching, and other prosodic linking practices, play a primarily sequential role, i.e. one that displays the way in which participants place and design their turns in relation to other participants' turns. Prosodic matching, rather than being a conversational action in itself, is argued to be an interactional practice (Schegloff 1997) which is not always employed for the accomplishment of `positive', or aligning, actions.

  18. The Cognitive Neuroscience of Sign Language: Engaging Undergraduate Students' Critical Thinking Skills Using the Primary Literature.

    Science.gov (United States)

    Stevens, Courtney

    2015-01-01

    This article presents a modular activity on the neurobiology of sign language that engages undergraduate students in reading and analyzing the primary functional magnetic resonance imaging (fMRI) literature. Drawing on a seed empirical article and subsequently published critique and rebuttal, students are introduced to a scientific debate concerning the functional significance of right-hemisphere recruitment observed in some fMRI studies of sign language processing. The activity requires minimal background knowledge and is not designed to provide students with a specific conclusion regarding the debate. Instead, the activity and set of articles allow students to consider key issues in experimental design and analysis of the primary literature, including critical thinking regarding the cognitive subtractions used in blocked-design fMRI studies, as well as possible confounds in comparing results across different experimental tasks. By presenting articles representing different perspectives, each cogently argued by leading scientists, the readings and activity also model the type of debate and dialogue critical to science, but often invisible to undergraduate science students. Student self-report data indicate that undergraduates find the readings interesting and that the activity enhances their ability to read and interpret primary fMRI articles, including evaluating research design and considering alternate explanations of study results. As a stand-alone activity completed primarily in one 60-minute class block, the activity can be easily incorporated into existing courses, providing students with an introduction both to the analysis of empirical fMRI articles and to the role of debate and critique in the field of neuroscience.

  19. Applying prosodic speech features in mental health care: An exploratory study in a life-review intervention for depression

    NARCIS (Netherlands)

    Lamers, S.M.A.; Truong, Khiet Phuong; Steunenberg, B.; Steunenberg, B.; de Jong, Franciska M.G.; Westerhof, Gerben Johan

    2014-01-01

    The present study aims to investigate the application of prosodic speech features in a psychological intervention based on life-review. Several studies have shown that speech features can be used as indicators of depression severity, but these studies are mainly based on controlled speech recording

  20. Recognition of American Sign Language (ASL) Classifiers in a Planetarium Using a Head-Mounted Display

    Science.gov (United States)

    Hintz, Eric G.; Jones, Michael; Lawler, Jeannette; Bench, Nathan

    2015-01-01

    A traditional accommodation for the deaf or hard-of-hearing in a planetarium show is some type of captioning system or a signer on the floor. Both of these have significant drawbacks given the nature of a planetarium show. Young audience members who are deaf likely don't have the reading skills needed to make a captioning system effective. A signer on the floor requires light, which can then splash onto the dome. We have examined the potential of using a Head-Mounted Display (HMD) to provide an American Sign Language (ASL) translation. Our preliminary test used a canned planetarium show with a pre-recorded sound track. Since many astronomical objects don't have official ASL signs, the signer had to use classifiers to describe the different objects. Since these are not official signs, the classifiers provided a way to test whether students were picking up the information using the HMD. We will present results that demonstrate that the use of HMDs is at least as effective as projecting a signer on the dome. This also showed that the HMD could provide the necessary accommodation for students for whom captioning was ineffective. We will also discuss the current effort to provide a live signer without the light splash effect and our early results on teaching effectiveness with HMDs. This work is partially supported by funding from the National Science Foundation grant IIS-1124548 and the Sorenson Foundation.