WorldWideScience

Sample records for american sign language

  1. American Sign Language

    Science.gov (United States)

    ... combined with facial expressions and postures of the body. It is the primary language of many North Americans who are deaf and ... their eyebrows, widening their eyes, and tilting their bodies forward. Just as with other languages, specific ways of expressing ideas in ASL vary ...

  2. Phonological Awareness for American Sign Language

    Science.gov (United States)

    Corina, David P.; Hafer, Sarah; Welch, Kearnan

    2014-01-01

    This paper examines the concept of phonological awareness (PA) as it relates to the processing of American Sign Language (ASL). We present data from a recently developed test of PA for ASL and examine whether sign language experience impacts the use of metalinguistic routines necessary for completion of our task. Our data show that deaf signers…

  3. American Sign Language and Pidgin Sign English: What's the Difference?

    Science.gov (United States)

    Reilly, Judy; McIntire, Marina L.

    1980-01-01

    The differences between Pidgin Sign English and American Sign Language in simultaneity, or the visible presence of two or more linguistic units (manual or nonmanual) co-occurring, are demonstrated. Differences are exemplified in handshape-classifier pronouns, directional verbs, co-occurring manual signs, and nonmanual behavior. (PMJ)

  4. Syntactic priming in American Sign Language.

    Science.gov (United States)

    Hall, Matthew L; Ferreira, Victor S; Mayberry, Rachel I

    2015-01-01

    Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.

  5. Adaptation of a Vocabulary Test from British Sign Language to American Sign Language

    Science.gov (United States)

    Mann, Wolfgang; Roy, Penny; Morgan, Gary

    2016-01-01

    This study describes the adaptation process of a vocabulary knowledge test for British Sign Language (BSL) into American Sign Language (ASL) and presents results from the first round of pilot testing with 20 deaf native ASL signers. The web-based test assesses the strength of deaf children's vocabulary knowledge by means of different mappings of…

  6. New Perspectives on the History of American Sign Language

    Science.gov (United States)

    Shaw, Emily; Delaporte, Yves

    2011-01-01

    Examinations of the etymology of American Sign Language have typically involved superficial analyses of signs as they exist over a short period of time. While it is widely known that ASL is related to French Sign Language, there has yet to be a comprehensive study of this historic relationship between their lexicons. This article presents…

  7. Adaptation of a Vocabulary Test from British Sign Language to American Sign Language

    OpenAIRE

    Mann, W.; Roy, P.; Morgan, G.

    2016-01-01

    This study describes the adaptation process of a vocabulary knowledge test for British Sign Language (BSL) into American Sign Language (ASL) and presents results from the first round of pilot testing with twenty deaf native ASL signers. The web-based test assesses the strength of deaf children’s vocabulary knowledge by means of different mappings of phonological form and meaning of signs. The adaptation from BSL to ASL involved nine stages, which included forming a panel of deaf/hearing exper...

  8. Adapting the Assessing British Sign Language Development: Receptive Skills Test into American sign language.

    Science.gov (United States)

    Enns, Charlotte J; Herman, Rosalind C

    2011-01-01

    Signed languages continue to be a key element of deaf education programs that incorporate a bilingual approach to teaching and learning. In order to monitor the success of bilingual deaf education programs, and in particular to monitor the progress of children acquiring signed language, it is essential to develop an assessment tool of signed language skills. Although researchers have developed some checklists and experimental tests related to American Sign Language (ASL) assessment, at this time a standardized measure of ASL does not exist. There have been tests developed in other signed languages, for example, British Sign Language, that can serve as models in this area. The purpose of this study was to adapt the Assessing British Sign Language Development: Receptive Skills Test for use in ASL in order to begin the process of developing a standardized measure of ASL skills. The results suggest that collaboration between researchers in different signed languages can provide a valuable contribution toward filling the gap in the area of signed language assessment.

  9. Statistical Sign Language Machine Translation: from English written text to American Sign Language Gloss

    CERN Document Server

    Othman, Achraf

    2011-01-01

    This work aims to design a statistical machine translation system from English text to American Sign Language (ASL). The system is based on the Moses tool with some modifications, and the results are synthesized through a 3D avatar for interpretation. First, we translate the input text to gloss, a written form of ASL. Second, we pass the output to the WebSign Plug-in to play the sign. The contributions of this work are the use of a new language pair, English/ASL, and an improvement to statistical machine translation based on string matching using the Jaro distance.
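
    The string matching mentioned above relies on the Jaro distance. As a rough, hedged illustration (not the authors' implementation), here is a minimal sketch of the Jaro similarity computation that such matching could build on:

    ```python
    def jaro_similarity(s1: str, s2: str) -> float:
        """Jaro similarity in [0, 1]; 1.0 means identical strings."""
        if s1 == s2:
            return 1.0
        len1, len2 = len(s1), len(s2)
        if len1 == 0 or len2 == 0:
            return 0.0
        window = max(len1, len2) // 2 - 1          # how far apart matching characters may be
        s1_used = [False] * len1
        s2_used = [False] * len2
        matches = 0
        for i, c in enumerate(s1):                 # count characters matching within the window
            for j in range(max(0, i - window), min(len2, i + window + 1)):
                if not s2_used[j] and s2[j] == c:
                    s1_used[i] = s2_used[j] = True
                    matches += 1
                    break
        if matches == 0:
            return 0.0
        transpositions, k = 0, 0
        for i in range(len1):                      # count matched characters that are out of order
            if s1_used[i]:
                while not s2_used[k]:
                    k += 1
                if s1[i] != s2[k]:
                    transpositions += 1
                k += 1
        t = transpositions // 2
        return (matches / len1 + matches / len2 + (matches - t) / matches) / 3.0

    # Example: compare two strings (classic Jaro example)
    print(round(jaro_similarity("MARTHA", "MARHTA"), 3))   # 0.944
    ```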

  10. American Sign Language Comprehension Test: A Tool for Sign Language Researchers

    Science.gov (United States)

    Hauser, Peter C.; Paludneviciene, Raylene; Riddle, Wanda; Kurz, Kim B.; Emmorey, Karen; Contreras, Jessica

    2016-01-01

    The American Sign Language Comprehension Test (ASL-CT) is a 30-item multiple-choice test that measures ASL receptive skills and is administered through a website. This article describes the development and psychometric properties of the test based on a sample of 80 college students including deaf native signers, hearing native signers, deaf…

  11. The Nature of Object Marking in American Sign Language

    Science.gov (United States)

    Gokgoz, Kadir

    2013-01-01

    In this dissertation, I examine the nature of object marking in American Sign Language (ASL). I investigate object marking by means of directionality (the movement of the verb towards a certain location in signing space) and by means of handling classifiers (certain handshapes accompanying the verb). I propose that object marking in ASL is…

  12. ASL-LEX: A lexical database of American Sign Language.

    Science.gov (United States)

    Caselli, Naomi K; Sehyr, Zed Sevcikova; Cohen-Goldberg, Ariel M; Emmorey, Karen

    2016-05-18

    ASL-LEX is a lexical database that catalogues information about nearly 1,000 signs in American Sign Language (ASL). It includes the following information: subjective frequency ratings from 25-31 deaf signers, iconicity ratings from 21-37 hearing non-signers, videoclip duration, sign length (onset and offset), grammatical class, and whether the sign is initialized, a fingerspelled loan sign, or a compound. Information about English translations is available for a subset of signs (e.g., alternate translations, translation consistency). In addition, phonological properties (sign type, selected fingers, flexion, major and minor location, and movement) were coded and used to generate sub-lexical frequency and neighborhood density estimates. ASL-LEX is intended for use by researchers, educators, and students who are interested in the properties of the ASL lexicon. An interactive website where the database can be browsed and downloaded is available at http://asl-lex.org .
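
    The sub-lexical frequency and neighborhood density estimates mentioned above are derived from the coded phonological properties. As a rough, hedged illustration of one way neighborhood density might be estimated from such codings, the sketch below counts, for each sign, how many other signs differ in at most one coded parameter; the sign entries and parameter values are hypothetical and do not reproduce ASL-LEX's actual coding scheme.

    ```python
    from itertools import combinations

    # Hypothetical codings: (sign type, selected fingers, flexion, major location, movement)
    lexicon = {
        "MOTHER": ("one-handed", "all",   "open", "head",  "contact"),
        "FATHER": ("one-handed", "all",   "open", "head",  "contact"),
        "FINE":   ("one-handed", "all",   "open", "torso", "contact"),
        "BLACK":  ("one-handed", "index", "bent", "head",  "path"),
    }

    def neighborhood_density(lexicon, max_mismatches=1):
        """Neighbors = signs whose coded parameters differ in at most `max_mismatches` slots."""
        density = {sign: 0 for sign in lexicon}
        for a, b in combinations(lexicon, 2):
            mismatches = sum(x != y for x, y in zip(lexicon[a], lexicon[b]))
            if mismatches <= max_mismatches:
                density[a] += 1
                density[b] += 1
        return density

    print(neighborhood_density(lexicon))
    ```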

  13. American Sign Language Verb Categories in Constructed Action

    Science.gov (United States)

    Rogers, K. Larry

    2012-01-01

    The American Sign Language construction commonly known as "role-shift" (referred to afterward as Constructed Action) superficially resembles mimic forms, however unlike mime, Constructed Action is a type of depicting construction in ASL discourse (Roy 1989). The signer may use eye gaze, head shift, facial expression, stylistic variation,…

  14. Ideological Barriers to American Sign Language: Unpacking Linguistic Resistance

    Science.gov (United States)

    Reagan, Timothy

    2011-01-01

    This article addresses the debate about the status of American Sign Language (ASL) as an example of ideological beliefs that impact linguistic judgments and policies. It also discusses the major challenges to the status of ASL with respect to formal legislative recognition, its utility as a medium of instruction, and its status as a legitimate…

  15. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language

    Science.gov (United States)

    Williams, Joshua T.; Newman, Sharlene D.

    2017-01-01

    A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, there have been relatively few studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…

  16. Social construction of American sign language--English interpreters.

    Science.gov (United States)

    McDermid, Campbell

    2009-01-01

    Instructors in 5 American Sign Language--English Interpreter Programs and 4 Deaf Studies Programs in Canada were interviewed and asked to discuss their experiences as educators. Within a qualitative research paradigm, their comments were grouped into a number of categories tied to the social construction of American Sign Language--English interpreters, such as learners' age and education and the characteristics of good citizens within the Deaf community. According to the participants, younger students were adept at language acquisition, whereas older learners more readily understood the purpose of lessons. Children of deaf adults were seen as more culturally aware. The participants' beliefs echoed the theories of P. Freire (1970) that educators should consider the reality of each student and their praxis, and that they are responsible for facilitating student self-awareness. Important characteristics in the social construction of students included independence, an appropriate attitude, an understanding of Deaf culture, ethical behavior, community involvement, and a willingness to pursue lifelong learning.

  17. The Multimedia Dictionary of American Sign Language: Learning Lessons About Language, Technology, and Business.

    Science.gov (United States)

    Wilcox, Sherman

    2003-01-01

    Reports on the Multimedia Dictionary of American Sign Language, which was conceived in the late 1980s as a melding of the pioneering work in American Sign Language lexicography that had been carried out decades earlier and the newly emerging computer technologies that were integrating use of graphical user-interface designs, rapidly…

  18. Neural Language Processing in Adolescent First-Language Learners: Longitudinal Case Studies in American Sign Language.

    Science.gov (United States)

    Ferjan Ramirez, Naja; Leonard, Matthew K; Davenport, Tristan S; Torres, Christina; Halgren, Eric; Mayberry, Rachel I

    2016-03-01

    One key question in neurolinguistics is the extent to which the neural processing system for language requires linguistic experience during early life to develop fully. We conducted a longitudinal anatomically constrained magnetoencephalography (aMEG) analysis of lexico-semantic processing in 2 deaf adolescents who had no sustained language input until 14 years of age, when they became fully immersed in American Sign Language. After 2 to 3 years of language, the adolescents' neural responses to signed words were highly atypical, localizing mainly to right dorsal frontoparietal regions and often responding more strongly to semantically primed words (Ferjan Ramirez N, Leonard MK, Torres C, Hatrak M, Halgren E, Mayberry RI. 2014. Neural language processing in adolescent first-language learners. Cereb Cortex. 24 (10): 2772-2783). Here, we show that after an additional 15 months of language experience, the adolescents' neural responses remained atypical in terms of polarity. While their responses to less familiar signed words still showed atypical localization patterns, the localization of responses to highly familiar signed words became more concentrated in the left perisylvian language network. Our findings suggest that the timing of language experience affects the organization of neural language processing; however, even in adolescence, language representation in the human brain continues to evolve with experience.

  19. Adapting the "Assessing British Sign Language Development: Receptive Skills Test" into American Sign Language

    Science.gov (United States)

    Enns, Charlotte J.; Herman, Rosalind C.

    2011-01-01

    Signed languages continue to be a key element of deaf education programs that incorporate a bilingual approach to teaching and learning. In order to monitor the success of bilingual deaf education programs, and in particular to monitor the progress of children acquiring signed language, it is essential to develop an assessment tool of signed…

  20. Controversy within Sign Language.

    Science.gov (United States)

    Vernon, McCay

    1987-01-01

    A review of problems with using such manual communication systems as cued speech, fingerspelling, Signed or Manual English, American Sign Language, and Pidgin Sign provides a rationale for using a combination of American Sign Language and Pidgin Sign and a few markers from Signed English for a Total Communication system. (CB)

  1. Languages Are More than Words: Spanish and American Sign Language in Early Childhood Settings

    Science.gov (United States)

    Sherman, Judy; Torres-Crespo, Marisel N.

    2015-01-01

    Capitalizing on preschoolers' inherent enthusiasm and capacity for learning, the authors developed and implemented a dual-language program to enable young children to experience diversity and multiculturalism by learning two new languages: Spanish and American Sign Language. Details of the curriculum, findings, and strategies are shared.

  2. Effects of Iconicity and Semantic Relatedness on Lexical Access in American Sign Language

    Science.gov (United States)

    Bosworth, Rain G.; Emmorey, Karen

    2010-01-01

    Iconicity is a property that pervades the lexicon of many sign languages, including American Sign Language (ASL). Iconic signs exhibit a motivated, nonarbitrary mapping between the form of the sign and its meaning. We investigated whether iconicity enhances semantic priming effects for ASL and whether iconic signs are recognized more quickly than…

  3. DIFFERENCES BETWEEN AMERICAN SIGN LANGUAGE (ASL) AND BRITISH SIGN LANGUAGE (BSL)

    Directory of Open Access Journals (Sweden)

    Zora JACHOVA

    2008-06-01

    In the communication of deaf people among themselves and with hearing people there are three basic aspects of interaction: gesture, finger signs, and writing. Gesture is a conventionally agreed manner of communication with the help of the hands, accompanied by facial and body mimicry. Gestures and movements pre-existed speech; their purpose was to mark something and, later, to emphasize spoken expression. Stokoe was the first linguist to realize that signs are not unanalyzable wholes. He analyzed signs into smaller parts that he called "cheremes", which many linguists today call phonemes, and he created three main phoneme categories: hand shape, location, and movement. Sign languages, like spoken languages, have a background reaching into the distant past. They developed in parallel with spoken language and underwent many historical changes. Therefore, today they are not a replacement for spoken language but are languages in their own right, in the real sense of the word. Although the structure of the English language used in the USA and in Great Britain is the same, their sign languages, ASL and BSL, are different.

  4. Frequency of Occurrence and Information Entropy of American Sign Language

    CERN Document Server

    Chong, Andrew; Poor, H Vincent

    2009-01-01

    American Sign Language (ASL) uses a series of hand based gestures as a replacement for words to allow the deaf to communicate. Previous work has shown that although it takes longer to make signs than to say the equivalent words, on average sentences can be completed in about the same time. This leaves unresolved, however, precisely why that should be the case. This paper reports a determination of the empirical entropy and redundancy in the set of handshapes of ASL. In this context, the entropy refers to the average information content in a unit of data. It is found that the handshapes, as fundamental units of ASL, are less redundant than phonemes, the equivalent fundamental units of spoken English, and that their entropy is much closer to the maximum possible information content. This explains why the slower signs can produce sentences in the same time as speaking; the low redundancy compensates for the slow rate of sign production. In addition to this precise quantification, this work is also novel in its a...
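
    As a rough, hedged illustration of the quantities discussed above, the sketch below computes the empirical Shannon entropy of a handshape inventory from token counts and the corresponding redundancy relative to the maximum possible entropy; the handshape tokens are hypothetical stand-ins, not data from the paper.

    ```python
    import math
    from collections import Counter

    def entropy_and_redundancy(tokens):
        """Empirical entropy (bits/symbol) and redundancy of a stream of symbols."""
        counts = Counter(tokens)
        total = sum(counts.values())
        probs = [c / total for c in counts.values()]
        entropy = -sum(p * math.log2(p) for p in probs)
        max_entropy = math.log2(len(counts))      # uniform distribution over the observed symbols
        redundancy = 1.0 - entropy / max_entropy if max_entropy > 0 else 0.0
        return entropy, redundancy

    # Hypothetical handshape tokens from a transcribed ASL sample
    handshapes = ["B", "5", "1", "A", "B", "5", "C", "1", "B", "S"]
    print(entropy_and_redundancy(handshapes))
    ```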

  5. Second Language Acquisition across Modalities: Production Variability in Adult L2 Learners of American Sign Language

    Science.gov (United States)

    Hilger, Allison I.; Loucks, Torrey M. J.; Quinto-Pozos, David; Dye, Matthew W. G.

    2015-01-01

    A study was conducted to examine production variability in American Sign Language (ASL) in order to gain insight into the development of motor control in a language produced in another modality. Production variability was characterized through the spatiotemporal index (STI), which represents production stability in whole utterances and is a…
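
    The spatiotemporal index (STI) referenced above is commonly computed by amplitude- and time-normalizing repeated productions of the same utterance and then summing the across-repetition standard deviations at fixed normalized time points. Below is a minimal sketch under that assumption (not the authors' implementation), for one-dimensional movement trajectories.

    ```python
    import numpy as np

    def spatiotemporal_index(trials, n_points=50):
        """STI over repeated productions; each trial is a 1-D trajectory of arbitrary length."""
        normalized = []
        for trial in trials:
            trial = np.asarray(trial, dtype=float)
            z = (trial - trial.mean()) / trial.std(ddof=1)     # amplitude normalization
            t_old = np.linspace(0.0, 1.0, len(z))
            t_new = np.linspace(0.0, 1.0, n_points)
            normalized.append(np.interp(t_new, t_old, z))      # time normalization
        # Sum of across-trial standard deviations at each normalized time point
        return float(np.stack(normalized).std(axis=0, ddof=1).sum())

    # Hypothetical usage: noisy repetitions of the same movement, varying in duration
    rng = np.random.default_rng(0)
    trials = []
    for _ in range(10):
        n = 80 + rng.integers(-10, 11)
        trials.append(np.sin(np.linspace(0, np.pi, n)) + rng.normal(0, 0.05, n))
    print(round(spatiotemporal_index(trials), 2))
    ```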

  6. American Sign Language Comprehension Test: A Tool for Sign Language Researchers.

    Science.gov (United States)

    Hauser, Peter C; Paludneviciene, Raylene; Riddle, Wanda; Kurz, Kim B; Emmorey, Karen; Contreras, Jessica

    2016-01-01

    The American Sign Language Comprehension Test (ASL-CT) is a 30-item multiple-choice test that measures ASL receptive skills and is administered through a website. This article describes the development and psychometric properties of the test based on a sample of 80 college students including deaf native signers, hearing native signers, deaf non-native signers, and hearing ASL students. The results revealed that the ASL-CT has good internal reliability (α = 0.834). Discriminant validity was established by demonstrating that deaf native signers performed significantly better than deaf non-native signers and hearing native signers. Concurrent validity was established by demonstrating that test results positively correlated with another measure of ASL ability (r = .715) and that hearing ASL students' performance positively correlated with the level of ASL courses they were taking (r = .726). Researchers can use the ASL-CT to characterize an individual's ASL comprehension skills, to establish a minimal skill level as an inclusion criterion for a study, to group study participants by ASL skill (e.g., proficient vs. nonproficient), or to provide a measure of ASL skill as a dependent variable.
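
    As a rough illustration of the internal-consistency statistic reported above (Cronbach's α), here is a minimal sketch with hypothetical 0/1 item responses; this is not the authors' analysis code.

    ```python
    import numpy as np

    def cronbach_alpha(item_scores: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        item_scores = np.asarray(item_scores, dtype=float)
        k = item_scores.shape[1]
        item_variances = item_scores.var(axis=0, ddof=1)
        total_variance = item_scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

    # Hypothetical responses from 5 test-takers on 4 multiple-choice items (1 = correct)
    scores = np.array([[1, 1, 0, 1],
                       [1, 0, 0, 1],
                       [0, 0, 0, 0],
                       [1, 1, 1, 1],
                       [1, 0, 1, 1]])
    print(round(cronbach_alpha(scores), 3))
    ```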

  7. Assessing Health Literacy in Deaf American Sign Language Users.

    Science.gov (United States)

    McKee, Michael M; Paasche-Orlow, Michael K; Winters, Paul C; Fiscella, Kevin; Zazove, Philip; Sen, Ananda; Pearson, Thomas

    2015-01-01

    Communication and language barriers isolate Deaf American Sign Language (ASL) users from mass media, health care messages, and health care communication, which, when coupled with social marginalization, places them at a high risk for inadequate health literacy. Our objectives were to translate, adapt, and develop an accessible health literacy instrument in ASL and to assess the prevalence and correlates of inadequate health literacy among Deaf ASL users and hearing English speakers using a cross-sectional design. A total of 405 participants (166 Deaf and 239 hearing) were enrolled in the study. The Newest Vital Sign was adapted, translated, and developed into an ASL version (ASL-NVS). We found that 48% of Deaf participants had inadequate health literacy, and Deaf individuals were 6.9 times more likely than hearing participants to have inadequate health literacy. The new ASL-NVS, available on a self-administered computer platform, demonstrated good correlation with reading literacy. The prevalence of Deaf ASL users with inadequate health literacy is substantial, warranting further interventions and research.

  8. Deaf Students' Receptive and Expressive American Sign Language Skills: Comparisons and Relations

    Science.gov (United States)

    Beal-Alvarez, Jennifer S.

    2014-01-01

    This article presents receptive and expressive American Sign Language skills of 85 students, 6 through 22 years of age at a residential school for the deaf using the American Sign Language Receptive Skills Test and the Ozcaliskan Motion Stimuli. Results are presented by ages and indicate that students' receptive skills increased with age and…

  9. Event segmentation in a visual language: neural bases of processing American Sign Language predicates.

    Science.gov (United States)

    Malaia, Evie; Ranaweera, Ruwan; Wilbur, Ronnie B; Talavage, Thomas M

    2012-02-15

    Motion capture studies show that American Sign Language (ASL) signers distinguish end-points in telic verb signs by means of marked hand articulator motion, which rapidly decelerates to a stop at the end of these signs, as compared to atelic signs (Malaia and Wilbur, in press). Non-signers also show sensitivity to velocity in deceleration cues for event segmentation in visual scenes (Zacks et al., 2010; Zacks et al., 2006), introducing the question of whether the neural regions used by ASL signers for sign language verb processing might be similar to those used by non-signers for event segmentation. The present study investigated the neural substrate of predicate perception and linguistic processing in ASL. Observed patterns of activation demonstrate that Deaf signers process telic verb signs as having higher phonological complexity as compared to atelic verb signs. These results, together with previous neuroimaging data on spoken and sign languages (Shetreet et al., 2010; Emmorey et al., 2009), illustrate a route for how a prominent perceptual-kinematic feature used for non-linguistic event segmentation might come to be processed as an abstract linguistic feature due to sign language exposure.

  10. Language between Bodies: A Cognitive Approach to Understanding Linguistic Politeness in American Sign Language

    Science.gov (United States)

    Roush, Daniel R.

    2011-01-01

    This article proposes an answer to the primary question of how the American Sign Language (ASL) community in the United States conceptualizes (im)politeness and its related notions. It begins with a review of evolving theoretical issues in research on (im)politeness and related methodological problems with studying (im)politeness in natural…

  11. Language Interdependence between American Sign Language and English: A Review of Empirical Studies

    Science.gov (United States)

    Rusher, Melissa Ausbrooks

    2012-01-01

    This study provides a contemporary definition of American Sign Language/English bilingual education (AEBE) and outlines an essential theoretical framework. Included is a history and evolution of the methodology. The author also summarizes the general findings of twenty-six (26) empirical studies conducted in the United States that directly or…

  12. Reproducing American Sign Language Sentences: Cognitive Scaffolding in Working Memory

    Directory of Open Access Journals (Sweden)

    Ted Supalla

    2014-08-01

    The American Sign Language Sentence Reproduction Test (ASL-SRT) requires the precise reproduction of a series of ASL sentences increasing in complexity and length. Error analyses of such tasks provide insight into working memory and scaffolding processes. Data were collected from three groups expected to differ in fluency: deaf children, deaf adults, and hearing adults, all users of ASL. Quantitative (correct/incorrect recall) and qualitative error analyses were performed. Percent correct on the reproduction task supports its sensitivity to fluency, as test performance clearly differed across the three groups studied. A linguistic analysis of errors further documented differing strategies and biases across groups. Subjects' recall projected the affordances and constraints of deep linguistic representations to differing degrees, with subjects resorting to alternate processing strategies in the absence of linguistic knowledge. A qualitative error analysis allows us to capture generalizations about the relationship between error pattern and the cognitive scaffolding that governs the sentence reproduction process. Highly fluent signers and less-fluent signers share common chokepoints on particular words in sentences. However, they diverge in heuristic strategy. Fluent signers, when they make an error, tend to preserve semantic details while altering morpho-syntactic domains. They produce syntactically correct sentences with meaning equivalent to the to-be-reproduced one, but these are not verbatim reproductions of the original sentence. In contrast, less-fluent signers tend to use a more linear strategy, preserving lexical status and word ordering while omitting local inflections, and occasionally resorting to visuo-motoric imitation. Thus, whereas fluent signers readily use top-down scaffolding in their working memory, less-fluent signers fail to do so. Implications for current models of working memory across spoken and signed modalities are…

  13. Health websites: accessibility and usability for American sign language users.

    Science.gov (United States)

    Kushalnagar, Poorna; Naturale, Joan; Paludneviciene, Raylene; Smith, Scott R; Werfel, Emily; Doolittle, Richard; Jacobs, Stephen; DeCaro, James

    2015-01-01

    To date, there have been efforts toward creating better health information access for Deaf American Sign Language (ASL) users. However, the usability of websites with access to health information in ASL has not been evaluated. Our article focuses on the usability of four health websites that include ASL videos. We seek to obtain ASL users' perspectives on the navigation of these ASL-accessible websites, finding the health information that they needed, and perceived ease of understanding ASL video content. ASL users (n = 32) were instructed to find specific information on four ASL-accessible websites, and answered questions related to (a) navigation to find the task, (b) website usability, and (c) ease of understanding ASL video content for each of the four websites. Participants also gave feedback on what they would like to see in an ASL health library website, including the benefit of added captioning and/or signer model to medical illustration of health videos. Participants who had lower health literacy had greater difficulty in finding information on ASL-accessible health websites. This article also describes the participants' preferences for an ideal ASL-accessible health website, and concludes with a discussion on the role of accessible websites in promoting health literacy in ASL users.

  14. Motivational and attitudinal orientations in learning American Sign Language.

    Science.gov (United States)

    Lang, H; Foster, S; Gustina, D; Mowl, G; Liu, Y

    1996-01-01

    Integrative motivation was found to correlate significantly with sign language proficiency of adult learners at a post-secondary program for deaf students. Instrumental motives, however, were perceived as less important. Higher achievement in ASL was also associated with a positive cultural attitude toward deaf people. Learning of ASL as a second language may be enhanced if instructors design strategies that build upon these cultural and integrative motives and provide rewarding experiences to adult learners.

  15. American Sign Language and Contemporary Deaf Studies in the United States.

    Science.gov (United States)

    Reagan, Timothy

    1986-01-01

    Major works on the history, structure, and teaching of American Sign Language (ASL) in the last quarter-century are reviewed, and studies of the culture of the deaf are outlined. Research on the linguistic nature of ASL is highlighted, and some attention is given to British Sign Language. (Author/MSE)

  16. Content Questions In American Sign Language: An RRG Analysis

    Science.gov (United States)

    2007-11-02


  17. How Deaf American Sign Language/English Bilingual Children Become Proficient Readers: An Emic Perspective

    Science.gov (United States)

    Mounty, Judith L.; Pucci, Concetta T.; Harmon, Kristen C.

    2014-01-01

    A primary tenet underlying American Sign Language/English bilingual education for deaf students is that early access to a visual language, developed in conjunction with language planning principles, provides a foundation for literacy in English. The goal of this study is to obtain an emic perspective on bilingual deaf readers transitioning from…

  18. COMPARATIVE ANALYSIS OF THE STRUCTURE OF THE AMERICAN AND MACEDONIAN SIGN LANGUAGE

    Directory of Open Access Journals (Sweden)

    Aleksandra KAROVSKA RISTOVSKA

    2014-09-01

    Aleksandra Karovska Ristovska, M.A. in special education and rehabilitation sciences, defended her doctoral thesis on 9 March 2014 at the Institute of Special Education and Rehabilitation, Faculty of Philosophy, University “Ss. Cyril and Methodius” in Skopje, before a commission composed of Prof. Zora Jachova, PhD; Prof. Jasmina Kovachevikj, PhD; Prof. Ljudmil Spasov, PhD; Prof. Goran Ajdinski, PhD; and Prof. Daniela Dimitrova Radojicikj, PhD. Macedonian Sign Language (MSL) is a natural language used by the Deaf community in the Republic of Macedonia. The doctoral thesis analyzes the characteristics of Macedonian Sign Language (its phonology, morphology, and syntax) and compares Macedonian Sign Language with American Sign Language. William Stokoe was the first to research American Sign Language, beginning in the 1960s, and he laid the foundation for linguistic research on sign languages. The analysis of signs in Macedonian Sign Language was made according to Stokoe’s parameters: location, hand shape, and movement. Lexicostatistics showed that MSL and ASL belong to different language families. Despite this, they share some iconic signs, whose presence can be attributed to lexical borrowing. Phonologically, in both ASL and MSL, changing one of Stokoe’s categories changes the meaning of the word. Non-manual signs, which serve as grammatical markers in sign languages, are identical in ASL and MSL. The production of compounds and of plural forms is identical in both sign languages, as is verb inflection. The research showed that the most common word order in both ASL and MSL is SVO (subject-verb-object), while SOV and OVS orders are seldom encountered. Questions and negative sentences are produced identically in ASL and MSL.

  19. Impacts of Visual Sonority and Handshape Markedness on Second Language Learning of American Sign Language

    Science.gov (United States)

    Williams, Joshua T.; Newman, Sharlene D.

    2016-01-01

    The roles of visual sonority and handshape markedness in sign language acquisition and production were investigated. In Experiment 1, learners were taught sign-nonobject correspondences that varied in sign movement sonority and handshape markedness. Results from a sign-picture matching task revealed that high sonority signs were more accurately…

  20. Deaf students' receptive and expressive american sign language skills: comparisons and relations.

    Science.gov (United States)

    Beal-Alvarez, Jennifer S

    2014-10-01

    This article presents receptive and expressive American Sign Language skills of 85 students, 6 through 22 years of age at a residential school for the deaf using the American Sign Language Receptive Skills Test and the Ozcaliskan Motion Stimuli. Results are presented by ages and indicate that students' receptive skills increased with age and were still developing across this age range. Students' expressive skills, specifically classifier production, increased with age but did not approach adult-like performance. On both measures, deaf children with deaf parents scored higher than their peers with hearing parents and many components of the measures significantly correlated. These results suggest that these two measures provide a well-rounded snapshot of individual students' American Sign Language skills.

  1. Issues in the modification of American sign language for instructional purposes.

    Science.gov (United States)

    Moores, D F

    1981-03-01

    The use of manual communication and sign languages for language education of autistic, deaf retarded, and mentally retarded individuals is receiving increasing attention by educators. Modifications of sign systems for this purpose emphasize simplicity, redundancy, and English word order. Effective utilization of manual communication for these populations requires a better understanding of the physical and linguistic bases of sign languages than now exists. Preliminary evidence from studies of oral-only, manual-only, and oral-manual modes of communication suggests that flexibility in utilizing all modes is the most effective teaching method. The present paper will consider the possible utilization of modifications of the American Sign Language for use in three general areas: instruction of deaf students in the classroom, communication between hearing parents and young deaf children, and communication with individuals with handicaps other than deafness.

  2. Longitudinal Receptive American Sign Language Skills across a Diverse Deaf Student Body

    Science.gov (United States)

    Beal-Alvarez, Jennifer S.

    2016-01-01

    This article presents results of a longitudinal study of receptive American Sign Language (ASL) skills for a large portion of the student body at a residential school for the deaf across four consecutive years. Scores were analyzed by age, gender, parental hearing status, years attending the residential school, and presence of a disability (i.e.,…

  3. American Sign Language and Deaf Culture Competency of Osteopathic Medical Students

    Science.gov (United States)

    Lapinsky, Jessica; Colonna, Caitlin; Sexton, Patricia; Richard, Mariah

    2015-01-01

    The study examined the effectiveness of a workshop on Deaf culture and basic medical American Sign Language for increasing osteopathic student physicians' confidence and knowledge when interacting with ASL-using patients. Students completed a pretest in which they provided basic demographic information, rated their confidence levels, took a video…

  4. American Sign Language-English Interpreting Program Faculty: Characteristics, Tenure Perceptions, and Productivity

    Science.gov (United States)

    Hale, Kimberly J.

    2012-01-01

    American Sign Language (ASL)-English interpreting education, which began as a community apprenticeship and vetting process, has within the last several decades moved into higher education. Most recently, the number of baccalaureate-granting ASL-English interpreting programs has continued to increase while the number of associate's degree…

  5. Impacts of Visual Sonority and Handshape Markedness on Second Language Learning of American Sign Language.

    Science.gov (United States)

    Williams, Joshua T; Newman, Sharlene D

    2016-04-01

    The roles of visual sonority and handshape markedness in sign language acquisition and production were investigated. In Experiment 1, learners were taught sign-nonobject correspondences that varied in sign movement sonority and handshape markedness. Results from a sign-picture matching task revealed that high sonority signs were more accurately matched, especially when the sign contained a marked handshape. In Experiment 2, learners produced these familiar signs in addition to novel signs, which differed based on sonority and markedness. Results from a key-release reaction time reproduction task showed that learners tended to produce high sonority signs much more quickly than low sonority signs, especially when the sign contained an unmarked handshape. This effect was only present in familiar signs. Sign production accuracy rates revealed that high sonority signs were more accurate than low sonority signs. Similarly, signs with unmarked handshapes were produced more accurately than those with marked handshapes. Together, results from Experiments 1 and 2 suggested that signs that contain high sonority movements are more easily processed, both perceptually and productively, and handshape markedness plays a differential role in perception and production.

  6. Validity of the Multidimensional Health Locus of Control Scales in American Sign Language

    OpenAIRE

    Athale, Ninad; Aldridge, Arianna; Malcarne, Vanessa L.; Nakaji, Melanie; Samady, Waheeda; Sadler, Georgia Robins

    2010-01-01

    Few instruments have been translated and validated for people who use American Sign Language (ASL) as their preferred language. This study examined the reliability and validity of a new ASL version of the widely-used Multidimensional Health Locus of Control (MHLC) scales. Deaf individuals (N = 311) were shown the ASL version via videotape, and their responses were recorded. Confirmatory factor analysis supported the four-factor structure of the MHLC. Scale reliabilities (Cronbach’s alphas) ra...

  7. How deaf American Sign Language/English bilingual children become proficient readers: an emic perspective.

    Science.gov (United States)

    Mounty, Judith L; Pucci, Concetta T; Harmon, Kristen C

    2014-07-01

    A primary tenet underlying American Sign Language/English bilingual education for deaf students is that early access to a visual language, developed in conjunction with language planning principles, provides a foundation for literacy in English. The goal of this study is to obtain an emic perspective on bilingual deaf readers transitioning from learning to read to reading to learn. Analysis of 12 interactive, semi-structured interviews identified informal and formal teaching and learning practices in ASL/English bilingual homes and classrooms. These practices value, reinforce, and support the bidirectional acquisition of both languages and provide a strong foundation for literacy.

  8. The Processing of Biologically Plausible and Implausible forms in American Sign Language: Evidence for Perceptual Tuning.

    Science.gov (United States)

    Almeida, Diogo; Poeppel, David; Corina, David

    The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied by a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.

  9. A Particle of Indefiniteness in American Sign Language

    Directory of Open Access Journals (Sweden)

    Carol Neidle

    2003-01-01

    We describe here the characteristics of a very frequently occurring ASL indefinite focus particle, which has not previously been recognized as such. We show that, despite its similarity to the question sign "WHAT", the particle is distinct from that sign in terms of articulation, function, and distribution. The particle serves to express "uncertainty" in various ways, which can be formalized semantically in terms of a domain-widening effect of the same sort as that proposed for English "any" by Kadmon & Landman (1993). Its function is to widen the domain of possibilities under consideration from the typical to include the non-typical as well, along a dimension appropriate in the context.

  10. The Effects of Electronic Communication on American Sign Language

    Science.gov (United States)

    Schneider, Erin; Kozak, L. Viola; Santiago, Roberto; Stephen, Anika

    2012-01-01

    Technological and language innovation often flow in concert with one another. Casual observation by researchers has shown that electronic communication memes, in the form of abbreviations, have found their way into spoken English. This study focuses on the current use of electronic modes of communication, such as smartphones and e-mail, and…

  11. Lexical access in American Sign Language: an ERP investigation of effects of semantics and phonology.

    Science.gov (United States)

    Gutierrez, Eva; Williams, Deborah; Grosvald, Michael; Corina, David

    2012-08-15

    That language forms (phonology) are arbitrarily related to their meanings (semantics) is often considered a basic property of human languages. Naturally occurring sign languages, however, often appear to conflate form and meaning. In this paper we examine whether this close coupling has processing consequences for lexical access. We examine the electrophysiological correlates of on-line sentence processing in an attempt to clarify the time-course of lexical access in American Sign Language. EEG was recorded while 17 native signers watched ASL sentences for comprehension. Participants were presented with sentences in which semantic expectancy and phonological form were systematically manipulated to create four types of violations. These four conditions of interest are contrasted to a baseline sentence with a preferred semantic ending. Two different effects were observed in early time windows. Evidence for an early effect of semantic pre-activation of plausible candidates (150-250 ms) was found, followed by a negativity associated with lexical selection (350-450 ms) for only phonologically related (-S, +P) and for only semantically related (+S, -P) signs. These findings provide evidence for a novel mapping of signal form and meaning that may be a unique signature of sign language. In the 450 to 600 ms window, all conditions showed an increased N400 with respect to the expected ending, suggesting greater difficulty in semantic integration with the established context. Overall, these findings provide important insights into the on-line processing of visual-manual language.
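
    As a rough, hedged illustration of the time-window measures described above (e.g., mean amplitude in the 450 to 600 ms window), the sketch below averages epoched EEG data within a latency window; the array names, shapes, and sampling details are hypothetical, not the authors' pipeline.

    ```python
    import numpy as np

    def mean_window_amplitude(epochs: np.ndarray, times: np.ndarray, window: tuple) -> np.ndarray:
        """
        Mean amplitude per channel within a latency window.
        epochs: (n_trials, n_channels, n_samples); times: sample latencies in ms.
        """
        lo, hi = window
        mask = (times >= lo) & (times <= hi)
        return epochs[:, :, mask].mean(axis=(0, 2))    # average over trials and samples

    # Hypothetical usage: simulated epochs sampled every 4 ms from -100 to 800 ms
    rng = np.random.default_rng(0)
    times = np.arange(-100, 800, 4)
    epochs = rng.normal(size=(40, 32, times.size))      # 40 trials, 32 channels
    n400_window = mean_window_amplitude(epochs, times, (450, 600))
    print(n400_window.shape)                            # (32,)
    ```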

  12. Response bias reveals enhanced attention to inferior visual field in signers of American Sign Language.

    Science.gov (United States)

    Dye, Matthew W G; Seymour, Jenessa L; Hauser, Peter C

    2016-04-01

    Deafness results in cross-modal plasticity, whereby visual functions are altered as a consequence of a lack of hearing. Here, we present a reanalysis of data originally reported by Dye et al. (PLoS One 4(5):e5640, 2009) with the aim of testing additional hypotheses concerning the spatial redistribution of visual attention due to deafness and the use of a visuogestural language (American Sign Language). By looking at the spatial distribution of errors made by deaf and hearing participants performing a visuospatial selective attention task, we sought to determine whether there was evidence for (1) a shift in the hemispheric lateralization of visual selective function as a result of deafness, and (2) a shift toward attending to the inferior visual field in users of a signed language. While no evidence was found for or against a shift in lateralization of visual selective attention as a result of deafness, a shift in the allocation of attention from the superior toward the inferior visual field was inferred in native signers of American Sign Language, possibly reflecting an adaptation to the perceptual demands imposed by a visuogestural language.

  13. Dissociating linguistic and non-linguistic gesture processing: electrophysiological evidence from American Sign Language.

    Science.gov (United States)

    Grosvald, Michael; Gutierrez, Eva; Hafer, Sarah; Corina, David

    2012-04-01

    A fundamental advance in our understanding of human language would come from a detailed account of how non-linguistic and linguistic manual actions are differentiated in real time by language users. To explore this issue, we targeted the N400, an ERP component known to be sensitive to semantic context. Deaf signers saw 120 American Sign Language sentences, each consisting of a "frame" (a sentence without the last word; e.g. BOY SLEEP IN HIS) followed by a "last item" belonging to one of four categories: a high-close-probability sign (a "semantically reasonable" completion to the sentence; e.g. BED), a low-close-probability sign (a real sign that is nonetheless a "semantically odd" completion to the sentence; e.g. LEMON), a pseudo-sign (phonologically legal but non-lexical form), or a non-linguistic grooming gesture (e.g. the performer scratching her face). We found significant N400-like responses in the incongruent and pseudo-sign contexts, while the gestures elicited a large positivity.

  14. Translation of the Multidimensional Health Locus of Control Scales for Users of American Sign Language

    OpenAIRE

    Samady, Waheeda; Sadler, Georgia Robins; Nakaji, Melanie; Malcarne, Vanessa L.; Trybus, Raymond; Athale, Ninad

    2008-01-01

    This paper describes the translation of the Multidimensional Health Locus of Control (MHLC) scales into American Sign Language (ASL). Translation is an essential first step toward validating the instrument for use in the Deaf community, a commonly overlooked minority community. This translated MHLC/ASL can be utilized by public health nurses researching the Deaf community to create and evaluate targeted health interventions. It can be used in clinical settings to guide the context of the prov...

  15. Sign Language Diglossia.

    Science.gov (United States)

    Stokoe, William C., Jr.

    Charles A. Ferguson's concept of "diglossia" (1959, 1964) is used in analyzing sign language. As in Haitian Creole or Swiss German, "two or more varieties" of sign language are "used by the same speakers under different conditions"--these are here called "High" (H) sign language and "Low" (L) sign language. H sign language is formally taught…

  16. Deaf children's engagement in an educational video in American Sign Language.

    Science.gov (United States)

    Golos, Debbie B

    2010-01-01

    Over time, children's educational television has successfully modified programming to incorporate research-based strategies to facilitate learning and engagement during viewing. However, research has been limited on whether these same strategies would work with preschool deaf children viewing videos in American Sign Language. In a descriptive study, engagement behaviors of 25 preschool deaf children during multiple viewings of an educational video in ASL were examined. The video incorporated research-based interactive strategies to facilitate engagement while targeting vocabulary through ASL, fingerspelling, and English print. Each of 3 viewing sessions was recorded; videos were transcribed and coded for frequency of children's movements, pointing, fingerspelling, and signing. Behaviors were analyzed for frequency within and across multiple viewings and by level of signing skills. It was found that each of these engagement behaviors occurred frequently throughout viewings and increased across multiple viewings regardless of a child's age or signing skills.

  17. Motion-sensitive cortex and motion semantics in American Sign Language.

    Science.gov (United States)

    McCullough, Stephen; Saygin, Ayse Pinar; Korpics, Franco; Emmorey, Karen

    2012-10-15

    Previous research indicates that motion-sensitive brain regions are engaged when comprehending motion semantics expressed by words or sentences. Using fMRI, we investigated whether such neural modulation can occur when the linguistic signal itself is visually dynamic and motion semantics is expressed by movements of the hands. Deaf and hearing users of American Sign Language (ASL) were presented with signed sentences that conveyed motion semantics ("The deer walked along the hillside.") or were static, conveying little or no motion ("The deer slept along the hillside."); sentences were matched for the amount of visual motion. Motion-sensitive visual areas (MT+) were localized individually in each participant. As a control, the Fusiform Face Area (FFA) was also localized for the deaf participants. The whole-brain analysis revealed static (locative) sentences engaged regions in left parietal cortex more than motion sentences, replicating previous results implicating these regions in comprehending spatial language for sign languages. Greater activation was observed in the functionally defined MT+ ROI for motion than static sentences for both deaf and hearing signers. No modulation of neural activity by sentence type was observed in the FFA. Deafness did not affect modulation of MT+ by motion semantics, but hearing signers exhibited stronger neural activity in MT+ for both sentence types, perhaps due to differences in exposure and/or use of ASL. We conclude that top down modulation of motion-sensitive cortex by linguistic semantics is not disrupted by the visual motion that is present in sign language sentences.

  18. American-sign-language statements and delay of gratification in hearing-impaired and nonhandicapped children.

    Science.gov (United States)

    Toner, I J; Ritchie, F K

    1984-04-01

    Hearing-impaired children were individually administered a task in which possession of accumulating candy rewards was made contingent upon the child's decision to stop any further accumulation of the candy. Hearing-impaired children, who under instruction periodically made American Sign Language (ASL) statements about the goodness of the reward, waited significantly longer before terminating the waiting period than did hearing-impaired children instructed to sign statements about the act of waiting and somewhat longer than did hearing-impaired children instructed to sign a neutral statement. Since the pattern of delay was unlike that reported in earlier investigations when nonhandicapped children verbalized similar statements and since variation in mode of communication did not influence delay in nonhandicapped children in the present investigation, the results were interpreted in terms of differences in cognitive controlling mechanisms between nonhandicapped and hearing-impaired children.

  19. To Capture a Face: A Novel Technique for the Analysis and Quantification of Facial Expressions in American Sign Language

    Science.gov (United States)

    Grossman, Ruth B.; Kegl, Judy

    2006-01-01

    American Sign Language uses the face to express vital components of grammar in addition to the more universal expressions of emotion. The study of ASL facial expressions has focused mostly on the perception and categorization of various expression types by signing and nonsigning subjects. Only a few studies of the production of ASL facial…

  20. Where to Look for American Sign Language (ASL) Sublexical Structure in the Visual World: Reply to Salverda (2016)

    Science.gov (United States)

    Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.

    2016-01-01

    In this reply to Salverda (2016), we address a critique of the claims made in our recent study of real-time processing of American Sign Language (ASL) signs using a novel visual world eye-tracking paradigm (Lieberman, Borovsky, Hatrak, & Mayberry, 2015). Salverda asserts that our data do not support our conclusion that native signers and…

  1. Evaluation of surface EMG features for the recognition of American Sign Language gestures.

    Science.gov (United States)

    Kosmidou, Vasiliki E; Hadjileontiadis, Leontios J; Panas, Stavros M

    2006-01-01

    In this work, analysis of the surface electromyogram (sEMG) signal is proposed for the recognition of American sign language (ASL) gestures. To this purpose, sixteen features are extracted from the sEMG signal acquired from the user's forearm, and evaluated by the Mahalanobis distance criterion. Discriminant analysis is used to reduce the number of features used in the classification of the signed ASL gestures. The proposed features are tested against noise resulting in a further reduced set of features, which are evaluated for their discriminant ability. The classification results reveal that 97.7% of the inspected ASL gestures were correctly recognized using sEMG-based features, providing a promising solution to the automatic ASL gesture recognition problem.
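
    As a rough, hedged sketch of the kind of pipeline described above (ranking features by a univariate Mahalanobis-distance criterion and classifying with discriminant analysis), the code below uses random stand-in data; it is not the authors' implementation, and the actual study used sixteen purpose-built sEMG features.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 16))      # stand-in: 200 gesture trials x 16 sEMG features
    y = rng.integers(0, 10, size=200)   # stand-in labels for 10 ASL gestures

    def mahalanobis_separability(X, y, feature):
        """Average 1-D Mahalanobis distance between class means, over all class pairs."""
        classes = np.unique(y)
        dists = []
        for i, a in enumerate(classes):
            for b in classes[i + 1:]:
                xa, xb = X[y == a, feature], X[y == b, feature]
                pooled_var = (xa.var(ddof=1) + xb.var(ddof=1)) / 2.0
                dists.append(abs(xa.mean() - xb.mean()) / np.sqrt(pooled_var))
        return float(np.mean(dists))

    # Rank features by separability, then classify with linear discriminant analysis
    ranking = sorted(range(X.shape[1]),
                     key=lambda f: mahalanobis_separability(X, y, f), reverse=True)
    acc = cross_val_score(LinearDiscriminantAnalysis(), X[:, ranking[:8]], y, cv=5).mean()
    print(f"cross-validated accuracy: {acc:.3f}")
    ```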

  2. American sign language and deaf culture competency of osteopathic medical students.

    Science.gov (United States)

    Lapinski, Jessica; Colonna, Caitlin; Sexton, Patricia; Richard, Mariah

    2015-01-01

    The study examined the effectiveness of a workshop on Deaf culture and basic medical American Sign Language for increasing osteopathic student physicians' confidence and knowledge when interacting with ASL-using patients. Students completed a pretest in which they provided basic demographic information, rated their confidence levels, took a video quiz on basic medical signs, and experienced a practical standardized encounter with a Deaf patient. They then attended a 4-hour workshop and, 2 weeks later, completed a posttest. Thirty-three students completed the pretest; 29 attended the workshop; 26 completed the posttest. Video quiz scores increased significantly from pretest to posttest, as did scores for the standardized patient encounter after completion of the workshop. Students also reported increased levels of confidence in interactions with the Deaf community. The results suggest that a single workshop was effective in increasing both confidence and short-term knowledge in interactions with Deaf patients.

  3. Automatic sign language identification

    OpenAIRE

    Gebre, B.G.; Wittenburg, P.; Heskes, T.

    2013-01-01

    We propose a Random-Forest based sign language identification system. The system uses low-level visual features and is based on the hypothesis that sign languages have varying distributions of phonemes (hand-shapes, locations and movements). We evaluated the system on two sign languages -- British SL and Greek SL, both taken from a publicly available corpus, called Dicta Sign Corpus. Achieved average F1 scores are about 95% - indicating that sign languages can be identified with high accuracy...
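
    A minimal sketch of the identification idea in this record, assuming each video clip has already been summarised as a fixed-length vector of low-level visual features; the feature dimensionality, labels, and random data are placeholders, not the Dicta-Sign features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # one low-level feature vector per video clip
y = rng.integers(0, 2, size=200)      # 0 = British SL, 1 = Greek SL (placeholder labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print("mean F1 across folds:", scores.mean())
```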

  4. Potential Prometheus Effects of Sign Language as Research Language.

    Science.gov (United States)

    Mason, David G.

    1992-01-01

    This article promotes the utilization of Sign Language of the Deaf as a primary and secondary research language. The article discusses English as the traditional research language, the role of sign language in bilingualism, possible uses for American Sign Language (ASL) as a research language, and the availability of ASL-based literature for…

  5. American Sign Language Syntax and Analogical Reasoning Skills Are Influenced by Early Acquisition and Age of Entry to Signing Schools for the Deaf.

    Science.gov (United States)

    Henner, Jon; Caldwell-Harris, Catherine L; Novogrodsky, Rama; Hoffmeister, Robert

    2016-01-01

    Failing to acquire language in early childhood because of language deprivation is a rare and exceptional event, except in one population. Deaf children who grow up without access to indirect language through listening, speech-reading, or sign language experience language deprivation. Studies of Deaf adults have revealed that late acquisition of sign language is associated with lasting deficits. However, much remains unknown about language deprivation in Deaf children, allowing myths and misunderstandings regarding sign language to flourish. To fill this gap, we examined signing ability in a large naturalistic sample of Deaf children attending schools for the Deaf where American Sign Language (ASL) is used by peers and teachers. Ability in ASL was measured using a syntactic judgment test and language-based analogical reasoning test, which are two sub-tests of the ASL Assessment Inventory. The influence of two age-related variables were examined: whether or not ASL was acquired from birth in the home from one or more Deaf parents, and the age of entry to the school for the Deaf. Note that for non-native signers, this latter variable is often the age of first systematic exposure to ASL. Both of these types of age-dependent language experiences influenced subsequent signing ability. Scores on the two tasks declined with increasing age of school entry. The influence of age of starting school was not linear. Test scores were generally lower for Deaf children who entered the school of assessment after the age of 12. The positive influence of signing from birth was found for students at all ages tested (7;6-18;5 years old) and for children of all age-of-entry groupings. Our results reflect a continuum of outcomes which show that experience with language is a continuous variable that is sensitive to maturational age.

  6. American Sign Language Syntax and Analogical Reasoning Skills Are Influenced by Early Acquisition and Age of Entry to Signing Schools for the Deaf

    Science.gov (United States)

    Henner, Jon; Caldwell-Harris, Catherine L.; Novogrodsky, Rama; Hoffmeister, Robert

    2016-01-01

    Failing to acquire language in early childhood because of language deprivation is a rare and exceptional event, except in one population. Deaf children who grow up without access to indirect language through listening, speech-reading, or sign language experience language deprivation. Studies of Deaf adults have revealed that late acquisition of sign language is associated with lasting deficits. However, much remains unknown about language deprivation in Deaf children, allowing myths and misunderstandings regarding sign language to flourish. To fill this gap, we examined signing ability in a large naturalistic sample of Deaf children attending schools for the Deaf where American Sign Language (ASL) is used by peers and teachers. Ability in ASL was measured using a syntactic judgment test and language-based analogical reasoning test, which are two sub-tests of the ASL Assessment Inventory. The influence of two age-related variables were examined: whether or not ASL was acquired from birth in the home from one or more Deaf parents, and the age of entry to the school for the Deaf. Note that for non-native signers, this latter variable is often the age of first systematic exposure to ASL. Both of these types of age-dependent language experiences influenced subsequent signing ability. Scores on the two tasks declined with increasing age of school entry. The influence of age of starting school was not linear. Test scores were generally lower for Deaf children who entered the school of assessment after the age of 12. The positive influence of signing from birth was found for students at all ages tested (7;6–18;5 years old) and for children of all age-of-entry groupings. Our results reflect a continuum of outcomes which show that experience with language is a continuous variable that is sensitive to maturational age. PMID:28082932

  7. Recognition of American Sign Language (ASL) Classifiers in a Planetarium Using a Head-Mounted Display

    Science.gov (United States)

    Hintz, Eric G.; Jones, Michael; Lawler, Jeannette; Bench, Nathan

    2015-01-01

    A traditional accommodation for the deaf or hard-of-hearing in a planetarium show is some type of captioning system or a signer on the floor. Both of these have significant drawbacks given the nature of a planetarium show. Young audience members who are deaf likely don't have the reading skills needed to make a captioning system effective. A signer on the floor requires light which can then splash onto the dome. We have examined the potential of using a Head-Mounted Display (HMD) to provide an American Sign Language (ASL) translation. Our preliminary test used a canned planetarium show with a pre-recorded sound track. Since many astronomical objects don't have official ASL signs, the signer had to use classifiers to describe the different objects. Since these are not official signs, these classifiers provided a way to test to see if students were picking up the information using the HMD. We will present results that demonstrate that the use of HMDs is at least as effective as projecting a signer on the dome. This also showed that the HMD could provide the necessary accommodation for students for whom captioning was ineffective. We will also discuss the current effort to provide a live signer without the light splash effect and our early results on teaching effectiveness with HMDs. This work is partially supported by funding from the National Science Foundation grant IIS-1124548 and the Sorenson Foundation.

  8. ALPHABET RECOGNITION OF AMERICAN SIGN LANGUAGE: A HAND GESTURE RECOGNITION APPROACH USING SIFT ALGORITHM

    Directory of Open Access Journals (Sweden)

    Nachamai. M

    2013-01-01

    Full Text Available This paper is a sincere attempt to recognize English alphabets as part of hand gesture recognition using the SIFT algorithm. The novelty of this approach is that it is space-, size-, illumination-, and rotation-invariant. The approach has evolved to work well with both the standard American Sign Language (ASL) database and a home-made database. The problem of alphabet recognition may seem small, but the intricacies involved in it cannot be solved using a single algorithm. Hand gesture recognition is a complicated task, and a one-stop solution for any recognition process has yet to evolve. This paper approaches the problem in a simple but efficient manner using the basic SIFT algorithm for recognition. The efficacy of the approach is demonstrated by the results obtained, invariably on both datasets.
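
    A hedged sketch of SIFT-based matching with OpenCV: keypoints are matched between a query image and a stored template, and the query letter would be assigned to the template with the most good matches. File paths, the ratio-test threshold, and the nearest-template decision rule are assumptions; the paper's exact pipeline is not reproduced here.

```python
import cv2

def match_score(query_path, template_path, ratio=0.75):
    """Count distinctive SIFT matches between a query image and a template image."""
    sift = cv2.SIFT_create()
    img1 = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    _, des1 = sift.detectAndCompute(img1, None)
    _, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]  # Lowe's ratio test
    return len(good)

# A query letter image would be labelled with the template that maximises match_score.
```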

  9. Alphabet Recognition of American Sign Language : A Hand Gesture Recognition Approach Using Sift Algorithm

    Directory of Open Access Journals (Sweden)

    Nachamai. M

    2013-02-01

    Full Text Available This paper is a sincere attempt to recognize English alphabets as part of hand gesture recognition using the SIFT algorithm. The novelty of this approach is that it is space-, size-, illumination-, and rotation-invariant. The approach has evolved to work well with both the standard American Sign Language (ASL) database and a home-made database. The problem of alphabet recognition may seem small, but the intricacies involved in it cannot be solved using a single algorithm. Hand gesture recognition is a complicated task, and a one-stop solution for any recognition process has yet to evolve. This paper approaches the problem in a simple but efficient manner using the basic SIFT algorithm for recognition. The efficacy of the approach is demonstrated by the results obtained, invariably on both datasets.

  10. Development of American Sign Language Guidelines for K-12 Academic Assessments.

    Science.gov (United States)

    Higgins, Jennifer A; Famularo, Lisa; Cawthon, Stephanie W; Kurz, Christopher A; Reis, Jeanne E; Moers, Lori M

    2016-10-01

    The U.S. federal Every Student Succeeds Act (ESSA) was enacted with goals of closing achievement gaps and providing all students with access to equitable and high-quality instruction. One requirement of ESSA is annual statewide testing of students in grades 3-8 and once in high school. Some students, including many deaf or hard-of-hearing (D/HH) students, are eligible to use test supports, in the form of accommodations and accessibility tools, during state testing. Although technology allows accommodations and accessibility tools to be embedded within a digital assessment system, the success of this approach depends on the ability of test developers to appropriately represent content in accommodated forms. The Guidelines for Accessible Assessment Project (GAAP) sought to develop evidence- and consensus-based guidelines for representing test content in American Sign Language. In this article, we present an overview of GAAP, review of the literature, rationale, qualitative and quantitative research findings, and lessons learned.

  11. Longitudinal Receptive American Sign Language Skills Across a Diverse Deaf Student Body.

    Science.gov (United States)

    Beal-Alvarez, Jennifer S

    2016-04-01

    This article presents results of a longitudinal study of receptive American Sign Language (ASL) skills for a large portion of the student body at a residential school for the deaf across four consecutive years. Scores were analyzed by age, gender, parental hearing status, years attending the residential school, and presence of a disability (i.e., deaf with a disability). Years 1 through 4 included the ASL Receptive Skills Test (ASL-RST); Years 2 through 4 also included the Receptive Test of ASL (RT-ASL). Student performance for both measures positively correlated with age; deaf students with deaf parents scored higher than their same-age peers with hearing parents in some instances but not others; and those with a documented disability tended to score lower than their peers without disabilities. These results provide longitudinal findings across a diverse segment of the deaf/hard of hearing residential school population.

  12. American Sign Language/English bilingual model: a longitudinal study of academic growth.

    Science.gov (United States)

    Lange, Cheryl M; Lane-Outlaw, Susan; Lange, William E; Sherwood, Dyan L

    2013-10-01

    This study examines reading and mathematics academic growth of deaf and hard-of-hearing students instructed through an American Sign Language (ASL)/English bilingual model. The study participants were exposed to the model for a minimum of 4 years. The study participants' academic growth rates were measured using the Northwest Evaluation Association's Measure of Academic Progress assessment and compared with a national-normed group of grade-level peers that consisted primarily of hearing students. The study also compared academic growth for participants by various characteristics such as gender, parents' hearing status, and secondary disability status and examined the academic outcomes for students after a minimum of 4 years of instruction in an ASL/English bilingual model. The findings support the efficacy of the ASL/English bilingual model.

  13. Efficient generation of 3D hologram for American Sign Language using look-up table

    Science.gov (United States)

    Park, Joo-Sup; Kim, Seung-Cheol; Kim, Eun-Soo

    2010-02-01

    American Sign Language (ASL) is one of the languages that most helps hearing-impaired people communicate. Current 2-D broadcasting and 2-D movies already use ASL to convey information, help viewers understand a scene, and translate foreign-language content, and ASL will not disappear from future three-dimensional (3-D) broadcasting or 3-D movies because of this usefulness. Several approaches for generating computer-generated hologram (CGH) patterns have been suggested, such as the ray-tracing method and the look-up table (LUT) method, but these methods either require much computation time or a huge memory for the look-up table. Recently, a novel LUT (N-LUT) method was proposed for fast generation of CGH patterns of 3-D objects with a dramatically reduced LUT and no loss of computational speed. We therefore propose a method to efficiently generate holographic ASL for holographic 3DTV or 3-D movies using the look-up table approach. The proposed method consists largely of five steps: constructing the LUT for each ASL image, extracting characters from the script or scene, retrieving the fringe patterns for those characters from the ASL LUT, composing the hologram pattern for the 3-D video with the hologram pattern for the ASL, and reconstructing the holographic 3-D video with ASL. Simulation results confirmed the feasibility of the proposed method for efficient generation of CGH patterns for ASL.
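
    The sketch below illustrates only the table-caching idea behind such a method: a fringe pattern is pre-computed once per ASL character, stored in a look-up table, and reused when composing the hologram for a caption. The fringe-pattern routine is a placeholder, not an N-LUT implementation, and all names are assumptions.

```python
import numpy as np

def fringe_pattern_for(character, shape=(512, 512)):
    """Placeholder for a CGH routine; a real system would superpose per-point
    fringes from the look-up table for the 3-D ASL model of this character."""
    rng = np.random.default_rng(abs(hash(character)) % (2**32))
    return rng.random(shape)

asl_lut = {}  # character -> pre-computed fringe pattern

def hologram_with_asl(scene_hologram, caption):
    """Add the cached ASL fringe pattern of each caption character to the scene hologram."""
    composite = scene_hologram.copy()
    for ch in caption.upper():
        if ch not in asl_lut:
            asl_lut[ch] = fringe_pattern_for(ch)   # computed once, then reused
        composite = composite + asl_lut[ch]
    return composite

scene = np.zeros((512, 512))
print(hologram_with_asl(scene, "SKY").shape)
```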

  14. The Development of Antonym Knowledge in American Sign Language (ASL) and Its Relationship to Reading Comprehension in English

    Science.gov (United States)

    Novogrodsky, Rama; Caldwell-Harris, Catherine; Fish, Sarah; Hoffmeister, Robert J.

    2014-01-01

    It is unknown if the developmental path of antonym knowledge in deaf children increases continuously with age and correlates with reading comprehension, as it does in hearing children. In the current study we tested 564 students aged 4-18 on a receptive multiple-choice American Sign Language (ASL) antonym test. A subgroup of 138 students aged 7-18…

  15. Deaf Families with Children Who Have Cochlear Implants: Perspectives and Beliefs on Bilingualism in American Sign Language and English

    Science.gov (United States)

    Mitchiner, Julie Cantrell

    2012-01-01

    This study examines Deaf parents with children who have cochlear implants on their beliefs and perspectives of bilingualism in American Sign Language and English using complementary mixed methods through surveys and follow-up interviews. Seventeen families participated in the survey and eight families continued their participation in semi-formal…

  16. Standardization of Sign Languages

    Science.gov (United States)

    Adam, Robert

    2015-01-01

    Over the years attempts have been made to standardize sign languages. This form of language planning has been tackled by a variety of agents, most notably teachers of Deaf students, social workers, government agencies, and occasionally groups of Deaf people themselves. Their efforts have most often involved the development of sign language books…

  17. British Sign Language.

    Science.gov (United States)

    Kyle, Jim; Woll, Bencie

    1981-01-01

    The author reports on the use of British Sign Language in the United Kingdom and dispels some myths that surround the language. It is pointed out that there is a low level of interest in deaf people and their method of communication. Research needs in the area of sign language systems are briefly considered. (SB)

  18. Reading books with young deaf children: strategies for mediating between American Sign Language and English.

    Science.gov (United States)

    Berke, Michele

    2013-01-01

    Research on shared reading has shown positive results on children's literacy development in general and for deaf children specifically; however, reading techniques might differ between these two populations. Families with deaf children, especially those with deaf parents, often capitalize on their children's visual attributes rather than primarily auditory cues. These techniques are believed to provide a foundation for their deaf children's literacy skills. This study examined 10 deaf mother/deaf child dyads with children between 3 and 5 years of age. Dyads were videotaped in their homes on at least two occasions reading books that were provided by the researcher. Descriptive analysis showed specifically how deaf mothers mediate between the two languages, American Sign Language (ASL) and English, while reading. These techniques can be replicated and taught to all parents of deaf children so that they can engage in more effective shared reading activities. Research has shown that shared reading, or the interaction of a parent and child with a book, is an effective way to promote language and literacy, vocabulary, grammatical knowledge, and metalinguistic awareness (Snow, 1983), making it critical for educators to promote shared reading activities at home between parent and child. Not all parents read to their children in the same way. For example, parents of deaf children may present the information in the book differently due to the fact that signed languages are visual rather than spoken. In this vein, we can learn more about what specific connections deaf parents make to the English print. Exploring strategies deaf mothers may use to link the English print through the use of ASL will provide educators with additional tools when working with all parents of deaf children. This article will include a review of the literature on the benefits of shared reading activities for all children, the relationship between ASL and English skill development, and the techniques

  19. Robust Real-Time and Rotation-Invariant American Sign Language Alphabet Recognition Using Range Camera

    Science.gov (United States)

    Lahamy, H.; Lichti, D.

    2012-07-01

    The automatic interpretation of human gestures can be used for a natural interaction with computers without the use of mechanical devices such as keyboards and mice. The recognition of hand postures has been studied for many years. However, most of the literature in this area has considered 2D images, which cannot provide a full description of the hand gestures. In addition, a rotation-invariant identification remains an unsolved problem even with the use of 2D images. The objective of the current study is to design a rotation-invariant recognition process while using a 3D signature for classifying hand postures. A heuristic, voxel-based signature has been designed and implemented. The tracking of the hand motion is achieved with the Kalman filter. A unique training image per posture is used in the supervised classification. The designed recognition process and the tracking procedure have been successfully evaluated. This study has demonstrated the efficiency of the proposed rotation-invariant 3D hand posture signature, which leads to a 98.24% recognition rate after testing 12,723 samples of 12 gestures taken from the alphabet of American Sign Language.
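
    As a simplified illustration of a voxel-based 3-D signature with one training sample per posture, the sketch below bins a hand point cloud into a coarse occupancy grid and classifies by nearest template. The grid size, distance metric, and synthetic clouds are assumptions, and the rotation-invariance of the published signature is not reproduced here.

```python
import numpy as np

def voxel_signature(points, grid=8):
    """Normalise a 3-D point cloud to the unit cube and count points per voxel."""
    span = points.max(axis=0) - points.min(axis=0) + 1e-9
    p = (points - points.min(axis=0)) / span
    idx = np.minimum((p * grid).astype(int), grid - 1)
    sig = np.zeros((grid, grid, grid))
    for i, j, k in idx:
        sig[i, j, k] += 1
    return (sig / len(points)).ravel()

def classify(points, templates):
    """templates: dict mapping posture label -> signature of its single training sample."""
    sig = voxel_signature(points)
    return min(templates, key=lambda label: np.linalg.norm(sig - templates[label]))

rng = np.random.default_rng(1)
templates = {"flat": voxel_signature(rng.normal(size=(500, 3)) * [1.0, 1.0, 0.2]),
             "round": voxel_signature(rng.normal(size=(500, 3)))}
query = rng.normal(size=(500, 3)) * [1.0, 1.0, 0.2]
print(classify(query, templates))   # expected to print "flat"
```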

  20. Quality versus intelligibility: studying human preferences for American Sign Language video

    Science.gov (United States)

    Ciaramello, Frank M.; Hemami, Sheila S.

    2011-03-01

    Real-time videoconferencing using cellular devices provides natural communication to the Deaf community. For this application, compressed American Sign Language (ASL) video must be evaluated in terms of the intelligibility of the conversation and not in terms of the overall aesthetic quality of the video. This work presents a paired comparison experiment to determine the subjective preferences of ASL users in terms of the trade-off between intelligibility and quality when varying the proportion of the bitrate allocated explicitly to the regions of the video containing the signer. A rate-distortion optimization technique, which jointly optimizes a quality criteria and an intelligibility criteria according to a user-specified parameter, generates test video pairs for the subjective experiment. Experimental results suggest that at sufficiently high bitrates, all users prefer videos in which the non-signer regions in the video are encoded with some nominal rate. As the total encoding bitrate decreases, users generally prefer video in which a greater proportion of the rate is allocated to the signer. The specific operating points preferred in the quality-intelligibility trade-off vary with the demographics of the users.
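
    A hedged sketch of the kind of user-weighted trade-off this record describes: a single parameter blends a quality distortion term and an intelligibility distortion term when choosing how much of the bitrate to allocate to the signer region. All names and candidate values are illustrative, not the authors' optimisation.

```python
def combined_cost(d_quality, d_intelligibility, alpha):
    """alpha = 0 optimises quality only; alpha = 1 optimises intelligibility only."""
    return (1.0 - alpha) * d_quality + alpha * d_intelligibility

def best_allocation(candidates, alpha):
    """candidates: list of (signer_rate_fraction, d_quality, d_intelligibility) tuples."""
    return min(candidates, key=lambda c: combined_cost(c[1], c[2], alpha))

# Toy candidate allocations at a fixed total bitrate.
candidates = [(0.5, 2.0, 9.0), (0.7, 4.0, 5.0), (0.9, 8.0, 2.0)]
for alpha in (0.2, 0.5, 0.8):
    frac, _, _ = best_allocation(candidates, alpha)
    print(f"alpha={alpha}: give {frac:.0%} of the rate to the signer region")
```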

  1. Introducing Sign Language Systems to Parents of Young Deaf Children.

    Science.gov (United States)

    Moser, Barbara Walsh

    1987-01-01

    The three major sign language systems (American Sign Language, Pidgin Sign English, and Manual English) are compared in table form. A brief description of each language highlights salient points that parents of deaf children need to understand. (DB)

  2. Sign Language Tutoring Tool

    CERN Document Server

    Aran, Oya; Benoit, Alexandre; Carrillo, Ana Huerta; Fanard, François-Xavier; Campr, Pavel; Akarun, Lale; Caplier, Alice; Rombaut, Michele; Sankur, Bulent

    2008-01-01

    In this project, we have developed a sign language tutor that lets users learn isolated signs by watching recorded videos and by trying the same signs. The system records the user's video and analyses it. If the sign is recognized, both verbal and animated feedback is given to the user. The system is able to recognize complex signs that involve both hand gestures and head movements and expressions. Our performance tests yield a 99% recognition rate on signs involving only manual gestures and an 85% recognition rate on signs that involve both manual and non-manual components, such as head movement and facial expressions.

  3. The neural correlates of spatial language in English and American Sign Language: a PET study with hearing bilinguals.

    Science.gov (United States)

    Emmorey, Karen; Grabowski, Thomas; McCullough, Stephen; Ponto, Laura L B; Hichwa, Richard D; Damasio, Hanna

    2005-02-01

    Rather than specifying spatial relations with a closed-class set of prepositions, American Sign Language (ASL) encodes spatial relations using space itself via classifier constructions. In these constructions, handshape morphemes specify object type, and the position of the hands in signing space schematically represents the spatial relation between objects. A [15O]water PET study was conducted to investigate the neural regions engaged during the production of English prepositions and ASL locative classifier constructions in hearing subjects with deaf parents (ASL-English bilinguals). Ten subjects viewed line drawings depicting a spatial relation between two objects and were asked to produce either an ASL locative classifier construction or an English preposition that described the spatial relation. The comparison task was to name the figure object (colored red) in either ASL or in English. Describing spatial relations in either ASL or English engaged parietal cortex bilaterally. However, an interaction analysis revealed that right superior parietal cortex was engaged to a greater extent for ASL than for English. We propose that right parietal cortex is involved in the visual-motoric transformation required for ASL. The production of both English prepositions and ASL nouns engaged Broca's area to a greater extent than ASL classifier constructions. We suggest that Broca's area is not engaged because these constructions do not involve retrieval of the name of an object or the name of a spatial relation. Finally, under the same task conditions, only left parietal activation was observed for monolingual English speakers producing spatial prepositions (H. Damasio et al., 2001, NeuroImage, 13). We conclude that the right hemisphere activation observed for ASL-English bilinguals was due to their life-long experience with spatial language in ASL.

  4. Discriminant features and temporal structure of nonmanuals in American Sign Language.

    Science.gov (United States)

    Benitez-Quiroz, C Fabian; Gökgöz, Kadir; Wilbur, Ronnie B; Martinez, Aleix M

    2014-01-01

    To fully define the grammar of American Sign Language (ASL), a linguistic model of its nonmanuals needs to be constructed. While significant progress has been made to understand the features defining ASL manuals, after years of research, much still needs to be done to uncover the discriminant nonmanual components. The major barrier to achieving this goal is the difficulty in correlating facial features and linguistic features, especially since these correlations may be temporally defined. For example, a facial feature (e.g., head moves down) occurring at the end of the movement of another facial feature (e.g., brows moves up), may specify a Hypothetical conditional, but only if this time relationship is maintained. In other instances, the single occurrence of a movement (e.g., brows move up) can be indicative of the same grammatical construction. In the present paper, we introduce a linguistic-computational approach to efficiently carry out this analysis. First, a linguistic model of the face is used to manually annotate a very large set of 2,347 videos of ASL nonmanuals (including tens of thousands of frames). Second, a computational approach is used to determine which features of the linguistic model are more informative of the grammatical rules under study. We used the proposed approach to study five types of sentences--Hypothetical conditionals, Yes/no questions, Wh-questions, Wh-questions postposed, and Assertions--plus their polarities--positive and negative. Our results verify several components of the standard model of ASL nonmanuals and, most importantly, identify several previously unreported features and their temporal relationship. Notably, our results uncovered a complex interaction between head position and mouth shape. These findings define some temporal structures of ASL nonmanuals not previously detected by other approaches.
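
    As a stand-in for the feature-informativeness step described in this record, the sketch below ranks per-clip nonmanual annotations by mutual information with sentence-type labels. The feature names, label set, and random data are invented; the paper's actual discriminant analysis is not reproduced.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

feature_names = ["brows_up", "brows_down", "head_forward", "head_down", "mouth_open"]
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, len(feature_names)))   # binary annotations per clip
y = rng.integers(0, 5, size=500)                         # sentence-type label per clip

scores = mutual_info_classif(X, y, discrete_features=True, random_state=0)
for name, score in sorted(zip(feature_names, scores), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```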

  5. Designing an American Sign Language Avatar for Learning Computer Science Concepts for Deaf or Hard-of-Hearing Students and Deaf Interpreters

    Science.gov (United States)

    Andrei, Stefan; Osborne, Lawrence; Smith, Zanthia

    2013-01-01

    The current learning process of Deaf or Hard of Hearing (D/HH) students taking Science, Technology, Engineering, and Mathematics (STEM) courses needs, in general, a sign interpreter for the translation of English text into American Sign Language (ASL) signs. This method is at best impractical due to the lack of availability of a specialized sign…

  6. Sign Language Advantage.

    Science.gov (United States)

    Daniels, Marilyn

    2001-01-01

    Describes Sign in Education, a pilot program in the United Kingdom that integrated Deaf children and hearing children in a hearing classroom with a culturally Deaf teacher who taught the national curriculum in British Sign Language one afternoon a week. Explores the advantage to the Deaf community, as well as the majority culture of adopting such…

  7. Bilingualism and attention: a study of balanced and unbalanced bilingual deaf users of American Sign Language and English.

    Science.gov (United States)

    Kushalnagar, Poorna; Hannay, H Julia; Hernandez, Arturo E

    2010-01-01

    Early deafness is thought to affect low-level sensorimotor processing such as selective attention, whereas bilingualism is thought to be strongly associated with higher order cognitive processing such as attention switching under cognitive load. This study explores the effects of bimodal-bilingualism (in American Sign Language and written English) on attention switching, in order to contrast the roles of bilingual proficiency and age of acquisition in relation to cognitive flexibility among deaf adults. Results indicated a strong high-proficiency bilingual advantage in the higher order attention task. The level of proficiency in 2 languages appears to be the driving force for cognitive flexibility. However, additional data are needed to reach conclusive interpretation for the influence of age of second language acquisition on higher order attention-switching ability and associated cognitive flexibility.

  8. Sign Language Web Pages

    Science.gov (United States)

    Fels, Deborah I.; Richards, Jan; Hardman, Jim; Lee, Daniel G.

    2006-01-01

    The World Wide Web has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, Web accessing can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The…

  9. Flemish Sign Language Standardisation

    Science.gov (United States)

    Van Herreweghe, Mieke; Vermeerbergen, Myriam

    2009-01-01

    In 1997, the Flemish Deaf community officially rejected standardisation of Flemish Sign Language. It was a bold choice, which at the time was not in line with some of the decisions taken in the neighbouring countries. In this article, we shall discuss the choices the Flemish Deaf community has made in this respect and explore why the Flemish Deaf…

  10. Creating Learning Objects to Enhance the Educational Experiences of American Sign Language Learners: An Instructional Development Report

    Directory of Open Access Journals (Sweden)

    Simone Conceição

    2002-10-01

    Full Text Available Little attention has been given to involving the deaf community in distance teaching and learning or in designing courses that relate to their language and culture. This article reports on the design and development of video-based learning objects created to enhance the educational experiences of American Sign Language (ASL) hearing participants in a distance learning course and, following the course, the creation of several new applications for use of the learning objects. The learning objects were initially created for the web, as a course component for review and rehearsal. The value of the web application, as reported by course participants, led us to consider ways in which the learning objects could be used in a variety of delivery formats: CD-ROM, web-based knowledge repository, and handheld device. The process to create the learning objects, the new applications, and lessons learned are described.

  11. Preservice Teacher and Interpreter American Sign Language Abilities: Self-Evaluations and Evaluations of Deaf Students' Narrative Renditions.

    Science.gov (United States)

    Beal-Alvarez, Jennifer S; Scheetz, Nanci A

    2015-01-01

    In deaf education, the sign language skills of teacher and interpreter candidates are infrequently assessed; when they are, formal measures are commonly used upon preparation program completion, as opposed to informal measures related to instructional tasks. Using an informal picture storybook task, the authors investigated the receptive and expressive narrative sign language skills of 10 teacher and interpreter candidates in a university preparation program. The candidates evaluated signed renditions of two signing children, as well as their own expressive renditions, using the Signed Reading Fluency Rubric (Easterbrooks & Huston, 2008) at the completion of their fifth sign language course. Candidates' evaluations were compared overall and across 12 sign language indicators to ratings of two university program professors. Some variation existed across ratings for individual indicators, but generally the candidates were aware of and could accurately rate their own abilities and those of two signing children.

  12. Sign language typology: The contribution of rural sign languages

    NARCIS (Netherlands)

    C. de Vos; R. Pfau

    2014-01-01

    Since the 1990s, the field of sign language typology has shown that sign languages exhibit typological variation at all relevant levels of linguistic description. These initial typological comparisons were heavily skewed toward the urban sign languages of developed countries, mostly in the Western world.

  13. Sign Language Comprehension: The Case of Spanish Sign Language

    Science.gov (United States)

    Rodriguez Ortiz, I. R.

    2008-01-01

    This study aims to answer the question, how much of Spanish Sign Language interpreting deaf individuals really understand. Study sampling included 36 deaf people (deafness ranging from severe to profound; variety depending on the age at which they learned sign language) and 36 hearing people who had good knowledge of sign language (most were…

  14. Issues in Sign Language Lexicography

    DEFF Research Database (Denmark)

    Zwitserlood, Inge; Kristoffersen, Jette Hedegaard; Troelsgård, Thomas

    2013-01-01

    Sign language lexicography has thus far been a relatively obscure area in the world of lexicography. Therefore, this article contains background information on signed languages and the communities in which they are used, on the lexicography of sign languages, and on the situation in the Netherlands, as well as a review of a sign language dictionary that has recently been published in the Netherlands.

  15. Quantifying the effect of disruptions to temporal coherence on the intelligibility of compressed American Sign Language video

    Science.gov (United States)

    Ciaramello, Frank M.; Hemami, Sheila S.

    2009-02-01

    Communication of American Sign Language (ASL) over mobile phones would be very beneficial to the Deaf community. ASL video encoded to achieve the rates provided by current cellular networks must be heavily compressed, and appropriate assessment techniques are required to analyze the intelligibility of the compressed video. As an extension to a purely spatial measure of intelligibility, this paper quantifies the effect of temporal compression artifacts on sign language intelligibility. These artifacts can be the result of motion-compensation errors that distract the observer or frame rate reductions. They reduce the perception of smooth motion and disrupt the temporal coherence of the video. Motion-compensation errors that affect temporal coherence are identified by measuring the block-level correlation between co-located macroblocks in adjacent frames. The impact of frame rate reductions was quantified through experimental testing. A subjective study was performed in which fluent ASL participants rated the intelligibility of sequences encoded at a range of 5 different frame rates and with 3 different levels of distortion. The subjective data is used to parameterize an objective intelligibility measure which is highly correlated with subjective ratings at multiple frame rates.
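
    A minimal sketch of the block-level temporal-coherence measure described here: each 16×16 macroblock is correlated with the co-located block in the previous frame, and low correlations flag blocks whose motion-compensation errors may disrupt temporal coherence. The block size and the handling of flat blocks are assumptions.

```python
import numpy as np

def block_correlations(prev_frame, curr_frame, block=16):
    """Correlation between each co-located macroblock pair in two grayscale frames."""
    h, w = curr_frame.shape
    corrs = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            a = prev_frame[y:y + block, x:x + block].ravel().astype(float)
            b = curr_frame[y:y + block, x:x + block].ravel().astype(float)
            if a.std() == 0 or b.std() == 0:
                corrs.append(1.0)                      # treat flat blocks as coherent
            else:
                corrs.append(float(np.corrcoef(a, b)[0, 1]))
    return np.array(corrs)

rng = np.random.default_rng(0)
f0 = rng.integers(0, 256, size=(64, 64))
f1 = np.clip(f0 + rng.integers(-5, 6, size=(64, 64)), 0, 255)   # mild frame-to-frame change
print("fraction of low-coherence blocks:", np.mean(block_correlations(f0, f1) < 0.5))
```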

  16. American Sign Language and Academic English: Factors Influencing the Reading of Bilingual Secondary School Deaf and Hard of Hearing Students.

    Science.gov (United States)

    Scott, Jessica A; Hoffmeister, Robert J

    2017-01-01

    For many years, researchers have sought to understand the reading development of deaf and hard of hearing (DHH) students. Guided by prior research on DHH and hearing students, in this study we investigate the hypothesis that for secondary school DHH students enrolled in American Sign Language (ASL)/English bilingual schools for the deaf, academic English proficiency would be a significant predictor of reading comprehension alongside ASL proficiency. Using linear regression, we found statistically significant interaction effects between academic English knowledge and word reading fluency in predicting the reading comprehension scores of the participants. However, ASL remained the strongest and most consistent predictor of reading comprehension within the sample. Findings support a model in which socio-demographic factors, ASL proficiency, and word reading fluency are primary predictors of reading comprehension for secondary DHH students.
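
    For readers unfamiliar with interaction terms, the sketch below fits a regression of the general form reported here using statsmodels formulas. The column names and toy data frame are assumptions, not the study's dataset or exact model.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "reading_comprehension": [55, 62, 48, 70, 66, 59, 73, 51, 64, 58],
    "academic_english":      [10, 14,  8, 18, 15, 11, 19,  9, 16, 12],
    "word_reading_fluency":  [30, 42, 25, 55, 47, 33, 60, 28, 50, 36],
    "asl_proficiency":       [12, 15, 10, 19, 17, 13, 20, 11, 18, 14],
})

# "*" expands to both main effects plus their interaction term.
model = smf.ols(
    "reading_comprehension ~ asl_proficiency + academic_english * word_reading_fluency",
    data=df,
).fit()
print(model.params)   # includes an academic_english:word_reading_fluency coefficient
```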

  17. Gaze patterns during identity and emotion judgments in hearing adults and deaf users of American Sign Language.

    Science.gov (United States)

    Letourneau, Susan M; Mitchell, Teresa V

    2011-01-01

    Deaf individuals rely on facial expressions for emotional, social, and linguistic cues. In order to test the hypothesis that specialized experience with faces can alter typically observed gaze patterns, twelve hearing adults and twelve deaf early users of American Sign Language judged the emotion and identity of expressive faces (including whole faces and isolated top and bottom halves), while accuracy and fixations were recorded. Both groups recognized individuals more accurately from top than bottom halves, and emotional expressions from bottom than top halves. Hearing adults directed the majority of fixations to the top halves of faces in both tasks, but fixated the bottom half slightly more often when judging emotion than identity. In contrast, deaf adults often split fixations evenly between the top and bottom halves regardless of task demands. These results suggest that deaf adults have habitual fixation patterns that may maximize their ability to gather information from expressive faces.

  18. Using a Signed Language as a Second Language for Kindergarten Students.

    Science.gov (United States)

    Daniels, Marilyn

    2003-01-01

    Summarizes research demonstrating advantages of using British Sign Language, Italian Sign Language, and American Sign Language (ASL) as a second language with young children. Reports a qualitative study to determine whether American kindergartners can achieve bilingual ability in English and ASL in one academic year through exposure to a native…

  19. Planning Sign Languages: Promoting Hearing Hegemony? Conceptualizing Sign Language Standardization

    Science.gov (United States)

    Eichmann, Hanna

    2009-01-01

    In light of the absence of a codified standard variety in British Sign Language and German Sign Language ("Deutsche Gebardensprache") there have been repeated calls for the standardization of both languages primarily from outside the Deaf community. The paper is based on a recent grounded theory study which explored perspectives on sign…

  20. Mobile Sign Language Learning Outside the Classroom

    Science.gov (United States)

    Weaver, Kimberly A.; Starner, Thad

    2012-01-01

    The majority of deaf children in the United States are born to hearing parents with limited prior exposure to American Sign Language (ASL). Our research involves creating and validating a mobile language tool called SMARTSign. The goal is to help hearing parents learn ASL in a way that fits seamlessly into their daily routine. (Contains 3 figures.)

  1. Kinship in Mongolian Sign Language

    Science.gov (United States)

    Geer, Leah

    2011-01-01

    Information and research on Mongolian Sign Language is scant. To date, only one dictionary is available in the United States (Badnaa and Boll 1995), and even that dictionary presents only a subset of the signs employed in Mongolia. The present study describes the kinship system used in Mongolian Sign Language (MSL) based on data elicited from…

  2. Sign Languages of the World

    DEFF Research Database (Denmark)

    This handbook provides information on some 38 sign languages, including basic facts about each of the languages, structural aspects, history and culture of the Deaf communities, and history of research. The papers are all original, and each has been specifically written for the volume by an expert or team of experts in the particular sign language, at the invitation of the editors. Thirty-eight different deaf sign languages and alternate sign languages from every continent are represented, and over seventy international deaf and hearing scholars have contributed to the volume.

  3. SIGN LANGUAGE RECOGNITION USING THINNING ALGORITHM

    Directory of Open Access Journals (Sweden)

    S. N. Omkar

    2011-08-01

    Full Text Available In recent years, many approaches have been made that use computer vision algorithms to interpret sign language. This endeavour is yet another approach to accomplishing the interpretation of human hand gestures. The first step of this work is background subtraction, which is achieved by the Euclidean distance threshold method. A thinning algorithm is then applied to obtain a thinned image of the human hand for further analysis. The different feature points, which include terminating points and curved edges, are extracted for the recognition of the different signs. The input for the project is taken from video data of a human hand gesturing all the signs of American Sign Language.
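
    A hedged sketch of the first two steps of the pipeline: background subtraction by a per-pixel Euclidean distance threshold, followed by skeletonisation ("thinning") of the hand mask. scikit-image is used here for brevity, and the threshold value is an assumption; the paper's own thinning algorithm and feature-point extraction are not reproduced.

```python
import numpy as np
from skimage.morphology import skeletonize

def hand_skeleton(frame_rgb, background_rgb, threshold=40.0):
    """Return a one-pixel-wide thinned image of the hand region."""
    diff = frame_rgb.astype(float) - background_rgb.astype(float)
    dist = np.linalg.norm(diff, axis=2)     # per-pixel Euclidean distance to background
    mask = dist > threshold                 # foreground (hand) mask
    return skeletonize(mask)                # thinning step

rng = np.random.default_rng(0)
background = rng.integers(0, 30, size=(120, 160, 3))
frame = background.copy()
frame[40:80, 60:100] = 220                  # a bright synthetic "hand" blob
print(hand_skeleton(frame, background).sum(), "skeleton pixels")
```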

  4. Sign language perception research for improving automatic sign language recognition

    Science.gov (United States)

    ten Holt, Gineke A.; Arendsen, Jeroen; de Ridder, Huib; Koenderink-van Doorn, Andrea J.; Reinders, Marcel J. T.; Hendriks, Emile A.

    2009-02-01

    Current automatic sign language recognition (ASLR) seldom uses perceptual knowledge about the recognition of sign language. Using such knowledge can improve ASLR because it can give an indication which elements or phases of a sign are important for its meaning. Also, the current generation of data-driven ASLR methods has shortcomings which may not be solvable without the use of knowledge on human sign language processing. Handling variation in the precise execution of signs is an example of such shortcomings: data-driven methods (which include almost all current methods) have difficulty recognizing signs that deviate too much from the examples that were used to train the method. Insight into human sign processing is needed to solve these problems. Perceptual research on sign language can provide such insights. This paper discusses knowledge derived from a set of sign perception experiments, and the application of such knowledge in ASLR. Among the findings are the facts that not all phases and elements of a sign are equally informative, that defining the 'correct' form for a sign is not trivial, and that statistical ASLR methods do not necessarily arrive at sign representations that resemble those of human beings. Apparently, current ASLR methods are quite different from human observers: their method of learning gives them different sign definitions, they regard each moment and element of a sign as equally important and they employ a single definition of 'correct' for all circumstances. If the object is for an ASLR method to handle natural sign language, then the insights from sign perception research must be integrated into ASLR.

  5. The Danish Sign Language Dictionary

    DEFF Research Database (Denmark)

    Kristoffersen, Jette Hedegaard; Troelsgård, Thomas

    2010-01-01

    The entries of The Danish Sign Language Dictionary have four sections. • Entry header: the sign headword is shown as a photo and a gloss, and the first occurring location and handshape of the sign are shown as icons. • Video window: by default the base form of the sign headword … forms of the sign (only for classifier entries); in addition, frequent co-occurrences with the sign are shown in this section. The signs in The Danish Sign Language Dictionary can be looked up through: • Handshape: particular handshapes for the active and the passive hand can be specified; there are 65 searchable handshapes. • Location: location is chosen from a page with 15 location icons, representing locations on or near the body. • Text: text searches are performed on Danish equivalents, sign glosses and example sentences (both transcriptions and translations). This enables users…

  6. Towards real-time and rotation-invariant American Sign Language alphabet recognition using a range camera.

    Science.gov (United States)

    Lahamy, Hervé; Lichti, Derek D

    2012-10-29

    The automatic interpretation of human gestures can be used for a natural interaction with computers while getting rid of mechanical devices such as keyboards and mice. In order to achieve this objective, the recognition of hand postures has been studied for many years. However, most of the literature in this area has considered 2D images which cannot provide a full description of the hand gestures. In addition, a rotation-invariant identification remains an unsolved problem, even with the use of 2D images. The objective of the current study was to design a rotation-invariant recognition process while using a 3D signature for classifying hand postures. A heuristic and voxel-based signature has been designed and implemented. The tracking of the hand motion is achieved with the Kalman filter. A unique training image per posture is used in the supervised classification. The designed recognition process, the tracking procedure and the segmentation algorithm have been successfully evaluated. This study has demonstrated the efficiency of the proposed rotation invariant 3D hand posture signature which leads to 93.88% recognition rate after testing 14,732 samples of 12 postures taken from the alphabet of the American Sign Language.

  7. Towards Real-Time and Rotation-Invariant American Sign Language Alphabet Recognition Using a Range Camera

    Directory of Open Access Journals (Sweden)

    Derek D. Lichti

    2012-10-01

    Full Text Available The automatic interpretation of human gestures can be used for a natural interaction with computers while getting rid of mechanical devices such as keyboards and mice. In order to achieve this objective, the recognition of hand postures has been studied for many years. However, most of the literature in this area has considered 2D images which cannot provide a full description of the hand gestures. In addition, a rotation-invariant identification remains an unsolved problem, even with the use of 2D images. The objective of the current study was to design a rotation-invariant recognition process while using a 3D signature for classifying hand postures. A heuristic and voxel-based signature has been designed and implemented. The tracking of the hand motion is achieved with the Kalman filter. A unique training image per posture is used in the supervised classification. The designed recognition process, the tracking procedure and the segmentation algorithm have been successfully evaluated. This study has demonstrated the efficiency of the proposed rotation invariant 3D hand posture signature which leads to 93.88% recognition rate after testing 14,732 samples of 12 postures taken from the alphabet of the American Sign Language.

  8. A Re-examination of Sign Language Diglossia.

    Science.gov (United States)

    American Annals of the Deaf, 1983

    1983-01-01

    An examination of C. Ferguson's characteristics of diglossia (function, prestige, literary heritage, acquisition, standardization, stability, grammar, lexicon, and phonology) questions the assertion that American Sign Language is inferior to signed English. (CL)

  9. Signs of Change: Contemporary Attitudes to Australian Sign Language

    Science.gov (United States)

    Slegers, Claudia

    2010-01-01

    This study explores contemporary attitudes to Australian Sign Language (Auslan). Since at least the 1960s, sign languages have been accepted by linguists as natural languages with all of the key ingredients common to spoken languages. However, these visual-spatial languages have historically been subject to ignorance and myth in Australia and…

  10. Compiling a Sign Language Dictionary

    DEFF Research Database (Denmark)

    Kristoffersen, Jette Hedegaard; Troelsgård, Thomas

    2010-01-01

    As we began working on the Danish Sign Language (DTS) Dictionary, we soon realised the truth in the statement that a lexicographer has to deal with problems within almost any linguistic discipline. Most of these problems come down to establishing simple rules, rules that can easily be applied every time you encounter a specific problem while describing a sign, and that enable the lexicographer to consistently answer questions like "what is the base form of this sign?", "how many meanings does this sign have?", "are these two forms two different meanings of the same polysemous sign – or are they homonyms?" and so on. Very often such questions demand further research and can't be answered sufficiently through a simple standard formula. Therefore, lexicographic work often seems like an endless series of compromises. Another source of compromise arises when you set out to decide which information...

  11. Lexical Frequency in Sign Languages

    Science.gov (United States)

    Johnston, Trevor

    2012-01-01

    Measures of lexical frequency presuppose the existence of corpora, but true machine-readable corpora of sign languages (SLs) are only now being created. Lexical frequency ratings for SLs are needed because there has been a heavy reliance on the interpretation of results of psycholinguistic and neurolinguistic experiments in the SL research…

  12. Numeral Incorporation in Japanese Sign Language

    Science.gov (United States)

    Ktejik, Mish

    2013-01-01

    This article explores the morphological process of numeral incorporation in Japanese Sign Language. Numeral incorporation is defined and the available research on numeral incorporation in signed language is discussed. The numeral signs in Japanese Sign Language are then introduced and followed by an explanation of the numeral morphemes which are…

  13. The Legal Recognition of Sign Languages

    Science.gov (United States)

    De Meulder, Maartje

    2015-01-01

    This article provides an analytical overview of the different types of explicit legal recognition of sign languages. Five categories are distinguished: constitutional recognition, recognition by means of general language legislation, recognition by means of a sign language law or act, recognition by means of a sign language law or act including…

  14. Sign language aphasia from a neurodegenerative disease.

    Science.gov (United States)

    Falchook, Adam D; Mayberry, Rachel I; Poizner, Howard; Burtis, David Brandon; Doty, Leilani; Heilman, Kenneth M

    2013-01-01

    While Alois Alzheimer recognized the effects of the disease he described on speech and language in his original description of the disease in 1907, the effects of Alzheimer's disease (AD) on language in deaf signers have not previously been reported. We evaluated a 55-year-old right-handed congenitally deaf woman with a 2-year history of progressive memory loss and a deterioration of her ability to communicate in American Sign Language, which she learned at the age of eight. Examination revealed that she had impaired episodic memory as well as marked impairments in the production and comprehension of fingerspelling and grammatically complex sentences. She also had signs of anomia as well as an ideomotor apraxia and visual-spatial dysfunction. This report illustrates the challenges in evaluating a patient for the presence of degenerative dementia when the person is deaf from birth, uses sign language, and has a late age of primary language acquisition. Although our patient could neither speak nor hear, in many respects her cognitive disorders mirror those of patients with AD who had normally learned to speak.

  15. Intimate partner violence reported by two samples of deaf adults via a computerized American sign language survey.

    Science.gov (United States)

    Pollard, Robert Q; Sutter, Erika; Cerulli, Catherine

    2014-03-01

    A computerized sign language survey was administered to two large samples of deaf adults. Six questions regarding intimate partner violence (IPV) were included, querying lifetime and past-year experiences of emotional abuse, physical abuse, and forced sex. Comparison data were available from a telephone survey of local households. Deaf respondents reported high rates of emotional abuse and much higher rates of forced sex than general population respondents. Physical abuse rates were comparable between groups. More men than women in both deaf samples reported past-year physical and sexual abuse. Past-year IPV was associated with higher utilization of hospital emergency services. Implications for IPV research, education, and intervention in the Deaf community are discussed.

  16. On the System of Person-Denoting Signs in Estonian Sign Language: Estonian Name Signs

    Science.gov (United States)

    Paales, Liina

    2010-01-01

    This article discusses Estonian personal name signs. According to the study, there are four personal name sign categories in Estonian Sign Language: (1) arbitrary name signs; (2) descriptive name signs; (3) initialized-descriptive name signs; (4) loan/borrowed name signs. Descriptive and borrowed personal name signs are the most common among…

  17. Dictionaries of African Sign Languages: An Overview

    Science.gov (United States)

    Schmaling, Constanze H.

    2012-01-01

    This article gives an overview of dictionaries of African sign languages that have been published to date most of which have not been widely distributed. After an introduction into the field of sign language lexicography and a discussion of some of the obstacles that authors of sign language dictionaries face in general, I will show problems…

  18. Sign Language Planning: Pragmatism, Pessimism and Principles

    Science.gov (United States)

    Turner, Graham H.

    2009-01-01

    This article introduces the present collection of sign language planning studies. Contextualising the analyses against the backdrop of core issues in the theory of language planning and the evolution of applied sign linguistics, it is argued that--while the sociolinguistic circumstances of signed languages worldwide can, in many respects, be…

  19. Eye Gaze in Creative Sign Language

    Science.gov (United States)

    Kaneko, Michiko; Mesch, Johanna

    2013-01-01

    This article discusses the role of eye gaze in creative sign language. Because eye gaze conveys various types of linguistic and poetic information, it is an intrinsic part of sign language linguistics in general and of creative signing in particular. We discuss various functions of eye gaze in poetic signing and propose a classification of gaze…

  20. The Use of Sign Language Pronouns by Native-Signing Children with Autism

    Science.gov (United States)

    Shield, Aaron; Meier, Richard P.; Tager-Flusberg, Helen

    2015-01-01

    We report the first study on pronoun use by an under-studied research population, children with autism spectrum disorder (ASD) exposed to American Sign Language from birth by their deaf parents. Personal pronouns cause difficulties for hearing children with ASD, who sometimes reverse or avoid them. Unlike speech pronouns, sign pronouns are…

  1. Language Policy and Planning: The Case of Italian Sign Language

    Science.gov (United States)

    Geraci, Carlo

    2012-01-01

    Italian Sign Language (LIS) is the name of the language used by the Italian Deaf community. The acronym LIS derives from Lingua italiana dei segni ("Italian language of signs"), although nowadays Italians refer to LIS as Lingua dei segni italiana, reflecting the more appropriate phrasing "Italian sign language." Historically,…

  2. Phonological reduplication in sign language: rules rule

    Directory of Open Access Journals (Sweden)

    Iris eBerent

    2014-06-01

    Full Text Available Productivity—the hallmark of linguistic competence—is typically attributed to algebraic rules that support broad generalizations. Past research on spoken language has documented such generalizations in both adults and infants. But whether algebraic rules form part of the linguistic competence of signers remains unknown. To address this question, here we gauge the generalization afforded by American Sign Language (ASL). As a case study, we examine reduplication (X→XX)—a rule that, inter alia, generates ASL nouns from verbs. If signers encode this rule, then they should freely extend it to novel syllables, including ones with features that are unattested in ASL. And since reduplicated disyllables are preferred in ASL, such a rule should favor novel reduplicated signs. Novel reduplicated signs should thus be preferred to nonreduplicative controls (in rating), and consequently, such stimuli should also be harder to classify as nonsigns (in the lexical decision task). The results of four experiments support this prediction. These findings suggest that the phonological knowledge of signers includes powerful algebraic rules. The convergence between these conclusions and previous evidence for phonological rules in spoken language suggests that the architecture of the phonological mind is partly amodal.
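
    A toy illustration of the algebraic rule X → XX discussed in this record, applied to abstract syllable representations. The feature dictionary is invented; the point is only that the rule copies a whole variable, so it extends to novel syllables, including ones with unattested feature values.

```python
def reduplicate(syllable):
    """Apply the rule X -> XX: the output is two identical copies of the input syllable."""
    return (syllable, syllable)

novel_syllable = {"handshape": "unattested-bent-5", "movement": "arc", "location": "chin"}
print(reduplicate(novel_syllable))   # the rule applies regardless of the feature values
```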

  3. A study of the tactual reception of sign language.

    Science.gov (United States)

    Reed, C M; Delhorne, L A; Durlach, N I; Fischer, S D

    1995-04-01

    One of the natural methods of tactual communication in common use among individuals who are both deaf and blind is the tactual reception of sign language. In this method, the receiver (who is deaf-blind) places a hand (or hands) on the dominant (or both) hand(s) of the signer in order to receive, through the tactual sense, the various formational properties associated with signs. In the study reported here, 10 experienced deaf-blind users of either American Sign Language (ASL) or Pidgin Sign English (PSE) participated in experiments to determine their ability to receive signed materials including isolated signs and sentences. A set of 122 isolated signs was received with an average accuracy of 87% correct. The most frequent type of error made in identifying isolated signs was related to misperception of individual phonological components of signs. For presentation of signed sentences (translations of the English CID sentences into ASL or PSE), the performance of individual subjects ranged from 60-85% correct reception of key signs. Performance on sentences was relatively independent of rate of presentation in signs/sec, which covered a range of roughly 1 to 3 signs/sec. Sentence errors were accounted for primarily by deletions and phonological and semantic/syntactic substitutions. Experimental results are discussed in terms of differences in performance for isolated signs and sentences, differences in error patterns for the ASL and PSE groups, and communication rates relative to visual reception of sign language and other natural methods of tactual communication.

  4. A Sign Language to Text Converter Using Leap Motion

    OpenAIRE

    Fazlur Rahman Khan; Huey Fang Ong; Nurhidayah Bahar

    2016-01-01

    This paper presents a prototype that can convert sign language into text. A Leap Motion controller was utilised as an interface for hand motion tracking without the need to wear any external instruments. Three recognition techniques were employed to measure the performance of the prototype, namely Geometric Template Matching, Artificial Neural Network and Cross Correlation. The 26 letters of the American Sign Language alphabet were chosen for training and testing the proposed prototype. The experim...

  5. Grandfather Moose: Sign Language Nursery Rhymes.

    Science.gov (United States)

    Hamilton, Harley

    1987-01-01

    "Grandfather Moose" rhymes, written to follow the Mother Goose tradition, are short, appealing, easy-to-memorize sign language nursery rhymes which employ visual poetic devices such as similar signs and transitional flow of movement. (CB)

  6. An electronic dictionary of Danish Sign Language

    DEFF Research Database (Denmark)

    Kristoffersen, Jette Hedegaard; Troelsgård, Thomas

    2008-01-01

    Compiling sign language dictionaries has in the last 15 years changed from most often being simply collecting and presenting signs for a given gloss in the surrounding vocal language to being a complicated lexicographic task including all parts of linguistic analysis, i.e. phonology, phonetics, morphology, syntax and semantics. In this presentation we will give a short overview of the Danish Sign Language dictionary project. We will further focus on lemma selection and some of the problems connected with lemmatisation…

  7. Language Impairments in Sign Language: Breakthroughs and Puzzles

    Science.gov (United States)

    Morgan, Gary; Herman, Rosalind; Woll, Bencie

    2007-01-01

    Background: Specific language impairment has previously solely been documented for children acquiring spoken languages, despite informal reports of deaf children with possible sign language disorder. The paper reports the case of a deaf child exposed to British Sign Language (BSL) from birth, who has significant developmental deficits in the…

  8. Sign order in Slovenian Sign Language locative constructions

    Directory of Open Access Journals (Sweden)

    Matic Pavlič

    2016-12-01

    Full Text Available In both sign and spoken languages, locative relations tend to be encoded within constructions that display the non-basic word/sign order. In addition, in such an environment, sign languages habitually use a distinct predicate type – a classifier predicate – which may independently affect the order of constituents in the sentence. In this paper, I present Slovenian Sign Language (SZJ) locative constructions, in which (i) the argument that enables spatial anchoring (“ground”) precedes both the argument that requires spatial anchoring (“figure”) and the predicate. At the same time, (ii) the relative order of the figure with respect to the predicate depends on the type of predicate employed: a non-classifier predicate precedes the figure, while a classifier predicate only comes after the figure.

  9. Discriminative exemplar coding for sign language recognition with Kinect.

    Science.gov (United States)

    Sun, Chao; Zhang, Tianzhu; Bao, Bing-Kun; Xu, Changsheng; Mei, Tao

    2013-10-01

    Sign language recognition is a growing research area in the field of computer vision. A challenge within it is to model various signs, varying with time resolution, visual manual appearance, and so on. In this paper, we propose a discriminative exemplar coding (DEC) approach, utilizing the Kinect sensor, to model various signs. The proposed DEC method can be summarized in three steps. First, a quantity of class-specific candidate exemplars are learned from sign language videos in each sign category by considering their discrimination. Then, every video of all signs is described as a set of similarities between frames within it and the candidate exemplars. Instead of simply using a heuristic distance measure, the similarities are decided by a set of exemplar-based classifiers through multiple instance learning, in which a positive (or negative) video is treated as a positive (or negative) bag and those frames similar to the given exemplar in Euclidean space as instances. Finally, we formulate the selection of the most discriminative exemplars into a framework and simultaneously produce a sign video classifier to recognize signs. To evaluate our method, we collected an American Sign Language dataset, which includes approximately 2,000 phrases, with each phrase captured by the Kinect sensor with color, depth, and skeleton information. Experimental results on our dataset demonstrate the feasibility and effectiveness of the proposed approach for sign language recognition.
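
    The exemplar-coding step described above can be illustrated with a minimal sketch, under assumptions that are not the authors' implementation: each frame is reduced to a feature vector (e.g. normalised skeleton joints), every video is encoded as its best similarity to each candidate exemplar, and a linear classifier is trained on the resulting fixed-length codes. The Gaussian similarity, the max-pooling and the scikit-learn classifier are illustrative stand-ins for the multiple-instance-learned exemplar classifiers in the paper.

      import numpy as np
      from sklearn.svm import LinearSVC

      def encode_video(frames, exemplars, gamma=0.5):
          """Encode a video (n_frames x d feature matrix) as its best Gaussian
          similarity to each exemplar, giving one fixed-length descriptor."""
          d2 = ((frames[:, None, :] - exemplars[None, :, :]) ** 2).sum(axis=2)
          sim = np.exp(-gamma * d2)          # n_frames x n_exemplars
          return sim.max(axis=0)             # max-pool over frames

      # Toy usage with random data standing in for Kinect skeleton features.
      rng = np.random.default_rng(0)
      exemplars = rng.normal(size=(20, 60))                 # 20 candidate exemplars
      videos = [rng.normal(size=(rng.integers(30, 80), 60)) for _ in range(40)]
      labels = rng.integers(0, 4, size=40)                  # 4 toy sign classes

      X = np.stack([encode_video(v, exemplars) for v in videos])
      clf = LinearSVC().fit(X, labels)
      print(clf.score(X, labels))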

  10. Signed Language Working Memory Capacity of Signed Language Interpreters and Deaf Signers

    Science.gov (United States)

    Wang, Jihong; Napier, Jemina

    2013-01-01

    This study investigated the effects of hearing status and age of signed language acquisition on signed language working memory capacity. Professional Auslan (Australian sign language)/English interpreters (hearing native signers and hearing nonnative signers) and deaf Auslan signers (deaf native signers and deaf nonnative signers) completed an…

  11. Numeral Variation in New Zealand Sign Language

    Science.gov (United States)

    McKee, David; McKee, Rachel; Major, George

    2011-01-01

    Lexical variation abounds in New Zealand Sign Language (NZSL) and is commonly associated with the introduction of the Australasian Signed English lexicon into Deaf education in 1979, before NZSL was acknowledged as a language. Evidence from dictionaries of NZSL collated between 1986 and 1997 reveals many coexisting variants for the numbers from one…

  12. Historical Development of Hong Kong Sign Language

    Science.gov (United States)

    Sze, Felix; Lo, Connie; Lo, Lisa; Chu, Kenny

    2013-01-01

    This article traces the origins of Hong Kong Sign Language (hereafter HKSL) and its subsequent development in relation to the establishment of Deaf education in Hong Kong after World War II. We begin with a detailed description of the history of Deaf education with a particular focus on the role of sign language in such development. We then…

  13. Research Ethics in Sign Language Communities

    Science.gov (United States)

    Harris, Raychelle; Holmes, Heidi M.; Mertens, Donna M.

    2009-01-01

    Codes of ethics exist for most professional associations whose members do research on, for, or with sign language communities. However, these ethical codes are silent regarding the need to frame research ethics from a cultural standpoint, an issue of particular salience for sign language communities. Scholars who write from the perspective of…

  14. Sign Language and the Brain: A Review

    Science.gov (United States)

    Campbell, Ruth; MacSweeney, Mairead; Waters, Dafydd

    2008-01-01

    How are signed languages processed by the brain? This review briefly outlines some basic principles of brain structure and function and the methodological principles and techniques that have been used to investigate this question. We then summarize a number of different studies exploring brain activity associated with sign language processing…

  15. LSE-Sign: A lexical database for Spanish Sign Language.

    Science.gov (United States)

    Gutierrez-Sigut, Eva; Costello, Brendan; Baus, Cristina; Carreiras, Manuel

    2016-03-01

    The LSE-Sign database is a free online tool for selecting Spanish Sign Language stimulus materials to be used in experiments. It contains 2,400 individual signs taken from a recent standardized LSE dictionary, and a further 2,700 related nonsigns. Each entry is coded for a wide range of grammatical, phonological, and articulatory information, including handshape, location, movement, and non-manual elements. The database is accessible via a graphically based search facility which is highly flexible both in terms of the search options available and the way the results are displayed. LSE-Sign is available at the following website: http://www.bcbl.eu/databases/lse/.

  16. A tour in sign language

    CERN Multimedia

    François Briard

    2016-01-01

    In early May, CERN welcomed a group of deaf children for a tour of Microcosm and a Fun with Physics demonstration.   On 4 May, around ten children from the Centre pour enfants sourds de Montbrillant (Montbrillant Centre for Deaf Children), a public school funded by the Office médico-pédagogique du canton de Genève, took a guided tour of the Microcosm exhibition and were treated to a Fun with Physics demonstration. The tour guides’ explanations were interpreted into sign language in real time by a professional interpreter who accompanied the children, and the pace and content were adapted to maximise the interaction with the children. This visit demonstrates CERN’s commitment to remaining as widely accessible as possible. To this end, most of CERN’s visit sites offer reduced-mobility access. In the past few months, CERN has also welcomed children suffering from xeroderma pigmentosum (a genetic disorder causing extreme sensiti...

  17. Working memory, deafness and sign language.

    Science.gov (United States)

    Rudner, Mary; Andin, Josefine; Rönnberg, Jerker

    2009-10-01

    Working memory (WM) for sign language has an architecture similar to that for speech-based languages at both functional and neural levels. However, there are some processing differences between language modalities that are not yet fully explained, although a number of hypotheses have been mooted. This article reviews some of the literature on differences in sensory, perceptual and cognitive processing systems induced by auditory deprivation and sign language use and discusses how these differences may contribute to differences in WM architecture for signed and speech-based languages. In conclusion, it is suggested that left-hemisphere reorganization of the motion-processing system as a result of native sign-language use may interfere with the development of the order processing system in WM.

  18. The Uniformity and Diversity of Language: Evidence from Sign Language

    Science.gov (United States)

    Sandler, Wendy

    2010-01-01

    Evidence from sign language strongly supports three positions: (1) language is a coherent system with universal properties; (2) sign languages diverge from spoken languages in some aspects of their structure; and (3) domain-external factors can be identified that account for some crucial aspects of language structure -- uniform and diverse -- in both modalities. Assuming that any of these positions excludes the others defeats the purpose of the enterprise. PMID:21076645

  19. Cooperative Sign Language Tutoring: A Multiagent Approach

    Science.gov (United States)

    Yıldırım, Ilker; Aran, Oya; Yolum, Pınar; Akarun, Lale

    Sign languages can be learned effectively only with frequent feedback from an expert in the field. The expert needs to watch a performed sign, and decide whether the sign has been performed well based on his/her previous knowledge about the sign. The expert's role can be imitated by an automatic system, which uses a training set as its knowledge base to train a classifier that can decide whether the performed sign is correct. However, when the system does not have enough previous knowledge about a given sign, the decision will not be accurate. Accordingly, we propose a multiagent architecture in which agents cooperate with each other to decide on the correct classification of performed signs. We apply different cooperation strategies and test their performances in varying environments. Further, through analysis of the multiagent system, we can discover inherent properties of sign languages, such as the existence of dialects.
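
    One simple cooperation strategy consistent with the idea above is confidence-weighted voting: each agent classifies the performed sign with its own knowledge base and the agents pool their verdicts. The sketch below is a generic illustration under that assumption; the agents' internal classifiers and the weighting scheme are placeholders, not the strategies evaluated in the chapter.

      from dataclasses import dataclass
      from typing import Callable, List

      @dataclass
      class TutorAgent:
          name: str
          # classify returns (is_correct, confidence in [0, 1]) for a performed sign
          classify: Callable[[list], tuple]

      def cooperative_decision(agents: List[TutorAgent], sign_features: list) -> bool:
          """Pool the agents' verdicts by confidence-weighted voting."""
          score = 0.0
          for agent in agents:
              is_correct, confidence = agent.classify(sign_features)
              score += confidence if is_correct else -confidence
          return score > 0.0    # positive pooled evidence -> accept the sign

      # Toy agents with fixed dummy verdicts: one rejects weakly, one accepts strongly.
      agents = [
          TutorAgent("strict", lambda f: (False, 0.4)),
          TutorAgent("lenient", lambda f: (True, 0.7)),
      ]
      print(cooperative_decision(agents, sign_features=[0.1, 0.2]))   # True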

  20. Second Language Working Memory Deficits and Plasticity in Hearing Bimodal Learners of Sign Language

    Directory of Open Access Journals (Sweden)

    Williams Joshua

    2015-10-01

    Full Text Available Little is known about the effect of acquiring another language modality on second language (L2) working memory (WM) capacity. Differential indexing within the WM system based on language modality may explain differences in performance on WM tasks in sign and spoken language. We investigated the effect of language modality (sign versus spoken) on L2 WM capacity. Results indicated reduced L2 WM span relative to first language span for both L2 learners of Spanish and American Sign Language (ASL). Importantly, ASL learners had lower L2 WM spans than Spanish learners. Additionally, ASL learners increased their L2 WM spans as a function of proficiency, whereas Spanish learners did not. This pattern of results demonstrated that acquiring another language modality disadvantages ASL learners. We posited that this disadvantage arises out of an inability to correctly and efficiently allocate linguistic information to the visuospatial sketchpad due to L1-related indexing bias.

  1. A REVIEW ON THE DEVELOPMENT OF INDONESIAN SIGN LANGUAGE RECOGNITION SYSTEM

    Directory of Open Access Journals (Sweden)

    Sutarman

    2013-01-01

    Full Text Available Sign language is mainly employed by hearing-impaired people to communicate with each other. However, communication with normal people is a major handicap for them, since normal people do not understand their sign language. Sign language recognition is needed for realizing a human-oriented interactive system that can perform an interaction like normal communication. Sign language recognition basically uses two approaches: (1) computer vision-based gesture recognition, in which a camera is used as input and videos are captured as video files and stored before being processed using image processing; (2) an approach based on sensor data, in which a series of sensors integrated with gloves captures the motion features of finger bending and hand movements. Different sign languages exist around the world, each with its own vocabulary and gestures. Some examples are American Sign Language (ASL), Chinese Sign Language (CSL), British Sign Language (BSL), Indonesian Sign Language (ISL) and so on. The structure of Indonesian Sign Language (ISL) is different from the sign languages of other countries, in that words can be formed from a prefix and/or suffix. In order to improve recognition accuracy, researchers use methods such as the hidden Markov model, artificial neural networks and dynamic time warping. Effective algorithms for segmentation, matching, classification and pattern recognition have evolved. The main objective of this study is to review sign language recognition methods in order to choose the best method for developing the Indonesian sign language recognition system.
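
    Of the methods listed (hidden Markov models, artificial neural networks, dynamic time warping), DTW is the easiest to show compactly: it aligns two gesture feature sequences of different lengths and returns an accumulated distance that can drive nearest-neighbour recognition. The sketch below is a generic DTW over per-frame feature vectors, not code from any of the reviewed systems.

      import numpy as np

      def dtw_distance(seq_a, seq_b):
          """Dynamic time warping distance between two (n x d) feature sequences."""
          n, m = len(seq_a), len(seq_b)
          cost = np.full((n + 1, m + 1), np.inf)
          cost[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])   # frame distance
                  cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                       cost[i, j - 1],       # deletion
                                       cost[i - 1, j - 1])   # match
          return cost[n, m]

      # Toy usage: a slowed-down copy of a gesture is closer than a different gesture.
      rng = np.random.default_rng(1)
      gesture = rng.normal(size=(30, 10))
      slowed = np.repeat(gesture, 2, axis=0)      # same gesture, twice as long
      other = rng.normal(size=(30, 10))
      print(dtw_distance(gesture, slowed) < dtw_distance(gesture, other))   # True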

  2. Arabic Alphabet and Numbers Sign Language Recognition

    Directory of Open Access Journals (Sweden)

    Mahmoud Zaki Abdo

    2015-11-01

    Full Text Available This paper introduces an Arabic Alphabet and Numbers Sign Language Recognition (ArANSLR) system. It facilitates communication between deaf and normal people by recognizing the alphabet and number signs of Arabic sign language and converting them to text or speech. To achieve this target, the system must be able to visually recognize gestures from hand image input. The proposed algorithm uses hand geometry and the different shape of the hand in each sign to classify letter shapes using a Hidden Markov Model (HMM). Experiments on real-world datasets showed that the proposed algorithm for Arabic alphabet and numbers sign language recognition is suitable and reliable compared with other competitive algorithms. The experimental results show that the gesture recognition rate increases with the number of zones obtained by dividing the rectangle surrounding the hand.
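
    The zone-based description mentioned in the results (dividing the rectangle surrounding the hand into zones) can be sketched as follows: a binary hand mask is cropped to its bounding box, the box is split into an N x N grid, and the fraction of hand pixels in each cell becomes one feature. These per-frame vectors would then feed an HMM; the grid size and the per-cell statistic are illustrative assumptions rather than the paper's exact definition.

      import numpy as np

      def zone_features(hand_mask: np.ndarray, zones: int = 4) -> np.ndarray:
          """Fraction of hand pixels in each cell of a zones x zones grid laid
          over the bounding rectangle of a binary hand mask."""
          ys, xs = np.nonzero(hand_mask)
          if ys.size == 0:
              return np.zeros(zones * zones)
          crop = hand_mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
          h, w = crop.shape
          feats = np.empty((zones, zones))
          for i in range(zones):
              for j in range(zones):
                  cell = crop[i * h // zones:(i + 1) * h // zones,
                              j * w // zones:(j + 1) * w // zones]
                  feats[i, j] = cell.mean() if cell.size else 0.0
          return feats.ravel()

      # Toy mask: a filled square; the crop is entirely hand, so all cells read 1.0.
      mask = np.zeros((120, 160), dtype=np.uint8)
      mask[20:60, 30:70] = 1
      print(zone_features(mask, zones=2))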

  3. Phonological "Deviance" in British Sign Language Poetry.

    Science.gov (United States)

    Sutton-Spence, Rachel

    2001-01-01

    Focuses on the phonological deviance of the poetry of Dorothy Miles, who composed her work in both British Sign Language and English. Analysis is based on three poems performed by Miles herself. (Author/VWL)

  4. A Sign Language to Text Converter Using Leap Motion

    Directory of Open Access Journals (Sweden)

    Fazlur Rahman Khan

    2016-12-01

    Full Text Available This paper presents a prototype that can convert sign language into text. A Leap Motion controller was utilised as an interface for hand motion tracking without the need to wear any external instruments. Three recognition techniques were employed to measure the performance of the prototype, namely Geometric Template Matching, Artificial Neural Network and Cross Correlation. The 26 letters of the American Sign Language alphabet were chosen for training and testing the proposed prototype. The experimental results showed that Geometric Template Matching achieved the highest recognition accuracy compared to the other recognition techniques.
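
    Geometric template matching, which performed best here, can be approximated as nearest-template classification over a fixed-length geometric feature vector per hand pose (for example fingertip positions relative to the palm). The sketch below assumes such vectors are already supplied by the motion-tracking interface; it does not use the actual Leap Motion SDK, and the feature layout is hypothetical.

      import numpy as np

      def match_template(features: np.ndarray, templates: dict) -> str:
          """Return the letter whose stored template is geometrically closest
          (Euclidean distance) to the observed hand-feature vector."""
          return min(templates,
                     key=lambda letter: np.linalg.norm(features - templates[letter]))

      # Toy templates: one 15-dimensional vector per letter (5 fingertips x 3D).
      rng = np.random.default_rng(2)
      templates = {letter: rng.normal(size=15) for letter in "ABC"}

      observed = templates["B"] + rng.normal(scale=0.05, size=15)   # noisy 'B' pose
      print(match_template(observed, templates))                    # 'B'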

  5. Exploring Power & Ethnocentrism in Sign language Translation

    Science.gov (United States)

    Leneham, Marcel

    2007-01-01

    This article demonstrates that theories intended to prevent ethnocentric influence for one pair of languages may, in fact, be the catalyst for the phenomenon it purports to prevent in another pair. While it explores the issue in relation to sign language translation, the article raises the question of whether the findings can be extrapolated to…

  6. Approaching Sign Language Test Construction: Adaptation of the German Sign Language Receptive Skills Test

    Science.gov (United States)

    Haug, Tobias

    2011-01-01

    There is a current need for reliable and valid test instruments in different countries in order to monitor deaf children's sign language acquisition. However, very few tests are commercially available that offer strong evidence for their psychometric properties. A German Sign Language (DGS) test focusing on linguistic structures that are acquired…

  7. Spoken English language development among native signing children with cochlear implants.

    Science.gov (United States)

    Davidson, Kathryn; Lillo-Martin, Diane; Chen Pichler, Deborah

    2014-04-01

    Bilingualism is common throughout the world, and bilingual children regularly develop into fluently bilingual adults. In contrast, children with cochlear implants (CIs) are frequently encouraged to focus on a spoken language to the exclusion of sign language. Here, we investigate the spoken English language skills of 5 children with CIs who also have deaf signing parents, and so receive exposure to a full natural sign language (American Sign Language, ASL) from birth, in addition to spoken English after implantation. We compare their language skills with hearing ASL/English bilingual children of deaf parents. Our results show comparable English scores for the CI and hearing groups on a variety of standardized language measures, exceeding previously reported scores for children with CIs with the same age of implantation and years of CI use. We conclude that natural sign language input does no harm and may mitigate negative effects of early auditory deprivation for spoken language development.

  8. Sign Language Benefits Tibetan Deaf-mutes

    Institute of Scientific and Technical Information of China (English)

    SUO QIONG; SUN WENZHEN

    2007-01-01

    There are in the Tibet Autonomous Region 190,000 disabled persons, including more than 30,000 who are deaf-mutes or are hearing impaired. In the Tibetan language, a word is often expressed with different signs. This poses a serious handicap for communication and exchanges among Tibetan deaf-mutes and their effort to participate in social activities. The ongoing research and development of a Tibetan sign language is expected to get rid of that handicap and allow Tibetan deaf-mutes to lead a normal life.

  9. Legal and Ethical Imperatives for Using Certified Sign Language Interpreters in Health Care Settings: How to "Do No Harm" When "It's (All) Greek" (Sign Language) to You.

    Science.gov (United States)

    Nonaka, Angela M

    2016-09-01

    Communication obstacles in health care settings adversely impact patient-practitioner interactions by impeding service efficiency, reducing mutual trust and satisfaction, or even endangering health outcomes. When interlocutors are separated by language, interpreters are required. The efficacy of interpreting, however, is constrained not just by interpreters' competence but also by health care providers' facility working with interpreters. Deaf individuals whose preferred form of communication is a signed language often encounter communicative barriers in health care settings. In those environments, signing Deaf people are entitled to equal communicative access via sign language interpreting services according to the Americans with Disabilities Act and Executive Order 13166, the Limited English Proficiency Initiative. Yet, litigation in states across the United States suggests that individual and institutional providers remain uncertain about their legal obligations to provide equal communicative access. This article discusses the legal and ethical imperatives for using professionally certified (vs. ad hoc) sign language interpreters in health care settings. First outlining the legal terrain governing provision of sign language interpreting services, the article then describes different types of "sign language" (e.g., American Sign Language vs. manually coded English) and different forms of "sign language interpreting" (e.g., interpretation vs. transliteration vs. translation; simultaneous vs. consecutive interpreting; individual vs. team interpreting). This is followed by reviews of the formal credentialing process and of specialized forms of sign language interpreting-that is, certified deaf interpreting, trilingual interpreting, and court interpreting. After discussing practical steps for contracting professional sign language interpreters and addressing ethical issues of confidentiality, this article concludes by offering suggestions for working more effectively

  10. Attitudes of Deaf Adults Regarding Preferred Sign Language Systems Used in the Classroom with Deaf Students.

    Science.gov (United States)

    Kautzky-Bowden, Sally M.; Gonzales, B. Robert

    1987-01-01

    A questionnaire survey assessing attitudes of 50 deaf adults toward sign language systems used in schools found the majority supported American Sign Language and Manually Coded English-Pidgin with some reservations. Respondents were also concerned about needs of individual deaf children and deaf adult involvement in educational decision making for…

  11. A Study on the Iconic Metaphors in American Sign Language from a Cognitive Perspective

    Institute of Scientific and Technical Information of China (English)

    白彬; 国华; 周聪聪

    2013-01-01

    Metaphor is not only a language phenomenon but also a way for human beings to understand the world: the understanding of one thing is mapped onto another, the two are connected through similarity and iconicity, and the resulting metaphorical meaning is used to understand the second thing. As a special human language, sign language also demonstrates metaphor. Deaf people can use metaphor to express abstract things with concrete manual signs, which enriches and expands communication in sign language. Studying American Sign Language from a cognitive-linguistic perspective, this essay analyzes the metaphors in American Sign Language and their characteristics, and concludes that American Sign Language has a feature that distinguishes its metaphorical expression from that of spoken languages: double-mapping metaphors, produced by the combination of iconicity and metaphor. The study aims to promote deeper cognitive research on American Sign Language as a special form of human language through this cognitive analysis of metaphors in American Sign Language.

  12. What sign language creation teaches us about language.

    Science.gov (United States)

    Brentari, Diane; Coppola, Marie

    2013-03-01

    How do languages emerge? What are the necessary ingredients and circumstances that permit new languages to form? Various researchers within the disciplines of primatology, anthropology, psychology, and linguistics have offered different answers to this question depending on their perspective. Language acquisition, language evolution, primate communication, and the study of spoken varieties of pidgin and creoles address these issues, but in this article we describe a relatively new and important area that contributes to our understanding of language creation and emergence. Three types of communication systems that use the hands and body to communicate will be the focus of this article: gesture, homesign systems, and sign languages. The focus of this article is to explain why mapping the path from gesture to homesign to sign language has become an important research topic for understanding language emergence, not only for the field of sign languages, but also for language in general. WIREs Cogn Sci 2013, 4:201-211. doi: 10.1002/wcs.1212 For further resources related to this article, please visit the WIREs website.

  13. Language Policies in Uruguay and Uruguayan Sign Language (LSU)

    Science.gov (United States)

    Behares, Luis Ernesto; Brovetto, Claudia; Crespi, Leonardo Peluso

    2012-01-01

    In the first part of this article the authors consider the policies that apply to Uruguayan Sign Language (Lengua de Senas Uruguaya; hereafter LSU) and the Uruguayan Deaf community within the general framework of language policies in Uruguay. By analyzing them succinctly and as a whole, the authors then explain twenty-first-century innovations.…

  14. Conceptual representation of actions in sign language

    NARCIS (Netherlands)

    Dobel, Christian; Enriquez-Geppert, Stefanie; Hummert, Marja; Zwitserlood, Pienie; Bölte, Jens

    2011-01-01

    The idea that knowledge of events entails a universal spatial component, that is conceiving agents left of patients, was put to test by investigating native users of German sign language and native users of spoken German. Participants heard or saw event descriptions and had to illustrate the meaning

  15. The Sign Language Situation in Mali

    Science.gov (United States)

    Nyst, Victoria

    2015-01-01

    This article gives a first overview of the sign language situation in Mali and its capital, Bamako, located in the West African Sahel. Mali is a highly multilingual country with a significant incidence of deafness, for which meningitis appears to be the main cause, coupled with limited access to adequate health care. In comparison to neighboring…

  16. Gesture, sign and language: The coming of age of sign language and gesture studies.

    Science.gov (United States)

    Goldin-Meadow, Susan; Brentari, Diane

    2015-10-05

    How does sign language compare to gesture, on the one hand, and to spoken language on the other? At one time, sign was viewed as nothing more than a system of pictorial gestures with no linguistic structure. More recently, researchers have argued that sign is no different from spoken language with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the last 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We come to the conclusion that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because, at the moment, it is difficult to tell where sign stops and where gesture begins, we suggest that sign should not be compared to speech alone, but should be compared to speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that making a distinction between sign (or speech) and gesture is essential to predict certain types of learning, and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.

  17. A Sign Language Screen Reader for Deaf

    Science.gov (United States)

    El Ghoul, Oussama; Jemni, Mohamed

    Screen reader technology first appeared to allow blind people and people with reading difficulties to use computers and to access digital information. Until now, this technology has been exploited mainly to help the blind community. During our work with deaf people, we noticed that a screen reader can facilitate the manipulation of computers and the reading of textual information. In this paper, we propose a novel screen reader dedicated to deaf users. The output of the reader is a visual translation of the text into sign language. The screen reader is composed of two essential modules: the first is designed to capture the activities of users (mouse and keyboard events); for this purpose, we adopted the Microsoft MSAA application programming interfaces. The second module, which in classical screen readers is a text-to-speech (TTS) engine, is replaced by a novel text-to-sign (TTSign) engine. This module converts text into sign language animation based on avatar technology.
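
    The second module can be pictured as a lookup from captured screen text to an ordered playlist of avatar animation clips, with fingerspelling as a fallback for out-of-vocabulary words. The sketch below is a minimal illustration under that assumption; the clip names are hypothetical, and the event-capture module (MSAA hooks in the paper) is only stubbed with a queue.

      from queue import Queue

      # Hypothetical mapping from words to pre-rendered avatar animation clips;
      # in the described system this role is played by the TTSign engine.
      SIGN_CLIPS = {"hello": "hello.anim", "world": "world.anim"}

      def text_to_sign(text: str, clips: dict) -> list:
          """Convert captured screen text into an ordered list of sign animation
          clips, falling back to fingerspelling for unknown words."""
          playlist = []
          for word in text.lower().split():
              if word in clips:
                  playlist.append(clips[word])
              else:
                  playlist.extend(f"letter_{ch}.anim" for ch in word if ch.isalpha())
          return playlist

      # The capture module would push the text of focused UI elements here.
      captured = Queue()
      captured.put("Hello world")
      print(text_to_sign(captured.get(), SIGN_CLIPS))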

  18. Interrogative constructions in Danish Sign Language (DSL)

    DEFF Research Database (Denmark)

    Hansen, Julie

    if and how manual question words are used. DSL uses distinct nonmanual signals to mark content and polar questions, and my findings reveal a rich system of both manual and non-manual markers. The manual question words in DSL form a large paradigm of at least six items. The syntactic position of the manual question words can vary, though they usually appear sentence-finally. The nonmanual signals include specific facial expressions, head posture and mouthing. Some of the features are shared with other sign languages. Furthermore, although it has not been investigated in detail, it seems that the nonmanual… of languages and the theoretical conclusions about how language works have primarily been based on studies of spoken languages. I believe that the study of DSL can provide additional and valuable insight into the possible structures of human language. Furthermore, this study will contribute…

  19. Segmentation of British Sign Language (BSL): Mind the gap!

    NARCIS (Netherlands)

    Orfanidou, E.; McQueen, J.M.; Adam, R.; Morgan, G.

    2015-01-01

    This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous sign

  20. A Stronger Reason for the Right to Sign Languages

    Science.gov (United States)

    Trovato, Sara

    2013-01-01

    Is the right to sign language only the right to a minority language? Holding a capability (not a disability) approach, and building on the psycholinguistic literature on sign language acquisition, I make the point that this right is of a stronger nature, since only sign languages can guarantee that each deaf child will properly develop the…

  1. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    Science.gov (United States)

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

    The aim of the present study was to characterize the effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 ASL learners performed this task in the fMRI scanner. Results indicated that L2 American Sign Language (ASL) learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals.

  2. SIGN LANGUAGE IN ASTRONOMY AND SPACE SCIENCES

    Directory of Open Access Journals (Sweden)

    J. Cova

    2009-01-01

    Full Text Available Teaching science to school children with hearing deficiency and impairment can be a rewarding and valuable experience for both teacher and student, and necessary to society as a whole in order to reduce the discriminative policies in the formal educational system. The one most important obstacle to the teaching of science to students with hearing deficiency and impairments is the lack of vocabulary in sign language to express the precise concepts encountered in scientific endeavor. In a collaborative project between Centro de Investigaciones de Astronomía “Francisco J. Duarte” (CIDA), Universidad Pedagógica Experimental Libertador-Instituto Pedagógico de Maturín (UPEL-IPM) and Unidad Educativa Especial Bolivariana de Maturín (UEEBM) initiated in 2006, we have attempted to fill this gap by developing signs for astronomy and space sciences terminology. During two three-day workshops carried out at CIDA in Mérida in July 2006 and UPEL-IPM in Maturín in March 2007, a total of 112 concepts of astronomy and space sciences were coined in sign language using an interactive method which we describe in the text. The immediate goal of the project is to incorporate these terms into Venezuelan Sign Language (LSV).

  3. Sign Language in Astronomy and Space Sciences

    Science.gov (United States)

    Cova, J.; Movilio, V.; Gómez, Y.; Gutiérrez, F.; García, R.; Moreno, H.; González, F.; Díaz, J.; Villarroel, C.; Abreu, E.; Aparicio, D.; Cárdenas, J.; Casneiro, L.; Castillo, N.; Contreras, D.; La Verde, N.; Maita, M.; Martínez, A.; Villahermosa, J.; Quintero, A.

    2009-05-01

    Teaching science to school children with hearing deficiency and impairment can be a rewarding and valuable experience for both teacher and student, and necessary to society as a whole in order to reduce the discriminative policies in the formal educational system. The one most important obstacle to the teaching of science to students with hearing deficiency and impairments is the lack of vocabulary in sign language to express the precise concepts encountered in scientific endeavor. In a collaborative project between Centro de Investigaciones de Astronomía ``Francisco J. Duarte'' (CIDA), Universidad Pedagógica Experimental Libertador-Instituto Pedagógico de Maturín (UPEL-IPM) and Unidad Educativa Especial Bolivariana de Maturín (UEEBM) initiated in 2006, we have attempted to fill this gap by developing signs for astronomy and space sciences terminology. During two three-day workshops carried out at CIDA in Mérida in July 2006 and UPEL-IPM in Maturín in March 2007 a total of 112 concepts of astronomy and space sciences were coined in sign language using an interactive method which we describe in the text. The immediate goal of the project is to incorporate these terms into Venezuelan Sign Language (LSV).

  4. Effects of translation and performance on memory of words of Sign Language as a second language

    OpenAIRE

    2004-01-01

    An experiment was designed to investigate the effects of translation and performance on memory of words of Sign Language as a second language. An intermediate class of Sign Language learners, whose first language was Japanese, was required to carry out four tasks: translating a Japanese word into a Sign Language word, reading a Japanese word aloud, translating a Sign Language word into a Japanese word, and performing (expressing) a Sign Language word. The subjects were then asked unexpe...

  5. Sign Language in the Intelligent Sensory Environment

    Directory of Open Access Journals (Sweden)

    Ákos Lisztes

    2005-06-01

    Full Text Available It is difficult for most of us to imagine, but many who are deaf rely on sign language as their primary means of communication. They, in essence, hear and talk through their hands. This paper proposes a system which is able to recognize the signs using a video camera system. The recognized signs are reconstructed by the 3D visualization system as well. To accomplish this task, a standard personal computer, a video camera and a special software system were used. At the moment the software is able to recognize several letters from the sign language alphabet with the help of color marks. The sign language recognition is a function of an Intelligent Space, which has ubiquitous sensory intelligence including various sensors, such as cameras, microphones, haptic devices (for physical contact) and actuators, with a ubiquitous computing background.
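
    Colour-mark-assisted recognition of this kind is commonly bootstrapped by thresholding the camera frame in HSV space and tracking the marker centroids. The sketch below shows that step with OpenCV; the marker colour, HSV range and kernel size are assumptions that would need calibrating to the actual marks.

      import cv2
      import numpy as np

      def find_color_marks(frame_bgr, lower_hsv=(40, 80, 80), upper_hsv=(80, 255, 255)):
          """Locate green colour marks in a camera frame and return their centroids."""
          hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
          mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
          mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          centroids = []
          for c in contours:
              m = cv2.moments(c)
              if m["m00"] > 0:
                  centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
          return centroids

      # The mark centroids tracked across frames form the trajectory that a letter
      # classifier (or the 3D visualization) would consume.
      frame = np.zeros((240, 320, 3), dtype=np.uint8)
      frame[100:120, 150:170] = (0, 255, 0)          # synthetic green mark
      print(find_color_marks(frame))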

  6. Sign Language Recognition using Neural Networks

    Directory of Open Access Journals (Sweden)

    Sabaheta Djogic

    2014-11-01

    Full Text Available Sign language plays a great role as a communication medium for people with hearing difficulties. In developed countries, systems have been made to overcome the problem of communication with deaf people. This encouraged us to develop a system for the Bosnian sign language, since there is a need for such a system. The work is done with the use of digital image processing methods, providing a system that trains a multilayer neural network using a backpropagation algorithm. Images are processed by feature extraction methods, and the data set was created using a masking method. Training is done using a cross-validation method for better performance; an accuracy of 84% is achieved.
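
    The pipeline described (feature extraction, a multilayer network trained with backpropagation, cross-validated evaluation) can be sketched with scikit-learn. The feature vectors, network size and number of classes below are placeholders, not the Bosnian sign language data set used in the paper.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.neural_network import MLPClassifier

      # Placeholder data: one extracted feature vector per sign image and one
      # integer label per sign class (real features would come from the
      # masking-based preprocessing described above).
      rng = np.random.default_rng(3)
      X = rng.normal(size=(200, 64))
      y = rng.integers(0, 10, size=200)

      # Multilayer perceptron trained with backpropagation.
      mlp = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0)

      # Cross-validated accuracy, analogous to the evaluation reported above.
      scores = cross_val_score(mlp, X, y, cv=5)
      print(scores.mean())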

  7. Conjoining Word and Image in British Sign Language (BSL): An Exploration of Metaphorical Signs in BSL

    Science.gov (United States)

    Brennan, Mary

    2005-01-01

    The Lexicon of British Sign Language (BSL), including, and perhaps especially, the productive lexicon, is highly motivated. Many sign linguists in the last few decades have played down the role of iconicity and other types of motivation in signed language. They have suggested that because sign forms and structures conform to rules of linguistic…

  8. Study on Translating Chinese into Chinese Sign Language

    Institute of Scientific and Technical Information of China (English)

    徐琳; 高文

    2000-01-01

    Sign language is a visual-gestural language mainly used by hearing-impaired people to communicate with each other. Gesture and facial expression are important grammatical parts of sign language. In this paper, a text-based transformation method for Chinese to Chinese Sign Language machine translation is proposed. Gesture and facial expression models are created, and a practical system is implemented. The input of the system is Chinese text. The output of the system is a "graphics person" who can gesticulate Chinese sign language, accompanied by the facial expressions that correspond to the Chinese text entered, so as to realize automatic translation from Chinese text to Chinese sign language.
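
    The text-based transformation step can be pictured as segmenting the input text against a sign lexicon and emitting, for each unit, the gesture model and facial-expression tag the animated "graphics person" should play. The sketch below is a minimal illustration of that idea; the lexicon entries, gesture identifiers and expression tags are hypothetical placeholders, not the system's actual models.

      # Hypothetical lexicon: text unit -> (gesture model id, facial expression tag).
      LEXICON = {
          "你好": ("GESTURE_HELLO", "smile"),
          "谢谢": ("GESTURE_THANKS", "smile"),
          "吗": ("GESTURE_QUESTION", "raised_brows"),
      }

      def translate(text: str, lexicon: dict) -> list:
          """Segment the text greedily (longest match first) and return the
          sequence of (gesture, expression) pairs to animate."""
          plan, i = [], 0
          while i < len(text):
              for length in range(min(4, len(text) - i), 0, -1):
                  unit = text[i:i + length]
                  if unit in lexicon:
                      plan.append(lexicon[unit])
                      i += length
                      break
              else:
                  i += 1        # skip characters with no sign entry
          return plan

      print(translate("你好吗", LEXICON))
      # [('GESTURE_HELLO', 'smile'), ('GESTURE_QUESTION', 'raised_brows')]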

  9. The emergence of temporal language in Nicaraguan Sign Language.

    Science.gov (United States)

    Kocab, Annemarie; Senghas, Ann; Snedeker, Jesse

    2016-11-01

    Understanding what uniquely human properties account for the creation and transmission of language has been a central goal of cognitive science. Recently, the study of emerging sign languages, such as Nicaraguan Sign Language (NSL), has offered the opportunity to better understand how languages are created and the roles of the individual learner and the community of users. Here, we examined the emergence of two types of temporal language in NSL, comparing the linguistic devices for conveying temporal information among three sequential age cohorts of signers. Experiment 1 showed that while all three cohorts of signers could communicate about linearly ordered discrete events, only the second and third generations of signers successfully communicated information about events with more complex temporal structure. Experiment 2 showed that signers could discriminate between the types of temporal events in a nonverbal task. Finally, Experiment 3 investigated the ordinal use of numbers (e.g., first, second) in NSL signers, indicating that one strategy younger signers might have for accurately describing events in time might be to use ordinal numbers to mark each event. While the capacity for representing temporal concepts appears to be present in the human mind from the onset of language creation, the linguistic devices to convey temporality do not appear immediately. Evidently, temporal language emerges over generations of language transmission, as a product of individual minds interacting within a community of users.

  10. The Linguistics of British Sign Language: An Introduction.

    Science.gov (United States)

    Sutton-Spence, Rachel; Woll, Bencie

    This textbook provides support for learners of British Sign Language (BSL) and others interested in the structure and use of BSL, and assumes no previous knowledge of linguistics or sign language; technical terms and linguistic jargon are kept to a minimum. The text contains many examples from English, BSL, and other spoken and signed languages,…

  11. Sign Language and Integration in the British Deaf Community.

    Science.gov (United States)

    Deuchar, Margaret

    This paper deals with the integrative function of sign language in the British deaf community. Sign language communities exhibit a special case of diglossia in that they exist within a larger, hearing community not necessarily characterized by diglossia itself. British Sign Language includes at least two diglossic varieties, with different…

  12. Sign Language: Meeting Diverse Needs in the Classroom

    Science.gov (United States)

    Simpson, Cynthia G.; Lynch, Sharon A.

    2007-01-01

    For a number of years, sign language has been used in special education settings for learners with disabilities. Children with hearing loss, autism, cognitive disabilities, and language disorders have demonstrated improved communication skills with the use of signs. Recently, however, teachers have begun to use sign language with typical learners…

  13. SignMT: An Alternative Language Learning Tool

    Science.gov (United States)

    Ditcharoen, Nadh; Naruedomkul, Kanlaya; Cercone, Nick

    2010-01-01

    Learning a second language is very difficult, especially, for the disabled; the disability may be a barrier to learn and to utilize information written in text form. We present the SignMT, Thai sign to Thai machine translation system, which is able to translate from Thai sign language into Thai text. In the translation process, SignMT takes into…

  14. Sign Language with Babies: What Difference Does It Make?

    Science.gov (United States)

    Barnes, Susan Kubic

    2010-01-01

    Teaching sign language--to deaf or other children with special needs or to hearing children with hard-of-hearing family members--is not new. Teaching sign language to typically developing children has become increasingly popular since the publication of "Baby Signs"[R] (Goodwyn & Acredolo, 1996), now in its third edition. Attention to signing with…

  15. Adapting Tests of Sign Language Assessment for Other Sign Languages--A Review of Linguistic, Cultural, and Psychometric Problems

    Science.gov (United States)

    Haug, Tobias; Mann, Wolfgang

    2008-01-01

    Given the current lack of appropriate assessment tools for measuring deaf children's sign language skills, many test developers have used existing tests of other sign languages as templates to measure the sign language used by deaf people in their country. This article discusses factors that may influence the adaptation of assessment tests from…

  16. A matter of complexity: Subordination in sign languages

    NARCIS (Netherlands)

    R. Pfau; M. Steinbach; A. Herrmann

    2016-01-01

    Since natural languages exist in two different modalities - the visual-gestural modality of sign languages and the auditory-oral modality of spoken languages - it is obvious that all fields of research in modern linguistics will benefit from research on sign languages. Although previous studies have

  17. Building an Assessment Use Argument for sign language: the BSL Nonsense Sign Repetition Test

    OpenAIRE

    Mann, W.; Marshall, C. R.

    2010-01-01

    In this article, we adapt a concept designed to structure language testing more effectively, the Assessment Use Argument (AUA), as a framework for the development and/or use of sign language assessments for deaf children who are taught in a sign bilingual education setting. By drawing on data from a recent investigation of deaf children's nonsense sign repetition skills in British Sign Language, we demonstrate the steps of implementing the AUA in practical test design, development and use. Th...

  18. Should All Deaf Children Learn Sign Language?

    Science.gov (United States)

    Napoli, Donna Jo; Mellon, Nancy K; Niparko, John K; Rathmann, Christian; Mathur, Gaurav; Humphries, Tom; Handley, Theresa; Scambler, Sasha; Lantos, John D

    2015-07-01

    Every year, 10,000 infants are born in the United States with sensorineural deafness. Deaf children of hearing (and nonsigning) parents are unique among all children in the world in that they cannot easily or naturally learn the language that their parents speak. These parents face tough choices. Should they seek a cochlear implant for their child? If so, should they also learn to sign? As pediatricians, we need to help parents understand the risks and benefits of different approaches to parent-child communication when the child is deaf [corrected].

  19. Sign Language Video Processing for Text Detection in Hindi Language

    Directory of Open Access Journals (Sweden)

    Rashmi B Hiremath

    2016-10-01

    Full Text Available Sign language is a way of expressing yourself with your body, where expressions, goals, or sentiments are conveyed by physical behaviours such as facial expressions, body posture, gestures, eye movements, touch and the use of space. Non-verbal communication exists in both animals and humans, but this article concentrates on the interpretation of human non-verbal or sign language into Hindi textual expression. The proposed method utilizes image processing methods and artificial intelligence strategies to achieve sign video recognition. To carry out the proposed task, it applies image processing methods such as frame-analysis-based tracking, edge detection, wavelet transform, erosion, dilation, blur elimination and noise elimination to the training videos. It also uses elliptical Fourier descriptors (referred to as SIFT) for shape feature extraction and principal component analysis for feature set optimization and reduction. For result analysis, this paper uses videos of different categories, such as signs for weekdays, months, relations, etc. The database of extracted outcomes is compared with the signer's video fed to the system as input, using a trained fuzzy inference system.
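
    The preprocessing chain named above (noise and blur removal, edge detection, dilation and erosion) can be sketched for a single frame with OpenCV; the kernel sizes and Canny thresholds are illustrative assumptions, not the paper's settings.

      import cv2
      import numpy as np

      def preprocess_frame(frame_bgr: np.ndarray) -> np.ndarray:
          """Denoise a frame, detect edges, then close small gaps."""
          gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
          blurred = cv2.GaussianBlur(gray, (5, 5), 0)        # noise/blur reduction
          edges = cv2.Canny(blurred, 50, 150)                # edge detection
          kernel = np.ones((3, 3), np.uint8)
          closed = cv2.dilate(edges, kernel, iterations=1)   # dilation
          return cv2.erode(closed, kernel, iterations=1)     # erosion

      # The resulting edge maps would feed the shape descriptors used for the
      # signer's hand regions.
      frame = np.zeros((240, 320, 3), dtype=np.uint8)
      cv2.rectangle(frame, (100, 80), (200, 160), (255, 255, 255), -1)
      print(preprocess_frame(frame).sum() > 0)   # edges were found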

  20. A Brief Analysis of Differences Between Natural Sign Language and Chinese Sign Language

    Institute of Scientific and Technical Information of China (English)

    由婧涵

    2014-01-01

    Natural sign language and Chinese sign language are the two common ways for deaf people to communicate with other deaf people and with hearing people. In 1960, Dr. Stokoe, the American authority on linguistics, concluded that sign language is a language in the linguistic sense; since then, sign language has been studied as a language. This article discusses the differences between natural sign language and Chinese sign language from three aspects: differences in scope of application, differences in characteristics, and differences in grammar. For the grammatical differences, many examples are given to provide an intuitive sense of how the two differ.

  1. Individual differences in sign language abilities in deaf children.

    Science.gov (United States)

    Meronen, Auli; Ahonen, Timo

    2008-01-01

    The study attempted to identify characteristics of individual differences in sign language abilities among deaf children. Connections between sign language skills and rapid serial naming, hand motor skills, and early fluency were investigated. The sample consisted of 85 Finnish deaf children. Their first language was sign language. Simple correlations and multiple linear-regression analysis demonstrated the effect of early language development and serial hand movements on sign language abilities. Other significant factors were serial fingertapping and serial naming. Heterogeneity in poor sign language users was noted. Although identifying learning disorders in deaf children is complicated, developmental difficulties can be discovered by appropriate measurements. The study confirmed the results of earlier research demonstrating that the features of deaf and hearing children's learning resemble each other. Disorders in signed and spoken languages may have similar bases despite their different modalities.

  2. Quantifiers and Variables: Insights from Sign Language (ASL and LSF)

    Directory of Open Access Journals (Sweden)

    Philippe Schlenker

    2010-12-01

    Full Text Available In standard logical systems, quantifiers and variables are essential to express complex relations among objects. Natural language has expressions that have an analogous function: some noun phrases play the role of quantifiers (e.g. every man), and some pronouns play the role of variables (e.g. him, as in Every man likes people who admire him). Since the 1980s, there has been a vibrant debate in linguistics about the way in which pronouns come to depend on their antecedents. According to one view, natural language is governed by a ‘dynamic’ logic which allows for dependencies that are far more flexible than those of standard (classical) logic. According to a competing view, the treatment of variables in classical logic does not have to be fundamentally revised to be applied to natural language. While the debate centers around the nature of the formal links that connect pronouns to their antecedents, these links are not overtly expressed in spoken language, and the debate has remained open. In sign language, by contrast, the connection between pronouns and their antecedents is often made explicit by pointing. We argue that data from French and American Sign Language provide crucial evidence for the dynamic approach over one of its main classical competitors; and we explore further sign language data that can help choose among competing dynamic analyses.

  3. Sociolinguistic Variation and Change in British Sign Language Number Signs: Evidence of Leveling?

    Science.gov (United States)

    Stamp, Rose; Schembri, Adam; Fenlon, Jordan; Rentelis, Ramas

    2015-01-01

    This article presents findings from the first major study to investigate lexical variation and change in British Sign Language (BSL) number signs. As part of the BSL Corpus Project, number sign variants were elicited from 249 deaf signers from eight sites throughout the UK. Age, school location, and language background were found to be significant…

  4. An Investigation of the Sign Language Community of the Deaf: Can Anyone Join?

    Science.gov (United States)

    Dolby, Kathy

    1992-01-01

    A survey of 56 deaf adults in England and Canada found that respondents perceived themselves as members of a definable deaf community. Results also indicated the importance of shared language (American or British Sign Language) and the possible community inclusion of individuals without deafness if their attitude is one of commitment to the…

  5. The Mechanics of Fingerspelling: Analyzing Ethiopian Sign Language

    Science.gov (United States)

    Duarte, Kyle

    2010-01-01

    Ethiopian Sign Language utilizes a fingerspelling system that represents Amharic orthography. Just as each character of the Amharic abugida encodes a consonant-vowel sound pair, each sign in the Ethiopian Sign Language fingerspelling system uses handshape to encode a base consonant, as well as a combination of timing, placement, and orientation to…

  6. Phonological development in hearing learners of a sign language: The role of sign complexity and iconicity

    NARCIS (Netherlands)

    Ortega, G.; Morgan, G.

    2015-01-01

    The present study implemented a sign-repetition task at two points in time to hearing adult learners of British Sign Language and explored how each phonological parameter, sign complexity, and iconicity affected sign production over an 11-week (22-hour) instructional period. The results show that tr

  7. Forced Choice Recognition of Sign in Novice Learners of British Sign Language.

    Science.gov (United States)

    Campbell, Ruth; And Others

    1992-01-01

    Investigation of the accuracy of novice learners of British Sign Language (BSL) and sign-naive subjects in recognizing possible and impossible BSL signs and in naming signs suggests that rated iconicity and the ability to process potentially meaningful gestures, determined recognition and naming accuracy. (19 references) (Author/CB)

  8. Regional Sign Language Varieties in Contact: Investigating Patterns of Accommodation

    Science.gov (United States)

    Stamp, Rose; Schembri, Adam; Evans, Bronwen G.; Cormier, Kearsy

    2016-01-01

    Short-term linguistic accommodation has been observed in a number of spoken language studies. The first of its kind in sign language research, this study aims to investigate the effects of regional varieties in contact and lexical accommodation in British Sign Language (BSL). Twenty-five participants were recruited from Belfast, Glasgow,…

  9. Linguistic Policies, Linguistic Planning, and Brazilian Sign Language in Brazil

    Science.gov (United States)

    de Quadros, Ronice Muller

    2012-01-01

    This article explains the consolidation of Brazilian Sign Language in Brazil through a linguistic plan that arose from the Brazilian Sign Language Federal Law 10.436 of April 2002 and the subsequent Federal Decree 5695 of December 2005. Two concrete facts that emerged from this existing language plan are discussed: the implementation of bilingual…

  10. Sentence Repetition in Deaf Children with Specific Language Impairment in British Sign Language

    Science.gov (United States)

    Marshall, Chloë; Mason, Kathryn; Rowley, Katherine; Herman, Rosalind; Atkinson, Joanna; Woll, Bencie; Morgan, Gary

    2015-01-01

    Children with specific language impairment (SLI) perform poorly on sentence repetition tasks across different spoken languages, but until now, this methodology has not been investigated in children who have SLI in a signed language. Users of a natural sign language encode different sentence meanings through their choice of signs and by altering…

  11. CONTEMPORARY VIEWS TO SIGN LANGUAGE OF HEARING IMPAIRED

    Directory of Open Access Journals (Sweden)

    Bojka TATAREVA

    1998-04-01

    Full Text Available The article considers the place of sign language in the education of hearing-impaired children in Denmark, the USA, and Sweden. Hearing-impaired people ought to have access to vital information so that, step by step, they can live as useful members of society. Sign language is nonverbal communication that appears as a kind of compensation for the lack of spoken language, and a means of developing thought and the unlimited communicative nature of human beings. Mimic sign language takes a primary place in the education systems for hearing-impaired children in Denmark, the USA, and Sweden. Schools for hearing-impaired children are bilingual; in these schools sign language is used as a language of instruction and is available to every child. Contemporary views and practice indicate that teaching hearing-impaired children through sign language is more effective and more accessible.

  12. Sign language recognition and translation: a multidisciplined approach from the field of artificial intelligence.

    Science.gov (United States)

    Parton, Becky Sue

    2006-01-01

    In recent years, research has progressed steadily in regard to the use of computers to recognize and render sign language. This paper reviews significant projects in the field beginning with finger-spelling hands such as "Ralph" (robotics), CyberGloves (virtual reality sensors to capture isolated and continuous signs), camera-based projects such as the CopyCat interactive American Sign Language game (computer vision), and sign recognition software (Hidden Markov Modeling and neural network systems). Avatars such as "Tessa" (Text and Sign Support Assistant; three-dimensional imaging) and spoken language to sign language translation systems such as Poland's project entitled "THETOS" (Text into Sign Language Automatic Translator, which operates in Polish; natural language processing) are addressed. The application of this research to education is also explored. The "ICICLE" (Interactive Computer Identification and Correction of Language Errors) project, for example, uses intelligent computer-aided instruction to build a tutorial system for deaf or hard-of-hearing children that analyzes their English writing and makes tailored lessons and recommendations. Finally, the article considers synthesized sign, which is being added to educational material and has the potential to be developed by students themselves.

  13. Synesthesia for manual alphabet letters and numeral signs in second-language users of signed languages.

    Science.gov (United States)

    Atkinson, Joanna; Lyons, Tanya; Eagleman, David; Woll, Bencie; Ward, Jamie

    2016-08-01

    Many synesthetes experience colors when viewing letters or digits. We document, for the first time, an analogous phenomenon among users of signed languages who showed color synesthesia for fingerspelled letters and signed numerals. Four synesthetes experienced colors when they viewed manual letters and numerals (in two cases, colors were subjectively projected on to the hands). There was a correspondence between the colors experienced for written graphemes and their manual counterparts, suggesting that the development of these two types of synesthesia is interdependent despite the fact that these systems are superficially distinct and rely on different perceptual recognition mechanisms in the brain.

  14. Signing Earth Science: Accommodations for Students Who Are Deaf or Hard of Hearing and Whose First Language Is Sign

    Science.gov (United States)

    Vesel, J.; Hurdich, J.

    2014-12-01

    TERC and Vcom3D used the SigningAvatar® accessibility software to research and develop a Signing Earth Science Dictionary (SESD) of approximately 750 standards-based Earth science terms for high school students who are deaf and hard of hearing and whose first language is sign. The partners also evaluated the extent to which use of the SESD furthers understanding of Earth science content, command of the language of Earth science, and the ability to study Earth science independently. Disseminated as a Web-based version and App, the SESD is intended to serve the ~36,000 grade 9-12 students who are deaf or hard of hearing and whose first language is sign, the majority of whom leave high school reading at the fifth grade or below. It is also intended for teachers and interpreters who interact with members of this population and professionals working with Earth science education programs during field trips, internships etc. The signed SESD terms have been incorporated into a Mobile Communication App (MCA). This App for Androids is intended to facilitate communication between English speakers and persons who communicate in American Sign Language (ASL) or Signed English. It can translate words, phrases, or whole sentences from written or spoken English to animated signing. It can also fingerspell proper names and other words for which there are no signs. For our presentation, we will demonstrate the interactive features of the SigningAvatar® accessibility software that support the three principles of Universal Design for Learning (UDL) and have been incorporated into the SESD and MCA. Results from national field-tests will provide insight into the SESD's and MCA's potential applicability beyond grade 12 as accommodations that can be used for accessing the vocabulary deaf and hard of hearing students need for study of the geosciences and for facilitating communication about content. This work was funded in part by grants from NSF and the U.S. Department of Education.

  15. Information and Signs: The Language of Images

    Directory of Open Access Journals (Sweden)

    Inna Semetsky

    2010-03-01

    Full Text Available Since time immemorial, philosophers and scientists were searching for a “machine code” of the so-called Mentalese language capable of processing information at the pre-verbal, pre-expressive level. In this paper I suggest that human languages are only secondary to the system of primitive extra-linguistic signs which are hardwired in humans and serve as tools for understanding selves and others; and creating meanings for the multiplicity of experiences. The combinatorial semantics of the Mentalese may find its unorthodox expression in the semiotic system of Tarot images, the latter serving as the ”keys” to the encoded proto-mental information. The paper uses some works in systems theory by Erich Jantsch and Erwin Laszlo and relates Tarot images to the archetypes of the field of collective unconscious posited by Carl Jung. Our subconscious beliefs, hopes, fears and desires, of which we may be unaware at the subjective level, do have an objective compositional structure that may be laid down in front of our eyes in the format of pictorial semiotics representing the universe of affects, thoughts, and actions. Constructing imaginative narratives based on the expressive “language” of Tarot images enables us to anticipate possible consequences and consider a range of future options. The thesis advanced in this paper is also supported by the concept of informational universe of contemporary cosmology.

  16. On the Conventionalization of Mouth Actions in Australian Sign Language.

    Science.gov (United States)

    Johnston, Trevor; van Roekel, Jane; Schembri, Adam

    2016-03-01

    This study investigates the conventionalization of mouth actions in Australian Sign Language. Signed languages were once thought of as simply manual languages because the hands produce the signs which individually and in groups are the symbolic units most easily equated with the words, phrases and clauses of spoken languages. However, it has long been acknowledged that non-manual activity, such as movements of the body, head and the face play a very important role. In this context, mouth actions that occur while communicating in signed languages have posed a number of questions for linguists: are the silent mouthings of spoken language words simply borrowings from the respective majority community spoken language(s)? Are those mouth actions that are not silent mouthings of spoken words conventionalized linguistic units proper to each signed language, culturally linked semi-conventional gestural units shared by signers with members of the majority speaking community, or even gestures and expressions common to all humans? We use a corpus-based approach to gather evidence of the extent of the use of mouth actions in naturalistic Australian Sign Language-making comparisons with other signed languages where data is available--and the form/meaning pairings that these mouth actions instantiate.

  17. Constraints on Negative Prefixation in Polish Sign Language.

    Science.gov (United States)

    Tomaszewski, Piotr

    2015-01-01

    The aim of this article is to describe a negative prefix, NEG-, in Polish Sign Language (PJM) which appears to be indigenous to the language. This is of interest given the relative rarity of prefixes in sign languages. Prefixed PJM signs were analyzed on the basis of both a corpus of texts signed by 15 deaf PJM users who are either native or near-native signers, and material including a specified range of prefixed signs as demonstrated by native signers in dictionary form (i.e. signs produced in isolation, not as part of phrases or sentences). In order to define the morphological rules behind prefixation on both the phonological and morphological levels, native PJM users were consulted for their expertise. The research results can enrich models for describing processes of grammaticalization in the context of the visual-gestural modality that forms the basis for sign language structure.

  18. A Double-Edged Sword: Social Media as a Tool of Online Disinhibition Regarding American Sign Language and Deaf Cultural Experience Marginalization, and as a Tool of Cultural and Linguistic Exposure

    Directory of Open Access Journals (Sweden)

    K. Crom Saunders

    2016-01-01

    Full Text Available Social media has become a venue for social awareness and change through forum discussions and the exchange of viewpoints and information. The rate at which awareness and cultural understanding of specific issues spread has not been quantified, but examining awareness of issues relevant to American Sign Language (ASL) and American Deaf culture indicates that progress in increasing awareness and cultural understanding via social media faces greater friction, and advances less, than it does for issues relevant to other causes and communities, such as feminism, the lesbian, gay, bisexual, and transgender (LGBT) community, or people of color. The research included in this article examines online disinhibition, cyberbullying, and audism as they appear in the real world and online; advocacy for and against Deafness as a cultural identity; and a history of how Deaf people are represented in different forms of media, including social media. The research itself is also examined in terms of who conducts it. The few instances of social media serving the Deaf community in a more positive manner are also examined, to provide contrast and to determine which factors may contribute to greater progress in fostering awareness of Deaf cultural issues without the seemingly constant resistance to, and lack of empathy for, the Deaf community's perspectives on ASL and Deaf culture.

  19. Input Processing at First Exposure to a Sign Language

    Science.gov (United States)

    Ortega, Gerardo; Morgan, Gary

    2015-01-01

    There is growing interest in learners' cognitive capacities to process a second language (L2) at first exposure to the target language. Evidence suggests that L2 learners are capable of processing novel words by exploiting phonological information from their first language (L1). Hearing adult learners of a sign language, however, cannot fall back…

  20. How grammar can cope with limited short-term memory: simultaneity and seriality in sign languages.

    Science.gov (United States)

    Geraci, Carlo; Gozzi, Marta; Papagno, Costanza; Cecchetto, Carlo

    2008-02-01

    It is known that in American Sign Language (ASL) span is shorter than in English, but this discrepancy has never been systematically investigated using other pairs of signed and spoken languages. This finding is at odds with results showing that short-term memory (STM) for signs has an internal organization similar to STM for words. Moreover, some methodological questions remain open. Thus, we measured span of deaf and matched hearing participants for Italian Sign Language (LIS) and Italian, respectively, controlling for all the possible variables that might be responsible for the discrepancy: yet, a difference in span between deaf signers and hearing speakers was found. However, the advantage of hearing subjects was removed in a visuo-spatial STM task. We attribute the source of the lower span to the internal structure of signs: indeed, unlike English (or Italian) words, signs contain both simultaneous and sequential components. Nonetheless, sign languages are fully-fledged grammatical systems, probably because the overall architecture of the grammar of signed languages reduces the STM load. Our hypothesis is that the faculty of language is dependent on STM, being however flexible enough to develop even in a relatively hostile environment.

  1. Evaluating Effects of Language Recognition on Language Rights and the Vitality of New Zealand Sign Language

    Science.gov (United States)

    McKee, Rachel Locker; Manning, Victoria

    2015-01-01

    Status planning through legislation made New Zealand Sign Language (NZSL) an official language in 2006. But this strong symbolic action did not create resources or mechanisms to further the aims of the act. In this article we discuss the extent to which legal recognition and ensuing language-planning activities by state and community have affected…

  2. Signs of Resistance: Peer Learning of Sign Languages within "Oral" Schools for the Deaf

    Science.gov (United States)

    Anglin-Jaffe, Hannah

    2013-01-01

    This article explores the role of the Deaf child as peer educator. In schools where sign languages were banned, Deaf children became the educators of their Deaf peers in a number of contexts worldwide. This paper analyses how this peer education of sign language worked in context by drawing on two examples from boarding schools for the deaf in…

  3. Tools for language: patterned iconicity in sign language nouns and verbs.

    Science.gov (United States)

    Padden, Carol; Hwang, So-One; Lepic, Ryan; Seegers, Sharon

    2015-01-01

    When naming certain hand-held, man-made tools, American Sign Language (ASL) signers exhibit either of two iconic strategies: a handling strategy, where the hands show holding or grasping an imagined object in action, or an instrument strategy, where the hands represent the shape or a dimension of the object in a typical action. The same strategies are also observed in the gestures of hearing nonsigners identifying pictures of the same set of tools. In this paper, we compare spontaneously created gestures from hearing nonsigning participants to commonly used lexical signs in ASL. Signers and gesturers were asked to respond to pictures of tools and to video vignettes of actions involving the same tools. Nonsigning gesturers overwhelmingly prefer the handling strategy for both the Picture and Video conditions. Nevertheless, they use more instrument forms when identifying tools in pictures, and more handling forms when identifying actions with tools. We found that ASL signers generally favor the instrument strategy when naming tools, but when describing tools being used by an actor, they are significantly more likely to use more handling forms. The finding that both gesturers and signers are more likely to alternate strategies when the stimuli are pictures or video suggests a common cognitive basis for differentiating objects from actions. Furthermore, the presence of a systematic handling/instrument iconic pattern in a sign language demonstrates that a conventionalized sign language exploits the distinction for grammatical purpose, to distinguish nouns and verbs related to tool use.

  4. The road to language learning is iconic: evidence from British Sign Language.

    Science.gov (United States)

    Thompson, Robin L; Vinson, David P; Woll, Bencie; Vigliocco, Gabriella

    2012-12-01

    An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.

  5. Sign Vocabulary in Deaf Toddlers Exposed to Sign Language since Birth

    Science.gov (United States)

    Rinaldi, Pasquale; Caselli, Maria Cristina; Di Renzo, Alessio; Gulli, Tiziana; Volterra, Virginia

    2014-01-01

    Lexical comprehension and production is directly evaluated for the first time in deaf signing children below the age of 3 years. A Picture Naming Task was administered to 8 deaf signing toddlers (aged 2-3 years) who were exposed to Sign Language since birth. Results were compared with data of hearing speaking controls. In both deaf and hearing…

  6. Lexical access in Catalan Sign Language (LSC) production

    NARCIS (Netherlands)

    C. Baus; E. Gutiérrez-Sigut; J. Quer; M. Carreiras

    2008-01-01

    This paper investigates whether the semantic and phonological levels in speech production are specific to spoken languages or universal across modalities. We examined semantic and phonological effects during Catalan Signed Language (LSC: Llengua de Signes Catalana) production using an adaptation of

  7. Someone should know how to use sign language.

    Science.gov (United States)

    Gibson, Christine

    1999-07-28

    The article 'Audiology and hearing impairment - improving the quality of care' (Art&Science July 14) was very interesting and clearly written. However as the mother of a deaf child who uses total communication - speech, lipreading and British Sign Language, I was amazed that in the section on communication there was no mention of sign language.

  8. Meemul Tziij: An Indigenous Sign Language Complex of Mesoamerica

    Science.gov (United States)

    Tree, Erich Fox

    2009-01-01

    This article examines sign languages that belong to a complex of indigenous sign languages in Mesoamerica that K'iche'an Maya people of Guatemala refer to collectively as Meemul Tziij. It explains the relationship between the Meemul Tziij variety of the Yukatek Maya village of Chican (state of Yucatan, Mexico) and the hitherto undescribed Meemul…

  9. The Effect of New Technologies on Sign Language Research

    Science.gov (United States)

    Lucas, Ceil; Mirus, Gene; Palmer, Jeffrey Levi; Roessler, Nicholas James; Frost, Adam

    2013-01-01

    This paper first reviews the fairly established ways of collecting sign language data. It then discusses the new technologies available and their impact on sign language research, both in terms of how data is collected and what new kinds of data are emerging as a result of technology. New data collection methods and new kinds of data are…

  10. Neural stages of spoken, written, and signed word processing in beginning second language learners.

    Science.gov (United States)

    Leonard, Matthew K; Ferjan Ramirez, Naja; Torres, Christina; Hatrak, Marla; Mayberry, Rachel I; Halgren, Eric

    2013-01-01

    We combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine how sensory modality, language type, and language proficiency interact during two fundamental stages of word processing: (1) an early word encoding stage, and (2) a later supramodal lexico-semantic stage. Adult native English speakers who were learning American Sign Language (ASL) performed a semantic task for spoken and written English words, and ASL signs. During the early time window, written words evoked responses in left ventral occipitotemporal cortex, and spoken words in left superior temporal cortex. Signed words evoked activity in right intraparietal sulcus that was marginally greater than for written words. During the later time window, all three types of words showed significant activity in the classical left fronto-temporal language network, the first demonstration of such activity in individuals with so little second language (L2) instruction in sign. In addition, a dissociation between semantic congruity effects and overall MEG response magnitude for ASL responses suggested shallower and more effortful processing, presumably reflecting novice L2 learning. Consistent with previous research on non-dominant language processing in spoken languages, the L2 ASL learners also showed recruitment of right hemisphere and lateral occipital cortex. These results demonstrate that late lexico-semantic processing utilizes a common substrate, independent of modality, and that proficiency effects in sign language are comparable to those in spoken language.

  11. A dictionary of Astronomy for the French Sign Language (LSF)

    Science.gov (United States)

    Proust, Dominique; Abbou, Daniel; Chab, Nasro

    2011-06-01

    For several years, the French deaf community has had access to astronomy at the Paris-Meudon observatory through teaching specifically adapted to French Sign Language (Langue des Signes Française, LSF), including direct observations with the observatory telescopes. From this experience, an encyclopedic dictionary of astronomy, The Hands in the Stars, is now available, containing more than 200 astronomical concepts. Many of them did not previously exist in Sign Language and can now be fully expressed and explained.

  12. Examination of Sign Language Education According to the Opinions of Members from a Basic Sign Language Certification Program

    Science.gov (United States)

    Akmese, Pelin Pistav

    2016-01-01

    Being hearing impaired limits one's ability to communicate in that it affects all areas of development, particularly speech. One of the methods the hearing impaired use to communicate is sign language. This study, a descriptive study, intends to examine the opinions of individuals who had enrolled in a sign language certification program by using…

  13. Neural systems supporting linguistic structure, linguistic experience, and symbolic communication in sign language and gesture.

    Science.gov (United States)

    Newman, Aaron J; Supalla, Ted; Fernandez, Nina; Newport, Elissa L; Bavelier, Daphne

    2015-09-15

    Sign languages used by deaf communities around the world possess the same structural and organizational properties as spoken languages: In particular, they are richly expressive and also tightly grammatically constrained. They therefore offer the opportunity to investigate the extent to which the neural organization for language is modality independent, as well as to identify ways in which modality influences this organization. The fact that sign languages share the visual-manual modality with a nonlinguistic symbolic communicative system-gesture-further allows us to investigate where the boundaries lie between language and symbolic communication more generally. In the present study, we had three goals: to investigate the neural processing of linguistic structure in American Sign Language (using verbs of motion classifier constructions, which may lie at the boundary between language and gesture); to determine whether we could dissociate the brain systems involved in deriving meaning from symbolic communication (including both language and gesture) from those specifically engaged by linguistically structured content (sign language); and to assess whether sign language experience influences the neural systems used for understanding nonlinguistic gesture. The results demonstrated that even sign language constructions that appear on the surface to be similar to gesture are processed within the left-lateralized frontal-temporal network used for spoken languages-supporting claims that these constructions are linguistically structured. Moreover, although nonsigners engage regions involved in human action perception to process communicative, symbolic gestures, signers instead engage parts of the language-processing network-demonstrating an influence of experience on the perception of nonlinguistic stimuli.

  14. Looking through phonological shape to lexical meaning: the bottleneck of non-native sign language processing.

    Science.gov (United States)

    Mayberry, R I; Fischer, S D

    1989-11-01

    In two studies, we find that native and non-native acquisition show different effects on sign language processing. Subjects were all born deaf and used sign language for interpersonal communication, but first acquired it at ages ranging from birth to 18. In the first study, deaf signers shadowed (simultaneously watched and reproduced) sign language narratives given in two dialects, American Sign Language (ASL) and Pidgin Sign English (PSE), in both good and poor viewing conditions. In the second study, deaf signers recalled and shadowed grammatical and ungrammatical ASL sentences. In comparison with non-native signers, natives were more accurate, comprehended better, and made different kinds of lexical changes; natives primarily changed signs in relation to sign meaning independent of the phonological characteristics of the stimulus. In contrast, non-native signers primarily changed signs in relation to the phonological characteristics of the stimulus independent of lexical and sentential meaning. Semantic lexical changes were positively correlated to processing accuracy and comprehension, whereas phonological lexical changes were negatively correlated. The effects of non-native acquisition were similar across variations in the sign dialect, viewing condition, and processing task. The results suggest that native signers process lexical structure automatically, such that they can attend to and remember lexical and sentential meaning. In contrast, non-native signers appear to allocate more attention to the task of identifying phonological shape such that they have less attention available for retrieval and memory of lexical meaning.

  15. Australian Aboriginal Deaf People and Aboriginal Sign Language

    Science.gov (United States)

    Power, Des

    2013-01-01

    Many Australian Aboriginal people use a sign language ("hand talk") that mirrors their local spoken language and is used both in culturally appropriate settings when speech is taboo or counterindicated and for community communication. The characteristics of these languages are described, and early European settlers' reports of deaf Aboriginal…

  16. On the syntax of spatial adpositions in sign languages

    NARCIS (Netherlands)

    Pfau, R.; Aboh, E.O.

    2012-01-01

    In investigations of sign language grammar - phonology, morphology, and syntax - the impact of language modality on grammar is a recurrent issue. The term 'modality,' as used in this context, refers to the distinction between languages that are expressed and perceived in the oral-auditive modality (

  18. Observations on Word Order in Saudi Arabian Sign Language

    Science.gov (United States)

    Sprenger, Kristen; Mathur, Gaurav

    2012-01-01

    This article focuses on the syntactic level of the grammar of Saudi Arabian Sign Language by exploring some word orders that occur in personal narratives in the language. Word order is one of the main ways in which languages indicate the main syntactic roles of subjects, verbs, and objects; others are verbal agreement and nominal case morphology.…

  19. A Hearer's Insight into Deaf Sign Language Folklore

    Directory of Open Access Journals (Sweden)

    Liina Paales

    2004-10-01

    Full Text Available The article discusses Estonian deaf lore, which comprises all folklore genres including specific language creation or sign lore characteristic of the deaf. Estonian sign language lore contains material of local as well as of international origin. The latter group includes several humorous tales that have spread mostly through the cultural contacts of the younger generation of the deaf. Hearers’ lore has also exerted its influence on deaf lore. Local deaf lore includes memories of school years and family lore of members of the Estonian deaf community, sign lore based on Estonian sign language, etc. The main features of Estonian deaf lore are (i) the specific communicative form, i.e. sign language performance; (ii) the minority group of lore transmitters, i.e. the Estonian deaf community; (iii) group-centred interpretation of hearing loss.

  20. Vygotsky, sign language, and the education of deaf pupils.

    Science.gov (United States)

    Zaitseva, G; Pursglove, M; Gregory, S

    1999-01-01

    This article considers the impact of Vygotsky on the education of deaf children in Russia and is a translation/adaptation of an article currently being published in Defektologiia. While Vygotsky perceived sign language as limited in some respects, he nevertheless always considered that it had a role in the education of deaf pupils. He believed that sign language should not be 'treated like an enemy' and said that 'bilingualism of deaf people is an objective reality'. However, sign language was banned from Russian schools following a conference decision in 1938. The changing political climate in Russia has led to the reevaluation of many aspects of life, including approaches to education, and to a reassessment of Vygotsky's ideas and an appreciation of their continuing relevance. Among other things, this has resulted in a reevaluation of the role of sign language for deaf pupils and an emerging interest in sign bilingualism.

  1. Failing American Indian Languages

    Science.gov (United States)

    Meek, Barbara A.

    2011-01-01

    This article critically examines the mediating role of scholarly expectations and the unexpected in the management--and transcendence--of failure/success as these concepts relate to language revitalization. Deloria remarks that, "expectations tend to assume a status quo defined around failure, the result of some innate limitation on the part of…

  2. Persian Sign Language Recognition Using Radial Distance and Fourier Transform

    Directory of Open Access Journals (Sweden)

    Bahare Jalilian

    2013-11-01

    Full Text Available This paper provides a novel hand gesture recognition method to recognize 32 static signs of the Persian Sign Language (PSL) alphabet. Accurate hand segmentation is the first and an important step in sign language recognition systems. Here, we propose a method for hand segmentation that helps to build a better vision-based sign language recognition system. The proposed method is based on the YCbCr color space, a single Gaussian model, and Bayes' rule. It detects the hand region against complex backgrounds and under non-uniform illumination. Hand gesture features are extracted by radial distance and the Fourier transform. Finally, the Euclidean distance is used to compute the similarity between the input signs and all training feature vectors in the database. The system is tested on 480 posture images of PSL, 15 images for each of the 32 signs. Experimental results show that our approach is capable of recognizing all 32 PSL alphabet signs with a 95.62% recognition rate.
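
    The pipeline described in this abstract (YCbCr skin segmentation with a single Gaussian model and Bayes' rule, a radial-distance/Fourier shape descriptor, and matching by Euclidean distance) can be illustrated with a rough Python sketch. The code below only approximates that description and is not the authors' implementation; all function names, the flat non-skin likelihood, and the parameter values are assumptions.

        # Illustrative sketch of the described PSL pipeline (not the paper's code).
        import numpy as np
        import cv2

        def skin_mask(bgr, mean_cbcr, cov_cbcr, prior=0.5, thresh=0.5):
            """Single-Gaussian skin model on the chrominance channels plus a Bayes decision."""
            ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float64)
            chroma = ycrcb[:, :, 1:3].reshape(-1, 2)
            diff = chroma - mean_cbcr
            inv_cov = np.linalg.inv(cov_cbcr)
            norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov_cbcr)))
            skin_lik = norm * np.exp(-0.5 * np.sum((diff @ inv_cov) * diff, axis=1))
            # Assumes a flat non-skin likelihood; the paper's exact decision rule may differ.
            posterior = skin_lik * prior / (skin_lik * prior + 1e-3 * (1.0 - prior))
            return (posterior > thresh).reshape(bgr.shape[:2]).astype(np.uint8)

        def radial_fourier_descriptor(mask, n_points=128):
            """Radial distance from centroid to contour, resampled, then Fourier magnitudes."""
            # OpenCV 4 return signature: (contours, hierarchy).
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
            contour = max(contours, key=cv2.contourArea).squeeze(1)
            radii = np.linalg.norm(contour - contour.mean(axis=0), axis=1)
            idx = np.linspace(0, len(radii) - 1, n_points).astype(int)
            spectrum = np.abs(np.fft.fft(radii[idx]))
            return spectrum / (spectrum[0] + 1e-9)          # crude scale normalisation

        def nearest_sign(feature, train_features, train_labels):
            """Classify by minimum Euclidean distance to the training feature vectors."""
            dists = np.linalg.norm(train_features - feature, axis=1)
            return train_labels[int(np.argmin(dists))]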

  3. Aphasia in a user of British Sign Language: Dissociation between sign and gesture.

    Science.gov (United States)

    Marshall, Jane; Atkinson, Jo; Smulovitch, Elaine; Thacker, Alice; Woll, Bencie

    2004-07-01

    This paper reports a single case investigation of "Charles", a Deaf man with sign language aphasia following a left CVA. Anomia, or a deficit in sign retrieval, was a prominent feature of his aphasia, and this showed many of the well-documented characteristics of speech anomia. For example, sign retrieval was sensitive to familiarity, it could be cued, and there were both semantic and phonological errors. Like a previous case in the literature (Corina, Poizner, Bellugi, Feinberg, Dowd, & O'Grady-Batch, 1992), Charles demonstrated a striking dissociation between sign and gesture, since his gesture production was relatively intact. This dissociation was impervious to the iconicity of signs. So, Charles' sign production showed no effect of iconicity, and gesture production was superior to sign production even when the forms of the signs and gestures were similar. The implications of these findings for models of sign and gesture production are discussed.

  4. The role of syllables in sign language production.

    Science.gov (United States)

    Baus, Cristina; Gutiérrez, Eva; Carreiras, Manuel

    2014-01-01

    The aim of the present study was to investigate the functional role of syllables in sign language and how the different phonological combinations influence sign production. Moreover, the influence of age of acquisition was evaluated. Deaf signers (native and non-native) of Catalan Signed Language (LSC) were asked in a picture-sign interference task to sign picture names while ignoring distractor-signs with which they shared two phonological parameters (out of three of the main sign parameters: Location, Movement, and Handshape). The results revealed a different impact of the three phonological combinations. While no effect was observed for the phonological combination Handshape-Location, the combination Handshape-Movement slowed down signing latencies, but only in the non-native group. A facilitatory effect was observed for both groups when pictures and distractors shared Location-Movement. Importantly, linguistic models have considered this phonological combination to be a privileged unit in the composition of signs, as syllables are in spoken languages. Thus, our results support the functional role of syllable units during phonological articulation in sign language production.

  5. Static sign language recognition using 1D descriptors and neural networks

    Science.gov (United States)

    Solís, José F.; Toxqui, Carina; Padilla, Alfonso; Santiago, César

    2012-10-01

    A framework for static sign language recognition using descriptors that represent 2D images as 1D data, together with artificial neural networks, is presented in this work. The 1D descriptors were computed by two methods: the first consists of a rotational correlation operator, and the second is based on contour analysis of the hand shape. One of the main problems in sign language recognition is segmentation; most papers report using a special color for gloves or the background for hand shape analysis. In order to avoid the use of gloves or special clothing, a thermal imaging camera was used to capture images. Static signs for the digits 1 to 9 of American Sign Language were used, and a multilayer perceptron reached 100% recognition with cross-validation.
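
    As a rough illustration of the approach sketched in this abstract (a 1D contour-based descriptor fed to a multilayer perceptron), the following Python sketch reduces a segmented hand silhouette to a fixed-length radial signature and cross-validates an MLP over it. It is not the authors' code; the descriptor, network size, and all names are assumptions.

        # Hypothetical 1D contour descriptor plus MLP classifier (illustrative only).
        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import cross_val_score

        def contour_signature(binary_image, n_bins=64):
            """Collapse a 2D silhouette to 1D: maximum radius per angular bin around the centroid."""
            ys, xs = np.nonzero(binary_image)
            cx, cy = xs.mean(), ys.mean()
            angles = np.arctan2(ys - cy, xs - cx)
            radii = np.hypot(xs - cx, ys - cy)
            edges = np.linspace(-np.pi, np.pi, n_bins + 1)
            signature = np.zeros(n_bins)
            for i in range(n_bins):
                in_bin = (angles >= edges[i]) & (angles < edges[i + 1])
                if in_bin.any():
                    signature[i] = radii[in_bin].max()
            return signature / (signature.max() + 1e-9)     # scale normalisation

        def evaluate(descriptors, labels):
            """Cross-validated accuracy of a small multilayer perceptron on the 1D descriptors."""
            clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
            return cross_val_score(clf, descriptors, labels, cv=5).mean()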

  6. Legal Pathways to the Recognition of Sign Languages: A Comparison of the Catalan and Spanish Sign Language Acts

    Science.gov (United States)

    Quer, Josep

    2012-01-01

    Despite being minority languages like many others, sign languages have traditionally remained absent from the agendas of policy makers and language planning and policies. In the past two decades, though, this situation has started to change at different paces and to different degrees in several countries. In this article, the author describes the…

  7. Sign vocabulary in deaf toddlers exposed to sign language since birth.

    Science.gov (United States)

    Rinaldi, Pasquale; Caselli, Maria Cristina; Di Renzo, Alessio; Gulli, Tiziana; Volterra, Virginia

    2014-07-01

    Lexical comprehension and production is directly evaluated for the first time in deaf signing children below the age of 3 years. A Picture Naming Task was administered to 8 deaf signing toddlers (aged 2-3 years) who were exposed to Sign Language since birth. Results were compared with data of hearing speaking controls. In both deaf and hearing children, comprehension was significantly higher than production. The deaf group provided a significantly lower number of correct responses in production than did the hearing controls, whereas in comprehension, the 2 groups did not differ. Difficulty and ease of items in comprehension and production was similar for signing deaf children and hearing speaking children, showing that, despite size differences, semantic development followed similar paths. For signing children, predicates production appears easier than nominals production compared with hearing children acquiring spoken language. Findings take into account differences in input modalities and language structures.

  8. ONE GRAMMAR OR TWO? Sign Languages and the Nature of Human Language.

    Science.gov (United States)

    Lillo-Martin, Diane C; Gajewski, Jon

    2014-01-01

    Linguistic research has identified abstract properties that seem to be shared by all languages - such properties may be considered defining characteristics. In recent decades, the recognition that human language is found not only in the spoken modality, but also in the form of sign languages, has led to a reconsideration of some of these potential linguistic universals. In large part, the linguistic analysis of sign languages has led to the conclusion that universal characteristics of language can be stated at an abstract enough level to include languages in both spoken and signed modalities. For example, languages in both modalities display hierarchical structure at sub-lexical and phrasal level, and recursive rule application. However, this does not mean that modality-based differences between signed and spoken languages are trivial. In this article, we consider several candidate domains for modality effects, in light of the overarching question: are signed and spoken languages subject to the same abstract grammatical constraints, or is a substantially different conception of grammar needed for the sign language case? We look at differences between language types based on the use of space, iconicity, and the possibility for simultaneity in linguistic expression. The inclusion of sign languages does support some broadening of the conception of human language - in ways that are applicable for spoken languages as well. Still, the overall conclusion is that one grammar applies for human language, no matter the modality of expression.

  9. Astrological signs and personality in Kuwaitis and Americans.

    Science.gov (United States)

    Abdel-Khalek, Ahmed; Lester, David

    2006-04-01

    Samples of Kuwaiti (N=460) and American (N=273) undergraduates responded to six personality questionnaires to assess optimism, pessimism, suicidal ideation, ego-grasping, death anxiety, general anxiety, and obsessive-compulsiveness. Each participant was assigned to the astrological sign associated with their date of birth. One-way analyses of variance yielded nonsignificant F ratios for all seven scales in both the Kuwaiti and American samples, except for anxiety scores among Americans. It was concluded that there was little support for an association between astrological sun signs and scores on the present personality scales.

  10. Directionality Effects in Simultaneous Language Interpreting: The Case of Sign Language Interpreters in the Netherlands

    Science.gov (United States)

    van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan

    2011-01-01

    The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of the Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives…

  11. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    OpenAIRE

    2005-01-01

    Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and that they are highly computationally scalable with the number of classes. Based on...
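
    The abstract's point that polynomial classifiers avoid iterative training can be made concrete with a minimal sketch: expand the input features polynomially, then fit one-hot class targets with a single closed-form least-squares solve. The class below only illustrates that general idea; it is not taken from the cited paper, and the degree and interface are assumptions.

        # Minimal polynomial classifier: polynomial expansion + closed-form least squares.
        import numpy as np
        from sklearn.preprocessing import PolynomialFeatures

        class PolynomialClassifier:
            def __init__(self, degree=2):
                self.expand = PolynomialFeatures(degree=degree)

            def fit(self, X, y):
                P = self.expand.fit_transform(X)                  # polynomial basis expansion
                self.classes_ = np.unique(y)
                targets = (np.asarray(y)[:, None] == self.classes_[None, :]).astype(float)
                # One least-squares solve gives all class weight vectors; no iterative training.
                self.weights_, *_ = np.linalg.lstsq(P, targets, rcond=None)
                return self

            def predict(self, X):
                scores = self.expand.transform(X) @ self.weights_
                return self.classes_[scores.argmax(axis=1)]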

  12. Segmentation of British Sign Language (BSL): mind the gap!

    Science.gov (United States)

    Orfanidou, Eleni; McQueen, James M; Adam, Robert; Morgan, Gary

    2015-01-01

    This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were preceded by nonsense signs which were produced in either the target location or another location (with a small or large transition). Half of the transitions were within the same major body area (e.g., head) and half were across body areas (e.g., chest to hand). Deaf adult BSL users (a group of natives and early learners, and a group of late learners) spotted target signs best when there was a minimal transition and worst when there was a large transition. When location changes were present, both groups performed better when transitions were to a different body area than when they were within the same area. These findings suggest that transitions do not provide explicit sign-boundary cues in a modality-specific fashion. Instead, we argue that smaller transitions help recognition in a modality-general way by limiting lexical search to signs within location neighbourhoods, and that transitions across body areas also aid segmentation in a modality-general way, by providing a phonotactic cue to a sign boundary. We propose that sign segmentation is based on modality-general procedures which are core language-processing mechanisms.

  13. Lexical access in sign language: a computational model.

    Science.gov (United States)

    Caselli, Naomi K; Cohen-Goldberg, Ariel M

    2014-01-01

    Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.

  14. Lexical access in sign language: A computational model

    Directory of Open Access Journals (Sweden)

    Naomi Kenney Caselli

    2014-05-01

    Full Text Available Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.
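
    To make the idea of a spreading activation architecture concrete, the toy sketch below lets lexical nodes receive excitation in proportion to how many sub-lexical units (e.g., handshape, location) they share with the input, while strongly activated neighbours inhibit one another. This is only a schematic illustration of the model family described above, not the authors' implementation; the update rule, parameters, and example signs are assumptions.

        # Toy spreading-activation lexicon (illustrative; parameters are arbitrary).
        import numpy as np

        def spread_activation(lexicon, input_units, steps=20,
                              excite=0.10, inhibit=0.20, threshold=0.20, decay=0.05):
            """lexicon: dict sign -> set of sub-lexical units; input_units: units in the stimulus."""
            signs = list(lexicon)
            act = np.zeros(len(signs))
            for _ in range(steps):
                # Bottom-up excitation: proportional to shared sub-lexical units with the input.
                overlap = np.array([len(lexicon[s] & input_units) for s in signs], dtype=float)
                # Lateral inhibition: only neighbours above threshold send inhibition.
                strong = np.where(act > threshold, act, 0.0)
                lateral = strong.sum() - strong
                act = np.clip(act + excite * overlap - inhibit * lateral - decay * act, 0.0, 1.0)
            return dict(zip(signs, act))

        # Example: two signs sharing location with the input compete; an unrelated sign stays low.
        lexicon = {"SIGN-A": {"hs:B", "loc:chin"},
                   "SIGN-B": {"hs:5", "loc:chin"},
                   "SIGN-C": {"hs:1", "loc:chest"}}
        print(spread_activation(lexicon, {"hs:B", "loc:chin"}))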

  15. Modality-specific processing precedes amodal linguistic processing during L2 sign language acquisition: A longitudinal study.

    Science.gov (United States)

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-01

    The present study tracked activation pattern differences in response to sign language processing by late hearing second language learners of American Sign Language. Learners were scanned before the start of their language courses. They were scanned again after their first semester of instruction and their second, for a total of 10 months of instruction. The study aimed to characterize modality-specific to modality-general processing throughout the acquisition of sign language. Results indicated that before the acquisition of sign language, neural substrates related to modality-specific processing were present. After approximately 45 h of instruction, the learners transitioned into processing signs on a phonological basis (e.g., supramarginal gyrus, putamen). After one more semester of input, learners transitioned once more to a lexico-semantic processing stage (e.g., left inferior frontal gyrus) at which language control mechanisms (e.g., left caudate, cingulate gyrus) were activated. During these transitional steps right hemispheric recruitment was observed, with increasing left-lateralization, which is similar to other native signers and L2 learners of spoken language; however, specialization for sign language processing with activation in the inferior parietal lobule (i.e., angular gyrus), even for late learners, was observed. As such, the present study is the first to track L2 acquisition of sign language learners in order to characterize modality-independent and modality-specific mechanisms for bilingual language processing.

  16. Information Status and Word Order in Croatian Sign Language

    Science.gov (United States)

    Milkovic, Marina; Bradaric-Joncic, Sandra; Wilbur, Ronnie B.

    2007-01-01

    This paper presents the results of research on information structure and word order in narrative sentences taken from signed short stories in Croatian Sign Language (HZJ). The basic word order in HZJ is SVO. Factors that result in other word orders include: reversible arguments, verb categories, locative constructions, contrastive focus, and prior…

  17. An Intelligent Computer-Based System for Sign Language Tutoring

    Science.gov (United States)

    Ritchings, Tim; Khadragi, Ahmed; Saeb, Magdy

    2012-01-01

    A computer-based system for sign language tutoring has been developed using a low-cost data glove and a software application that processes the movement signals for signs in real-time and uses Pattern Matching techniques to decide if a trainee has closely replicated a teacher's recorded movements. The data glove provides 17 movement signals from…
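
    The abstract does not spell out the pattern matching step, but one simple way such a tutoring system could score a trainee against a teacher's recording is sketched below: resample both multichannel glove recordings to a common length and average the per-channel correlations. Everything here (function names, the correlation measure, the 0.9 threshold) is an assumption for illustration, not the cited system's actual method.

        # Hypothetical similarity score between teacher and trainee glove recordings.
        import numpy as np

        def resample(recording, n=100):
            """Resample a (time, channels) recording to a fixed number of time samples."""
            recording = np.asarray(recording, dtype=float)
            t_old = np.linspace(0.0, 1.0, len(recording))
            t_new = np.linspace(0.0, 1.0, n)
            return np.stack([np.interp(t_new, t_old, recording[:, c])
                             for c in range(recording.shape[1])], axis=1)

        def similarity(teacher, trainee):
            """Mean per-channel Pearson correlation between the two recordings."""
            a, b = resample(teacher), resample(trainee)
            scores = []
            for c in range(a.shape[1]):
                x, y = a[:, c] - a[:, c].mean(), b[:, c] - b[:, c].mean()
                denom = np.sqrt((x * x).sum() * (y * y).sum())
                scores.append((x * y).sum() / denom if denom > 0 else 1.0)
            return float(np.mean(scores))

        def is_close_replication(teacher, trainee, threshold=0.9):
            return similarity(teacher, trainee) >= threshold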

  18. Diglossia and British Sign Language. Sociolinguistic Working Paper Number 46.

    Science.gov (United States)

    Deuchar, Margaret

    A study of the nature and function of British Sign Language (BSL) as used in the British deaf community is described. The study examined two hypotheses: (1) that the notion of diglossia applies to the British deaf signing community, and (2) that the low variety of BSL will exploit the visual medium in its grammar to a greater extent than the high…

  19. Investigating Deaf Children's Vocabulary Knowledge in British Sign Language

    Science.gov (United States)

    Mann, Wolfgang; Marshall, Chloe

    2012-01-01

    This study explores different aspects of the mapping between phonological form and meaning of signs in British Sign Language (BSL) by means of four tasks to measure meaning recognition, form recognition, form recall, and meaning recall. The aim was to investigate whether there is a hierarchy of difficulty for these tasks and, therefore, whether…

  20. Selected Lexical Patterns in Saudi Arabian Sign Language

    Science.gov (United States)

    Young, Lesa; Palmer, Jeffrey Levi; Reynolds, Wanette

    2012-01-01

    This combined paper will focus on the description of two selected lexical patterns in Saudi Arabian Sign Language (SASL): metaphor and metonymy in emotion-related signs (Young) and lexicalization patterns of objects and their derivational roots (Palmer and Reynolds). The overarching methodology used by both studies is detailed in Stephen and…

  1. Basic Color Terms in Estonian Sign Language

    Science.gov (United States)

    Hollman, Liivi; Sutrop, Urmas

    2011-01-01

    The article is written in the tradition of Brent Berlin and Paul Kay's theory of basic color terms. According to this theory there is a universal inventory of eleven basic color categories from which the basic color terms of any given language are always drawn. The number of basic color terms varies from 2 to 11 and in a language having a fully…

  2. Brazilian Sign Language Lexicography and Technology: Dictionary, Digital Encyclopedia, Chereme-based Sign Retrieval, and Quadriplegic Deaf Communication Systems.

    Science.gov (United States)

    Capovilla, Fernando C.; Duduchi, Marcelo; Raphael, Walkiria D.; Luz, Renato D.; Rozados, Daniela; Capovilla, Alessandra G. S.; Macedo, Elizeu C.

    2003-01-01

    Discusses the Brazilian Sign language digital encyclopedia, which contains a databank of 5,600 signs glossed in Portuguese and English, along with descriptions and illustrations of their signed form. (Author/VWL)

  3. Writing Profiles of Deaf Children Taught through British Sign Language

    Science.gov (United States)

    Burman, Diana; Nunes, Terezinha; Evans, Deborah

    2007-01-01

    Congenitally, profoundly deaf children whose first language is British Sign Language (BSL) and whose speech is largely unintelligible need to be literate to communicate effectively in a hearing society. Both spelling and writing skills of such children can be limited, to the extent that no currently available assessment method offers an adequate…

  4. The Influence of English on British Sign Language.

    Science.gov (United States)

    Sutton-Spence, Rachel

    1999-01-01

    Details the influence of English on British Sign Language (BSL) at the syntactic, morphological, lexical, idiomatic, and phonological levels. Shows how BSL uses loan translations, fingerspellings, and the use of mouth patterns derived from English language spoken words to include elements from English. (Author/VWL)

  5. Ideologies and Attitudes toward Sign Languages: An Approximation

    Science.gov (United States)

    Krausneker, Verena

    2015-01-01

    Attitudes are complex and little research in the field of linguistics has focused on language attitudes. This article deals with attitudes toward sign languages and those who use them--attitudes that are influenced by ideological constructions. The article reviews five categories of such constructions and discusses examples in each one.

  6. On Selected Morphemes in Saudi Arabian Sign Language

    Science.gov (United States)

    Morris, Carla; Schneider, Erin

    2012-01-01

    Following a year of study of Saudi Arabian Sign Language (SASL), we are documenting our findings to provide a grammatical sketch of the language. This paper represents one part of that endeavor and focuses on a description of selected morphemes, both manual and non-manual, that have appeared in the course of data collection. While some of the…

  7. Sign Languages: Contribution to Neurolinguistics from Cross-Modal Research

    Science.gov (United States)

    Malaia, Evie; Wilbur, Ronnie

    2010-01-01

    Using sign language research as an example, we argue that both the cross-linguistic descriptive approach to data advocated by Evans and Levinson (2009) and abstract (‘formal’) analyses are necessary steps towards the development of “neurolinguistic primitives” for investigating how human languages are instantiated in the brain. PMID:20953339

  8. Sign Language Planning in the Netherlands between 1980 and 2010

    Science.gov (United States)

    Schermer, Trude

    2012-01-01

    This article discusses several aspects of language planning with respect to Sign Language of the Netherlands, or Nederlandse Gebarentaal (NGT). For nearly thirty years members of the Deaf community, the Dutch Deaf Council (Dovenschap) have been working together with researchers, several organizations in deaf education, and the organization of…

  9. The Digital Playground: Kindergarten Children Learning Sign Language through Multimedia

    Science.gov (United States)

    Ellis, Kirsten; Blashki, Kathy

    2007-01-01

    The article discusses a study of 4-5 year old children's use of technology to assist and enhance the acquisition of a play lexicon within a formal educational setting. The new language system to be learned was Auslan, a signed/nonverbal language. A purpose specific software program was developed by the authors, "Auslan Kids," in order to…

  10. Recognition of Indian Sign Language in Live Video

    Science.gov (United States)

    Singha, Joyeeta; Das, Karen

    2013-05-01

    Sign Language Recognition has emerged as one of the important areas of research in Computer Vision. The difficulty faced by researchers is that instances of signs vary with both motion and appearance. Thus, in this paper a novel approach for recognizing various alphabets of Indian Sign Language is proposed, in which continuous video sequences of the signs are considered. The proposed system comprises three stages: preprocessing, feature extraction and classification. The preprocessing stage includes skin filtering and histogram matching. Eigenvalues and eigenvectors are used in the feature extraction stage, and finally an eigenvalue-weighted Euclidean distance is used to recognize the sign. The system deals with bare hands, thus allowing the user to interact with it in a natural way. We considered 24 different alphabets in the video sequences and attained a success rate of 96.25%.
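
    The pipeline above (skin filtering and histogram matching, eigen-decomposition for features, and an eigenvalue-weighted Euclidean distance for classification) can be illustrated with a short sketch. The fragment below is not the authors' code: the preprocessing is assumed to have already produced flattened feature vectors, and all function and variable names are illustrative.

```python
import numpy as np

def eigen_features(train_vectors, n_components=16):
    """PCA-style feature extraction: eigenvalues/eigenvectors of the
    covariance of preprocessed, flattened training vectors."""
    mean = train_vectors.mean(axis=0)
    centred = train_vectors - mean
    cov = np.cov(centred, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigh returns ascending order
    order = np.argsort(eigvals)[::-1][:n_components]  # keep the largest components
    return mean, eigvals[order], eigvecs[:, order]

def project(vectors, mean, eigvecs):
    """Project (centred) vectors onto the retained eigenvectors."""
    return (vectors - mean) @ eigvecs

def weighted_euclidean(a, b, eigvals):
    """Eigenvalue-weighted Euclidean distance between two projected vectors."""
    w = eigvals / eigvals.sum()
    return np.sqrt(np.sum(w * (a - b) ** 2))

def classify(test_vec, templates, labels, mean, eigvals, eigvecs):
    """Assign the label of the nearest training template."""
    q = project(test_vec[None, :], mean, eigvecs)[0]
    dists = [weighted_euclidean(q, t, eigvals) for t in templates]
    return labels[int(np.argmin(dists))]

# Hypothetical usage: X_train holds one flattened, skin-filtered frame per row,
# y_train holds the corresponding alphabet labels, x_test is one test frame.
# mean, eigvals, eigvecs = eigen_features(X_train)
# templates = project(X_train, mean, eigvecs)
# predicted = classify(x_test, templates, y_train, mean, eigvals, eigvecs)
```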

  11. On the System of Place Name Signs in Estonian Sign Language

    Directory of Open Access Journals (Sweden)

    Liina Paales

    2010-12-01

    Full Text Available A place name sign is a linguistic-cultural marker that includes both memory and landscape. The author regards toponymic signs in Estonian Sign Language as representations of images held by the Estonian Deaf community: they reflect the geographical place, the period, the relationships of the Deaf community with the hearing community, and the common and distinguishing features of the two cultures perceived by the community's members. Name signs represent an element of signlore, which includes various types of creative linguistic play. There are stories hidden behind place name signs that reveal their etymological origin and reflect the community's memory. The purpose of this article is twofold. Firstly, it aims to introduce Estonian place name signs as Deaf signlore forms, analyse their structure and specify the main formation methods. Secondly, it interprets place-denoting signs in the light of understanding the foundations of Estonian Sign Language, Estonian Deaf education and education history, the traditions of local Deaf communities, and also of the cultural and local traditions of the dominant hearing communities. Both perspectives - linguistic and folkloristic - are represented in the current article.

  12. American Sign Language and Early Intervention

    Science.gov (United States)

    Snoddon, Kristin

    2008-01-01

    Since the beginning of the twenty-first century, the introduction in several countries of universal neonatal hearing screening programs has changed the landscape of education for deaf children. Due to the increasing provision of early intervention services for children identified with hearing loss, public education for deaf children often starts…

  13. The Assessment and Achievement of Proficiency in a Native Sign Language within a Sign Bilingual Program: The Pilot Auslan Receptive Skills Test

    Science.gov (United States)

    Johnston, Trevor

    2004-01-01

    The assessment of sign language proficiency is essential for evaluating the outcomes of sign bilingual education. This paper reports an attempt to assess the sign language proficiency of children in a self-described sign bilingual program in Sydney by adapting a British Sign Language (BSL) test to Australian Sign Language (Auslan). The test…

  14. Rating the Vitality of Sign Languages

    Science.gov (United States)

    Bickford, J. Albert; Lewis, M. Paul; Simons, Gary F.

    2015-01-01

    The Expanded Graded Intergenerational Disruption Scale (EGIDS), developed by Lewis and Simons and based on work by Fishman, provides a means of rating "language vitality"--the level of development or endangerment--where "development" is understood as adding or preserving functions and "endangerment" as loss of…

  15. The history of sign language and deaf education in Turkey.

    Science.gov (United States)

    Kemaloğlu, Yusuf Kemal; Kemaloğlu, Pınar Yaprak

    2012-01-01

    Sign language is the natural language of prelingually deaf people, particularly those without hearing-speech rehabilitation. Otorhinolaryngologists, who regard health as complete physical, mental and psychosocial well-being, aim to restore hearing by diagnosing deafness as a deviance from normality. However, a perception that conflicts with behaviour, and that does not serve the mental and social well-being of the individual, clearly also contradicts the definition mentioned above. This article aims to investigate, through statistical data, scientific publications and historical documents, the effects of a hearing-speech target that ignores sign language in the Turkish population and its consistency with history, and to support a critical perspective on this issue. The results showed that, over the 60 years before hearing screening programs, at most 50% of deaf people benefited from hearing-speech programs, yet education systems that include sign language were not established. In the light of these data, it is clear that an approach that ignores sign language, particularly before the development of screening programs, is not reasonable. In addition, considering that sign language has been part of Anatolian history from the Hittites to the Ottomans, it remains to be answered why evaluation, habilitation and education systems that exclude sign language are still the only choice for deaf individuals in Turkey. Despite legislative amendments in the last 6-7 years, the primary reason they have not come into force is probably an inadequate grasp of the content and importance of the issue, together with limited effort by academicians and authorized politicians to offer solutions. Within this context, this paper offers a review for medical staff, particularly otorhinolaryngologists and audiologists, and aims to make a positive impact on this issue.

  16. On the temporal dynamics of sign production: An ERP study in Catalan Sign Language (LSC).

    Science.gov (United States)

    Baus, Cristina; Costa, Albert

    2015-06-03

    This study investigates the temporal dynamics of sign production and how particular aspects of the signed modality influence the early stages of lexical access. To that end, we explored the electrophysiological correlates associated with sign frequency and iconicity in a picture signing task in a group of bimodal bilinguals. Moreover, a subset of the same participants was tested in the same task but naming the pictures instead. Our results revealed that both frequency and iconicity influenced lexical access in sign production. At the ERP level, iconicity effects originated very early in the course of signing (while absent in the spoken modality), suggesting a stronger activation of the semantic properties for iconic signs. Moreover, frequency effects were modulated by iconicity, suggesting that lexical access in signed language is determined by the iconic properties of the signs. These results support the idea that lexical access is sensitive to the same phenomena in word and sign production, but its time-course is modulated by particular aspects of the modality in which a lexical item will be finally articulated.

  17. Sign-language interpretation in psychotherapy with deaf patients.

    Science.gov (United States)

    Porter, A

    1999-01-01

    Sporadic encounters with deaf patients seeking psychotherapy present a challenge to general clinicians outside of specialized services for the deaf. Skills for working with people who do not share one's own language mode and culture are not routinely taught in most training programs, so clinicians may be unprepared when they first encounter a deaf patient. While it would be ideal to be able to match deaf patients with therapists fluent in their preferred language mode, this is often not feasible in smaller centers. Working with a trained professional sign-language interpreter can be a productive alternative, as long as patient, therapist, and interpreter understand and are comfortable with the process. Peer-reviewed literature on sign language interpretation in psychotherapy is sparse, but some practical guidelines can be gleaned from it and supplemented by information provided by the deaf community through the Internet. This paper arose from one psychiatric resident's first experience of psychotherapy working with a sign-language interpreter, and summarizes the literature search that resulted from a quest for understanding of deaf culture and experience, of the unique characteristics of sign language, and of the effects on the therapeutic relationship of the presence of the interpreter.

  18. Training Literacy Skills through Sign Language

    Science.gov (United States)

    Rudner, Mary; Andin, Josefine; Rönnberg, Jerker; Heimann, Mikael; Hermansson, Anders; Nelson, Keith; Tjus, Tomas

    2015-01-01

    The literacy skills of deaf children generally lag behind those of their hearing peers. The mechanisms of reading in deaf individuals are only just beginning to be unraveled but it seems that native language skills play an important role. In this study 12 deaf pupils (six in grades 1-2 and six in grades 4-6) at a Swedish state primary school for…

  19. Toward a Motor Theory of Sign Language Perception

    CERN Document Server

    Gibet, Sylvie; Duarte, Kyle

    2012-01-01

    Research on signed languages still strongly dissociates linguistic issues related to phonological and phonetic aspects from gesture studies for recognition and synthesis purposes. This paper focuses on the imbrication of motion and meaning for the analysis, synthesis and evaluation of sign language gestures. We discuss the relevance and interest of a motor theory of perception in sign language communication. According to this theory, we consider that linguistic knowledge is mapped onto sensory-motor processes, and propose a methodology based on the principle of a synthesis-by-analysis approach, guided by an evaluation process that aims to validate some hypotheses and concepts of this theory. Examples from existing studies illustrate the different concepts and provide avenues for future work.

  20. Analysis on Public Signs from the Perspective of Language Economics

    Institute of Scientific and Technical Information of China (English)

    Ma Yue

    2016-01-01

    Public signs have long been a focus for researchers and scholars, who examine this meaningful and distinctive form of language from different perspectives, adopting theories from a wide range of areas. Most of this research, however, has been confined to a few popular fields such as linguistics, translation, culture and tourism. Jacob Marschak (1965), an economics professor at the University of California, Los Angeles, presented a theory that turned researchers' attention to a new field: language economics. In this paper, the author applies the theory of cost and benefit to public signs. Two types of public signs are analysed in particular, using the cost-and-benefit framework of language economics.

  1. Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying

    Science.gov (United States)

    Barberà, Gemma; Zwets, Martine

    2013-01-01

    In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…

  2. Facilitating Exposure to Sign Languages of the World: The Case for Mobile Assisted Language Learning

    Science.gov (United States)

    Parton, Becky Sue

    2014-01-01

    Foreign sign language instruction is an important, but overlooked area of study. Thus the purpose of this paper was two-fold. First, the researcher sought to determine the level of knowledge and interest in foreign sign language among Deaf teenagers along with their learning preferences. Results from a survey indicated that over a third of the…

  3. Deficits in Narrative Abilities in Child British Sign Language Users with Specific Language Impairment

    Science.gov (United States)

    Herman, Ros; Rowley, Katherine; Mason, Kathryn; Morgan, Gary

    2014-01-01

    This study details the first ever investigation of narrative skills in a group of 17 deaf signing children who have been diagnosed with disorders in their British Sign Language development compared with a control group of 17 deaf child signers matched for age, gender, education, quantity, and quality of language exposure and non-verbal…

  4. Imitation, sign language skill and the Developmental Ease of Language Understanding (D-ELU) model

    Directory of Open Access Journals (Sweden)

    Emil eHolmer

    2016-02-01

    Full Text Available Imitation and language processing are closely connected. According to the Ease of Language Understanding (ELU) model (Rönnberg et al., 2013), pre-existing mental representation of lexical items facilitates language understanding. Thus, imitation of manual gestures is likely to be enhanced by experience of sign language. We tested this by eliciting imitation of manual gestures from deaf and hard-of-hearing (DHH) signing and hearing non-signing children at a similar level of language and cognitive development. We predicted that the DHH signing children would be better at imitating gestures lexicalized in their own sign language (Swedish Sign Language, SSL) than unfamiliar British Sign Language (BSL) signs, and that both groups would be better at imitating lexical signs (SSL and BSL) than non-signs. We also predicted that the hearing non-signing children would perform worse than DHH signing children with all types of gestures the first time (T1) we elicited imitation, but that the performance gap between groups would be reduced when imitation was elicited a second time (T2). Finally, we predicted that imitation performance on both occasions would be associated with linguistic skills, especially in the manual modality. A split-plot repeated measures ANOVA demonstrated that DHH signers imitated manual gestures with greater precision than non-signing children when imitation was elicited the second but not the first time. Manual gestures were easier to imitate for both groups when they were lexicalized than when they were not; but there was no difference in performance between familiar and unfamiliar gestures. For both groups, language skills at T1 predicted imitation at T2. Specifically, for DHH children, word reading skills, comprehension and phonological awareness of sign language predicted imitation at T2. For the hearing participants, language comprehension predicted imitation at T2, even after the effects of working memory capacity and motor skills

  5. Imitation, Sign Language Skill and the Developmental Ease of Language Understanding (D-ELU) Model.

    Science.gov (United States)

    Holmer, Emil; Heimann, Mikael; Rudner, Mary

    2016-01-01

    Imitation and language processing are closely connected. According to the Ease of Language Understanding (ELU) model (Rönnberg et al., 2013) pre-existing mental representation of lexical items facilitates language understanding. Thus, imitation of manual gestures is likely to be enhanced by experience of sign language. We tested this by eliciting imitation of manual gestures from deaf and hard-of-hearing (DHH) signing and hearing non-signing children at a similar level of language and cognitive development. We predicted that the DHH signing children would be better at imitating gestures lexicalized in their own sign language (Swedish Sign Language, SSL) than unfamiliar British Sign Language (BSL) signs, and that both groups would be better at imitating lexical signs (SSL and BSL) than non-signs. We also predicted that the hearing non-signing children would perform worse than DHH signing children with all types of gestures the first time (T1) we elicited imitation, but that the performance gap between groups would be reduced when imitation was elicited a second time (T2). Finally, we predicted that imitation performance on both occasions would be associated with linguistic skills, especially in the manual modality. A split-plot repeated measures ANOVA demonstrated that DHH signers imitated manual gestures with greater precision than non-signing children when imitation was elicited the second but not the first time. Manual gestures were easier to imitate for both groups when they were lexicalized than when they were not; but there was no difference in performance between familiar and unfamiliar gestures. For both groups, language skills at T1 predicted imitation at T2. Specifically, for DHH children, word reading skills, comprehension and phonological awareness of sign language predicted imitation at T2. For the hearing participants, language comprehension predicted imitation at T2, even after the effects of working memory capacity and motor skills were taken into

  6. From gesture to sign language: conventionalization of classifier constructions by adult hearing learners of British Sign Language.

    Science.gov (United States)

    Marshall, Chloë R; Morgan, Gary

    2015-01-01

    There has long been interest in why languages are shaped the way they are, and in the relationship between sign language and gesture. In sign languages, entity classifiers are handshapes that encode how objects move, how they are located relative to one another, and how multiple objects of the same type are distributed in space. Previous studies have shown that hearing adults who are asked to use only manual gestures to describe how objects move in space will use gestures that bear some similarities to classifiers. We investigated how accurately hearing adults, who had been learning British Sign Language (BSL) for 1-3 years, produce and comprehend classifiers in (static) locative and distributive constructions. In a production task, learners of BSL knew that they could use their hands to represent objects, but they had difficulty choosing the same, conventionalized, handshapes as native signers. They were, however, highly accurate at encoding location and orientation information. Learners therefore show the same pattern found in sign-naïve gesturers. In contrast, handshape, orientation, and location were comprehended with equal (high) accuracy, and testing a group of sign-naïve adults showed that they too were able to understand classifiers with higher than chance accuracy. We conclude that adult learners of BSL bring their visuo-spatial knowledge and gestural abilities to the tasks of understanding and producing constructions that contain entity classifiers. We speculate that investigating the time course of adult sign language acquisition might shed light on how gesture became (and, indeed, becomes) conventionalized during the genesis of sign languages.

  7. Why Doesn't Everyone Here Speak Sign Language? Questions of Language Policy, Ideology and Economics

    Science.gov (United States)

    Rayman, Jennifer

    2009-01-01

    This paper is a thought experiment exploring the possibility of establishing universal bilingualism in Sign Languages. Focusing in the first part on historical examples of inclusive signing societies such as Martha's Vineyard, the author suggests that it is not possible to create such naturally occurring practices of Sign Bilingualism in societies…

  8. Issues in designing an assessment of British Sign Language development.

    Science.gov (United States)

    Herman, R

    1998-01-01

    This paper reports on a collaborative project in progress to develop a standardised clinical assessment of British Sign Language development for use with deaf children. The need for such an assessment is highlighted following a survey of professionals working in this area (Herman, in press). The development of the assessment battery will be described in the context of research into the assessment of sign language development. Issues in selection of the standardisation population will be presented. Finally the need for collaboration between different professionals working in this area, in particular the key role of the deaf BSL user will be emphasised.

  9. Access all areas - sign language interpreting, is it that special?

    OpenAIRE

    Stone, C.

    2010-01-01

    This article addresses some of the uniqueness and many of the similarities between working as a sign language interpreter and working as a public service interpreter in the UK. It gives a brief introduction to the history of the British Deaf community and the genesis of modern day British Sign Language (BSL). It then introduces the ever expanding areas where interpreters work and gives some examples of the care needed when working in the medical domain. It gives examples of the types of inter...

  10. Lexical and sentential processing in British Sign Language.

    Science.gov (United States)

    MacSweeney, Mairéad; Campbell, Ruth; Woll, Bencie; Brammer, Michael J; Giampietro, Vincent; David, Anthony S; Calvert, Gemma A; McGuire, Philip K

    2006-01-01

    Studies of spoken and written language suggest that the perception of sentences engages the left anterior and posterior temporal cortex and the left inferior frontal gyrus to a greater extent than non-sententially structured material, such as word lists. This study sought to determine whether the same is true when the language is gestural and perceived visually. Regional neural activity was measured using functional MRI while Deaf and hearing native signers of British Sign Language (BSL) detected semantic anomalies in well-formed BSL sentences and when they detected nonsense signs in lists of unconnected BSL signs. Processing BSL sentences, when contrasted with signed lists, was reliably associated with greater activation in the posterior portions of the left middle and superior temporal gyri and in the left inferior frontal cortex, but not in the anterior temporal cortex, which was activated to a similar extent whether lists or sentences were processed. Further support for the specificity of these areas for processing the linguistic-rather than visuospatial-features of signed sentences came from a contrast of hearing native signers and hearing sign-naïve participants. Hearing signers recruited the left posterior temporal and inferior frontal regions during BSL sentence processing to a greater extent than hearing non-signers. These data suggest that these left perisylvian regions are differentially associated with sentence processing, whatever the modality of the linguistic input.

  11. The First Signs of Language: Phonological Development in British Sign Language

    Science.gov (United States)

    Morgan, Gary; Barrett-Jones, Sarah; Stoneham, Helen

    2007-01-01

    A total of 1,018 signs in one deaf child's naturalistic interaction with her deaf mother, between the ages of 19 and 24 months, were analyzed. This study summarizes regular modification processes in the phonology of the child's signs: handshape, location, movement, and prosody. First, changes to signs were explained by the notion of phonological…

  12. Novel Approach to Use HU Moments with Image Processing Techniques for Real Time Sign Language Communication

    OpenAIRE

    2015-01-01

    Sign language is the fundamental communication method among people who suffer from speech and hearing defects. The rest of the world doesn’t have a clear idea of sign language. “Sign Language Communicator” (SLC) is designed to solve the language barrier between the sign language users and the rest of the world. The main objective of this research is to provide a low cost affordable method of sign language interpretation. This system will also be very useful to the sign language learners as th...

  13. Identifying Specific Language Impairment in Deaf Children Acquiring British Sign Language: Implications for Theory and Practice

    Science.gov (United States)

    Mason, Kathryn; Rowley, Katherine; Marshall, Chloe R.; Atkinson, Joanna R.; Herman, Rosalind; Woll, Bencie; Morgan, Gary

    2010-01-01

    This paper presents the first ever group study of specific language impairment (SLI) in users of sign language. A group of 50 children were referred to the study by teachers and speech and language therapists. Individuals who fitted pre-determined criteria for SLI were then systematically assessed. Here, we describe in detail the performance of 13…

  14. The Onset and Mastery of Spatial Language in Children Acquiring British Sign Language

    Science.gov (United States)

    Morgan, Gary; Herman, Rosalind; Barriere, Isabelle; Woll, Bencie

    2008-01-01

    In the course of language development children must solve arbitrary form-to-meaning mappings, in which semantic components are encoded onto linguistic labels. Because sign languages describe motion and location of entities through iconic movements and placement of the hands in space, child signers may find spatial semantics-to-language mapping…

  15. THE BENEFIT OF EARLY EXPOSURE TO SIGN LANGUAGE

    Directory of Open Access Journals (Sweden)

    Ljubica PRIBANIKJ

    2009-11-01

    Full Text Available Early diagnosis and intervention are now recognized as undeniable rights of deaf and hard-of-hearing children and their families. The deaf child’s family must have the opportunity to socialize with deaf children and deaf adults. The deaf child’s family must also have access to all the information on the general development of their child, and to special information on hearing impairment, communication options and linguistic development of the deaf child.The critical period hypothesis for language acquisition proposes that the outcome of language acquisition is not uniform over the lifespan but rather is best during early childhood. Individuals who learned sign language from birth performed better on linguistic and memory tasks than individuals who did not start learning sign language until after puberty. The old prejudice that the deaf child must learn the spoken language at a very young age, and that sign language can wait because it can be easily learned by any person at any age, cannot be maintained anymore.The cultural approach to deafness emphasizes three necessary components in the development of a deaf child: 1. stimulating early communication using natural sign language within the family and interacting with the Deaf community; 2. bilingual / bicultural education and 3. ensuring deaf persons’ rights to enjoy the services of high quality interpreters throughout their education from kindergarten to university. This new view of the phenomenology of deafness means that the environment needs to be changed in order to meet the deaf person’s needs, not the contrary.

  16. Word order in Russian Sign Language: an extended report

    NARCIS (Netherlands)

    Kimmelman, V.

    2012-01-01

    In this paper the results of an investigation of word order in Russian Sign Language (RSL) are presented. A small corpus (16 minutes) of narratives based on comic strips by 9 native signers was analyzed, and a picture-description experiment (based on Volterra et al. 1984) was conducted with 6 native signers.

  17. Deaf People as British Sign Language Teachers: Experiences and Aspirations

    Science.gov (United States)

    Atherton, Martin; Barnes, Lynne

    2012-01-01

    Little research has been undertaken into the profession of British Sign Language (BSL) teaching, despite a huge increase in the number of BSL classes offered over the past twenty years. Following the introduction of Qualified Teacher Learning and Skills standards in 2007, BSL teachers working in "further education" (FE) colleges were…

  18. The Development of Complex Verb Constructions in British Sign Language

    Science.gov (United States)

    Morgan, Gary; Herman, Rosalind; Woll, Bencie

    2002-01-01

    This study focuses on the mapping of events onto verb-argument structures in British Sign Language (BSL). The development of complex sentences in BSL is described in a group of 30 children, aged 3;2-12;0, using data from comprehension measures and elicited sentence production. The findings support two interpretations: firstly, in the mapping of…

  19. Sign Language Recognition by Combining Statistical DTW and Independent Classification

    NARCIS (Netherlands)

    Lichtenauer, J.F.; Hendriks,E.A; Reinders, M.J.T.

    2008-01-01

    To recognize speech, handwriting, or sign language, many hybrid approaches have been proposed that combine Dynamic Time Warping (DTW) or Hidden Markov Models (HMMs) with discriminative classifiers. However, all methods rely directly on the likelihood models of DTW/HMM. We hypothesize that time warping…
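
    As background to the abstract above, the sketch below shows plain DTW template matching in Python. It illustrates only the alignment-and-distance step that such hybrids build on, not the statistical DTW model or the independent classification proposed by the authors; the function names and the nearest-template rule are assumptions for illustration.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic dynamic-programming DTW between two feature sequences,
    each an array of shape (time, features)."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])  # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

def nearest_template(query, templates, labels):
    """Label a query sequence by its DTW-nearest reference template."""
    dists = [dtw_distance(query, t) for t in templates]
    return labels[int(np.argmin(dists))]
```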

  20. Indo-Pakistani Sign Language Grammar: A Typological Outline.

    Science.gov (United States)

    Zeshan, Ulrike

    2003-01-01

    Examines the variety of sign language used in Southern and central Pakistan and Northwestern India, including its grammatical profile, word classes, the relationship between word class and functional slot, the marking of basic syntactic relations, shifters, number systems, types of possession, negation, questions, subordinate clauses, and…

  1. Space and iconicity in German sign language (DGS)

    NARCIS (Netherlands)

    Perniss, Pamela M.

    2007-01-01

    This dissertation investigates the expression of spatial relationships in German Sign Language (Deutsche Gebärdensprache, DGS). The analysis focuses on linguistic expression in the spatial domain in two types of discourse: static scene description (location) and event narratives (location and motion).

  2. Using Signs to Facilitate Vocabulary in Children with Language Delays

    Science.gov (United States)

    Lederer, Susan Hendler; Battaglia, Dana

    2015-01-01

    The purpose of this article is to explore recommended practices in choosing and using key word signs (i.e., simple single-word gestures for communication) to facilitate first spoken words in hearing children with language delays. Developmental, theoretical, and empirical supports for this practice are discussed. Practical recommendations for…

  3. On Selected Phonological Patterns in Saudi Arabian Sign Language

    Science.gov (United States)

    Tomita, Nozomi; Kozak, Viola

    2012-01-01

    This paper focuses on two selected phonological patterns that appear unique to Saudi Arabian Sign Language (SASL). For both sections of this paper, the overall methodology is the same as that discussed in Stephen and Mathur (this volume), with some additional modifications tailored to the specific studies discussed here, which will be expanded…

  4. Sign Language Culture as Part of Multiculturalism in Hungary

    Science.gov (United States)

    Sarolta, Simigne Fenyo

    2011-01-01

    The objective of the present study is to investigate sign language culture as part of multiculturalism in Hungary. The study consists of two parts. Referring to the 13 national and linguistic minorities living in the territory of Hungary, the first part gives a short account of the narrower interpretation of multiculturalism according to which it…

  5. A human mirror neuron system for language: Perspectives from signed languages of the deaf.

    Science.gov (United States)

    Knapp, Heather Patterson; Corina, David P

    2010-01-01

    Language is proposed to have developed atop the human analog of the macaque mirror neuron system for action perception and production [Arbib M.A. 2005. From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics (with commentaries and author's response). Behavioral and Brain Sciences, 28, 105-167; Arbib M.A. (2008). From grasp to language: Embodied concepts and the challenge of abstraction. Journal de Physiologie Paris 102, 4-20]. Signed languages of the deaf are fully-expressive, natural human languages that are perceived visually and produced manually. We suggest that if a unitary mirror neuron system mediates the observation and production of both language and non-linguistic action, three predictions can be made: (1) damage to the human mirror neuron system should non-selectively disrupt both sign language and non-linguistic action processing; (2) within the domain of sign language, a given mirror neuron locus should mediate both perception and production; and (3) the action-based tuning curves of individual mirror neurons should support the highly circumscribed set of motions that form the "vocabulary of action" for signed languages. In this review we evaluate data from the sign language and mirror neuron literatures and find that these predictions are only partially upheld.

  6. The benefits of sign language for deaf learners with language challenges

    Directory of Open Access Journals (Sweden)

    Gerhard Badenhorst

    2011-08-01

    Full Text Available This article argues the importance of allowing deaf children to acquire sign language from an early age. It demonstrates firstly that the critical/sensitive period hypothesis for language acquisition can be applied to specific aspects of spoken language as well as sign languages (i.e. phonology, grammatical processing and syntax). This makes early diagnosis and early intervention of crucial importance. Moreover, research findings presented in this article demonstrate the advantage that sign language offers in the early years of a deaf child's life by comparing the language development milestones of deaf learners exposed to sign language from birth to those of late-signers, orally trained deaf learners and hearing learners exposed to spoken language. The controversy over the best medium of instruction for deaf learners is briefly discussed, with emphasis placed on the possible value of bilingual-bicultural programmes to facilitate the development of deaf learners' literacy skills. Finally, this paper concludes with a discussion of the implications/recommendations of sign language teaching and Deaf education in South Africa.

  7. Phonological Development in Hearing Learners of a Sign Language: The Influence of Phonological Parameters, Sign Complexity, and Iconicity

    Science.gov (United States)

    Ortega, Gerardo; Morgan, Gary

    2015-01-01

    The present study implemented a sign-repetition task at two points in time to hearing adult learners of British Sign Language and explored how each phonological parameter, sign complexity, and iconicity affected sign production over an 11-week (22-hour) instructional period. The results show that training improves articulation accuracy and that…

  8. Sign Language to Speech Translation System Using PIC Microcontroller

    Directory of Open Access Journals (Sweden)

    Gunasekaran. K

    2013-04-01

    Full Text Available Advances in embedded systems provide scope for designing and developing a sign language translator to assist people who are unable to speak. This paper mainly addresses ways of facilitating their daily communication. People without speech throughout the world use sign language to communicate with others, but this works only with those who have undergone special training, and most people find it difficult to understand the gesture language. The proposed system was developed to overcome these real-time issues: whenever it senses a sign, it plays the corresponding recorded voice, reducing the communication gap between signers and non-signers. The model consists of four modules: a sensing unit, a processing unit, a voice storage unit and a wireless communication unit, realized by integrating flex sensors and an APR9600 voice chip with a PIC16F877A microcontroller. The flex sensors are placed in gloves and respond to hand gestures; through a suitable circuit, the sensor response is fed to the microcontroller, which plays the corresponding recorded voice through the APR9600. A snapshot of the entire system, its advantages over existing methods and the simulation output of the process are discussed in this work. The system offers high reliability and fast response and is precise with respect to hand movement; different languages can be installed without altering the code in the PIC microcontroller.
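
    The firmware described above runs on a PIC16F877A (typically written in C or assembly). Purely to illustrate the control logic in the abstract (read the flex-sensor values, match the combined reading against stored gesture patterns, and trigger the corresponding recorded message), here is a host-side Python sketch; the sensor count, thresholds, gesture patterns and playback hook are all hypothetical.

```python
# Illustrative sketch only: not PIC firmware and not the authors' code.

GESTURES = {
    "HELLO":     (1, 1, 0, 0, 0),   # hypothetical bent (1) / straight (0) finger pattern
    "THANK_YOU": (0, 1, 1, 1, 1),
}

BEND_THRESHOLD = 512  # assumed mid-scale of a 10-bit ADC reading

def quantise(adc_readings):
    """Convert raw ADC values from five flex sensors into a bent/straight pattern."""
    return tuple(1 if value > BEND_THRESHOLD else 0 for value in adc_readings)

def recognise(adc_readings):
    """Return the gesture label whose stored pattern matches the current reading."""
    pattern = quantise(adc_readings)
    for label, stored in GESTURES.items():
        if stored == pattern:
            return label
    return None

def play_message(label):
    # Placeholder for triggering the voice record/playback chip (e.g. an APR9600).
    print(f"Playing recorded message for: {label}")

# Example: a reading with the first two fingers bent maps to "HELLO".
reading = (800, 750, 100, 90, 120)
label = recognise(reading)
if label is not None:
    play_message(label)
```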

  9. Mapping language to the world: the role of iconicity in the sign language input.

    Science.gov (United States)

    Perniss, Pamela; Lu, Jenny C; Morgan, Gary; Vigliocco, Gabriella

    2017-03-12

    Most research on the mechanisms underlying referential mapping has assumed that learning occurs in ostensive contexts, where label and referent co-occur, and that form and meaning are linked by arbitrary convention alone. In the present study, we focus on iconicity in language, that is, resemblance relationships between form and meaning, and on non-ostensive contexts, where label and referent do not co-occur. We approach the question of language learning from the perspective of the language input. Specifically, we look at child-directed language (CDL) in British Sign Language (BSL), a language rich in iconicity due to the affordances of the visual modality. We ask whether child-directed signing exploits iconicity in the language by highlighting the similarity mapping between form and referent. We find that CDL modifications occur more often with iconic signs than with non-iconic signs. Crucially, for iconic signs, modifications are more frequent in non-ostensive contexts than in ostensive contexts. Furthermore, we find that pointing dominates in ostensive contexts, and suggest that caregivers adjust the semiotic resources recruited in CDL to context. These findings offer first evidence for a role of iconicity in the language input and suggest that iconicity may be involved in referential mapping and language learning, particularly in non-ostensive contexts.

  10. Development of Geography and Geology Terminology in British Sign Language

    Science.gov (United States)

    Meara, Rhian; Cameron, Audrey; Quinn, Gary; O'Neill, Rachel

    2016-04-01

    The BSL Glossary Project, run by the Scottish Sensory Centre at the University of Edinburgh, focuses on developing scientific terminology in British Sign Language for use in the primary, secondary and tertiary education of deaf and hard of hearing students within the UK. Thus far, the project has developed 850 new signs and definitions covering Chemistry, Physics, Biology, Astronomy and Mathematics. The project has also translated examinations into BSL for students across Scotland. The current phase of the project has focused on developing terminology for Geography and Geology subjects. More than 189 new signs have been developed in these subjects, including weather, rivers, maps, natural hazards and Geographical Information Systems. The signs were developed by a focus group with expertise in Geography and Geology, Chemistry, Ecology, BSL Linguistics and Deaf Education, all of whom are deaf, fluent BSL users.

  11. Deficits in narrative abilities in child British Sign Language users with specific language impairment.

    Science.gov (United States)

    Herman, Ros; Rowley, Katherine; Mason, Kathryn; Morgan, Gary

    2014-01-01

    This study details the first ever investigation of narrative skills in a group of 17 deaf signing children who have been diagnosed with disorders in their British Sign Language development compared with a control group of 17 deaf child signers matched for age, gender, education, quantity, and quality of language exposure and non-verbal intelligence. Children were asked to generate a narrative based on events in a language free video. Narratives were analysed for global structure, information content and local level grammatical devices, especially verb morphology. The language-impaired group produced shorter, less structured and grammatically simpler narratives than controls, with verb morphology particularly impaired. Despite major differences in how sign and spoken languages are articulated, narrative is shown to be a reliable marker of language impairment across the modality boundaries.

  12. Deaf Children's Developing Sign Bilingualism: Dimensions Of Language Ability, Use And Awareness

    OpenAIRE

    Swanwick, Ruth Anne

    2000-01-01

    The focus of this study is deaf children's developing bilingualism in British Sign Language and English (sign bilingualism). Sign bilingualism differs from bilingualism in two spoken languages in that the two languages are differently perceived and produced. This thesis explores individual sign bilingualism, focusing on the ways in which deaf children use their two languages, their perception of the differences between them, and the influences that the two languages have on each other. It is argued...

  13. Children creating language: how Nicaraguan sign language acquired a spatial grammar.

    Science.gov (United States)

    Senghas, A; Coppola, M

    2001-07-01

    It has long been postulated that language is not purely learned, but arises from an interaction between environmental exposure and innate abilities. The innate component becomes more evident in rare situations in which the environment is markedly impoverished. The present study investigated the language production of a generation of deaf Nicaraguans who had not been exposed to a developed language. We examined the changing use of early linguistic structures (specifically, spatial modulations) in a sign language that has emerged since the Nicaraguan group first came together: in under two decades, sequential cohorts of learners systematized the grammar of this new sign language. We examined whether the systematicity being added to the language stems from children or adults; our results indicate that such changes originate in children aged 10 and younger. Thus, sequential cohorts of interacting young children collectively possess the capacity not only to learn, but also to create, language.

  14. "Ungraceful, Repulsive, Difficult To Comprehend": Sociolinguistic Consideration of Shifts in Signed Languages.

    Science.gov (United States)

    Turner, Graham H.

    1999-01-01

    Focuses on language shift in a signed language in contact with the spoken language. Suggests that British Sign Language, under the influence of spoken English, has witnessed effects such as increased use of finger spelling as well as changes in lexical and function words that reflect spoken/written language structures. (Author/VWL)

  15. Symbiotic symbolization by hand and mouth in sign language.

    Science.gov (United States)

    Sandler, Wendy

    2009-04-01

    Current conceptions of human language include a gestural component in the communicative event. However, determining how the linguistic and gestural signals are distinguished, how each is structured, and how they interact still poses a challenge for the construction of a comprehensive model of language. This study attempts to advance our understanding of these issues with evidence from sign language. The study adopts McNeill's criteria for distinguishing gestures from the linguistically organized signal, and provides a brief description of the linguistic organization of sign languages. Focusing on the subcategory of iconic gestures, the paper shows that signers create iconic gestures with the mouth, an articulator that acts symbiotically with the hands to complement the linguistic description of objects and events. A new distinction between the mimetic replica and the iconic symbol accounts for the nature and distribution of iconic mouth gestures and distinguishes them from mimetic uses of the mouth. Symbiotic symbolization by hand and mouth is a salient feature of human language, regardless of whether the primary linguistic modality is oral or manual. Speakers gesture with their hands, and signers gesture with their mouths.

  16. Symbiotic symbolization by hand and mouth in sign language*

    Science.gov (United States)

    Sandler, Wendy

    2010-01-01

    Current conceptions of human language include a gestural component in the communicative event. However, determining how the linguistic and gestural signals are distinguished, how each is structured, and how they interact still poses a challenge for the construction of a comprehensive model of language. This study attempts to advance our understanding of these issues with evidence from sign language. The study adopts McNeill’s criteria for distinguishing gestures from the linguistically organized signal, and provides a brief description of the linguistic organization of sign languages. Focusing on the subcategory of iconic gestures, the paper shows that signers create iconic gestures with the mouth, an articulator that acts symbiotically with the hands to complement the linguistic description of objects and events. A new distinction between the mimetic replica and the iconic symbol accounts for the nature and distribution of iconic mouth gestures and distinguishes them from mimetic uses of the mouth. Symbiotic symbolization by hand and mouth is a salient feature of human language, regardless of whether the primary linguistic modality is oral or manual. Speakers gesture with their hands, and signers gesture with their mouths. PMID:20445832

  17. Quantum Semiotics: A Sign Language for Quantum Mechanics

    CERN Document Server

    Prashant

    2006-01-01

    Semiotics is the language of signs which has been used effectively in various disciplines of human scientific endeavor. It gives a beautiful and rich structure of language to express the basic tenets of any scientific discipline. In this article we attempt to develop from first principles such an axiomatic structure of semiotics for Quantum Mechanics. This would be a further enrichment to the already existing well understood mathematical structure of Quantum Mechanics but may give new insights and understanding to the theory and may help understand more lucidly the fundamentality of Nature which Quantum Theory attempts to explain.

  18. Neural correlates of British sign language comprehension: spatial processing demands of topographic language.

    Science.gov (United States)

    MacSweeney, Mairéad; Woll, Bencie; Campbell, Ruth; Calvert, Gemma A; McGuire, Philip K; David, Anthony S; Simmons, Andrew; Brammer, Michael J

    2002-10-01

    In all signed languages used by deaf people, signs are executed in "sign space" in front of the body. Some signed sentences use this space to map detailed "real-world" spatial relationships directly. Such sentences can be considered to exploit sign space "topographically." Using functional magnetic resonance imaging, we explored the extent to which increasing the topographic processing demands of signed sentences was reflected in the differential recruitment of brain regions in deaf and hearing native signers of the British Sign Language. When BSL signers performed a sentence anomaly judgement task, the occipito-temporal junction was activated bilaterally to a greater extent for topographic than nontopographic processing. The differential role of movement in the processing of the two sentence types may account for this finding. In addition, enhanced activation was observed in the left inferior and superior parietal lobules during processing of topographic BSL sentences. We argue that the left parietal lobe is specifically involved in processing the precise configuration and location of hands in space to represent objects, agents, and actions. Importantly, no differences in these regions were observed when hearing people heard and saw English translations of these sentences. Despite the high degree of similarity in the neural systems underlying signed and spoken languages, exploring the linguistic features which are unique to each of these broadens our understanding of the systems involved in language comprehension.

  19. Software Junctus: Joining Sign Language and Alphabetical Writing

    Science.gov (United States)

    Valentini, Carla Beatris; Bisol, Cláudia A.; Dalla Santa, Cristiane

    The authors’ aim is to describe the workshops developed to test the use of an authorship program that allows the simultaneous use of sign language and alphabetical writing. The workshops were prepared and conducted by a Computer Science undergraduate, with the support of the Program of Students’ Integration and Mediation (Programa de Integração e Mediação do Acadêmico - PIMA) at the University of Caxias do Sul. Two sign language interpreters, two deaf students and one hearing student, who also teach at a special school for the deaf, participated in the workshops. The main characteristics of the software and the development of the workshops are presented with examples of educational projects created during their development. Possible improvements are also outlined.

  20. Sentential negation in South African Sign Language: A case study

    Directory of Open Access Journals (Sweden)

    Courtney de Barros

    2016-09-01

    Full Text Available As with other sign languages, South African Sign Language (SASL expresses negation using both manual and non-manual features. In this case study, naturalistic data provided by two native signers of SASL are analysed to show the syntactic relationship between these two sets of features. Using a Principles and Parameters approach and Government and Binding Theory, we investigate the syntactic scope of negation in our SASL data. We observe that side-to-side headshake, as a non-manual feature, appears to be the chief clausal negator in SASL, with a clause-final manual negative particle, NOT, playing a secondary role. We describe the negative headshake as a featural affix which is base-generated in the head of NegP and triggers V-to-Neg raising. The negative particle NOT appears to be base-generated in the Specifier of NegP. Suggestions for further syntactic research on SASL are provided.

  1. Segment, Track, Extract, Recognize and Convert Sign Language Videos to Voice/Text

    Directory of Open Access Journals (Sweden)

    P.V.V.Kishore

    2012-06-01

    Full Text Available This paper summarizes various algorithms used to design a sign language recognition system. Sign language is the language used by deaf people to communicate among themselves and with hearing people. We designed a real-time sign language recognition system that can recognize gestures of sign language from videos under complex backgrounds. Segmenting and tracking of the non-rigid hands and head of the signer in sign language videos is achieved by using active contour models. Active contour energy minimization is done using the signer's hand and head skin colour, texture, boundary and shape information. Classification of signs is done by an artificial neural network using the error back-propagation algorithm. Each sign in the video is converted into a voice and text command. The system has been implemented successfully for 351 signs of Indian Sign Language under different possible video environments. The recognition rates are calculated for different video environments.
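
    The final stage above is classification by an artificial neural network trained with error back-propagation. The NumPy fragment below is a minimal, generic sketch of that training rule for a one-hidden-layer network, not the authors' implementation; the input features are assumed to be shape/boundary descriptors derived from the tracked contours, and all sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BackpropClassifier:
    """One-hidden-layer feed-forward network trained with error back-propagation."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.1):
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)       # hidden activations
        self.y = sigmoid(self.h @ self.W2)  # output activations
        return self.y

    def backward(self, x, target):
        # Error terms; the sigmoid derivative is y * (1 - y).
        delta_out = (self.y - target) * self.y * (1.0 - self.y)
        delta_hid = (delta_out @ self.W2.T) * self.h * (1.0 - self.h)
        # Gradient-descent weight updates.
        self.W2 -= self.lr * np.outer(self.h, delta_out)
        self.W1 -= self.lr * np.outer(x, delta_hid)

    def train(self, features, one_hot_labels, epochs=200):
        for _ in range(epochs):
            for x, t in zip(features, one_hot_labels):
                self.forward(x)
                self.backward(x, t)

# Hypothetical usage: `features` has one descriptor vector per sign sample,
# `one_hot_labels` encodes the sign classes.
# net = BackpropClassifier(n_in=features.shape[1], n_hidden=32, n_out=num_classes)
# net.train(features, one_hot_labels)
# predicted_class = int(np.argmax(net.forward(test_feature)))
```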

  2. Language Justice for Sign Language Peoples: The UN Convention on the Rights of Persons with Disabilities

    Science.gov (United States)

    Batterbury, Sarah C. E.

    2012-01-01

    Sign Language Peoples (SLPs) across the world have developed their own languages and visuo-gestural-tactile cultures embodying their collective sense of Deafhood (Ladd 2003). Despite this, most nation-states treat their respective SLPs as disabled individuals, favoring disability benefits, cochlear implants, and mainstream education over language…

  3. The Power of Deaf Poetry: The Exhibition of Literacy and the Nineteenth-Century Sign Language Debates

    Science.gov (United States)

    Esmail, Jennifer

    2008-01-01

    This article argues that poetry written by nineteenth-century British and American deaf poets played an important role in the period's sign language debates. By placing the publication of this poetry in the context of public exhibitions of deaf students, I suggest that the poetry was mobilized to publicly defend the linguistic and intellectual…

  4. Order of the major constituents in sign languages: implications for all language.

    Science.gov (United States)

    Napoli, Donna Jo; Sutton-Spence, Rachel

    2014-01-01

    A survey of reports of sign order from 42 sign languages leads to a handful of generalizations. Two accounts emerge, one amodal and the other modal. We argue that universal pressures are at work with respect to some generalizations, but that pressure from the visual modality is at work with respect to others. Together, these pressures conspire to make all sign languages order their major constituents SOV or SVO. This study leads us to the conclusion that the order of S with regard to verb phrase (VP) may be driven by sensorimotor system concerns that feed universal grammar.

  5. Making sense of nonsense in British Sign Language (BSL): The contribution of different phonological parameters to sign recognition

    NARCIS (Netherlands)

    Orfanidou, E.; Adam, R.; McQueen, J.M.; Morgan, G.

    2009-01-01

    Do all components of a sign contribute equally to its recognition? In the present study, misperceptions in the sign-spotting task (based on the word-spotting task; Cutler & Norris, 1988) were analyzed to address this question. Three groups of deaf signers of British Sign Language (BSL) with different…

  6. A REVIEW ON THE DEVELOPMENT OF INDONESIAN SIGN LANGUAGE RECOGNITION SYSTEM

    OpenAIRE

    Sutarman; Mazlina Abdul Majid; Jasni Mohamad Zain

    2013-01-01

    Sign language is mainly employed by hearing-impaired people to communicate with each other. However, communication with normal people is a major handicap for them since normal people do not understand their sign language. Sign language recognition is needed for realizing a human oriented interactive system that can perform an interaction like normal communication. Sign language recognition basically uses two approaches: (1) computer vision-based gesture recognition, in which a camera is used ...

  7. Real-time processing of ASL signs: Delayed first language acquisition affects organization of the mental lexicon.

    Science.gov (United States)

    Lieberman, Amy M; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I

    2015-07-01

    Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age-onset of first language acquisition and the quality and quantity of linguistic input for deaf individuals is highly heterogeneous, which is rarely the case for hearing learners of spoken languages. Little is known about how these modality and developmental factors affect real-time lexical processing. In this study, we ask how these factors impact real-time recognition of American Sign Language (ASL) signs using a novel adaptation of the visual world paradigm in deaf adults who learned sign from birth (Experiment 1), and in deaf adults who were late-learners of ASL (Experiment 2). Results revealed that although both groups of signers demonstrated rapid, incremental processing of ASL signs, only native signers demonstrated early and robust activation of sublexical features of signs during real-time recognition. Our findings suggest that the organization of the mental lexicon into units of both form and meaning is a product of infant language learning and not the sensory and motor modality through which the linguistic signal is sent and received.

  8. Monitoring Different Phonological Parameters of Sign Language Engages the Same Cortical Language Network but Distinctive Perceptual Ones.

    Science.gov (United States)

    Cardin, Velia; Orfanidou, Eleni; Kästner, Lena; Rönnberg, Jerker; Woll, Bencie; Capek, Cheryl M; Rudner, Mary

    2016-01-01

    The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine if brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer RTs and stronger activations in an action observation network in all participants and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.

  9. Use of Information and Communication Technologies in Sign Language Test Development: Results of an International Survey

    Science.gov (United States)

    Haug, Tobias

    2015-01-01

    Sign language test development is a relatively new field within sign linguistics, motivated by the practical need for assessment instruments to evaluate language development in different groups of learners (L1, L2). Due to the lack of research on the structure and acquisition of many sign languages, developing an assessment instrument poses…

  10. Medical Signbank as a Model for Sign Language Planning? A Review of Community Engagement

    Science.gov (United States)

    Napier, Jemina; Major, George; Ferrara, Lindsay; Johnston, Trevor

    2015-01-01

    This paper reviews a sign language planning project conducted in Australia with deaf Auslan users. The Medical Signbank project utilised a cooperative language planning process to engage with the Deaf community and sign language interpreters to develop an online interactive resource of health-related signs, in order to address a gap in the health…

  11. How to describe mouth patterns in the Danish Sign Language Dictionary

    DEFF Research Database (Denmark)

    Kristoffersen, Jette Hedegaard; Boye Niemela, Janne

    2008-01-01

    The Danish Sign Language dictionary project aims at creating an electronic dictionary of the basic vocabulary of Danish Sign Language. One of many issues in compiling the dictionary has been to analyse the status of mouth patterns in Danish Sign Language and, consequently, to decide at which level...

  12. Is Teaching Sign Language in Early Childhood Classrooms Feasible for Busy Teachers and Beneficial for Children?

    Science.gov (United States)

    Brereton, Amy Elizabeth

    2010-01-01

    Infants' hands are ready to construct words using sign language before their mouths are ready to speak. These research findings may explain the popularity of parents and caregivers teaching and using sign language with infants and toddlers, along with speech. The advantages of using sign language with young children go beyond the infant and…

  13. Monitoring Different Phonological Parameters of Sign Language Engages the Same Cortical Language Network but Distinctive Perceptual Ones

    OpenAIRE

    Cardin, Velia; Orfanidou, Eleni; Kästner, Lena; Rönnberg, Jerker; Woll, Bencie; Capek, Cheryl M; Rudner, Mary

    2016-01-01

    The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine if brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated ...

  14. Some Handy New Ideas on Pidgins and Creoles: Pidgin Sign Languages.

    Science.gov (United States)

    Woodward, James; Markowicz, Harry

    The study of pidgin and creole languages, usually emphasizing oral language codes, offers insights into language, especially as an observably dynamic phenomenon. However, channel is highly influential on the surface form of the language code. Pidgin sign language codes, not dependent on oral language codes, can serve as an ideal forum for the…

  15. Lexical variation and change in british sign language.

    Directory of Open Access Journals (Sweden)

    Rose Stamp

    Full Text Available This paper presents results from a corpus-based study investigating lexical variation in BSL. An earlier study investigating variation in BSL numeral signs found that younger signers were using a decreasing variety of regionally distinct variants, suggesting that levelling may be taking place. Here, we report findings from a larger investigation looking at regional lexical variants for colours, countries, numbers and UK placenames elicited as part of the BSL Corpus Project. Age, school location and language background were significant predictors of lexical variation, with younger signers using a more levelled variety. This change appears to be happening faster in particular sub-groups of the deaf community (e.g., signers from hearing families). Also, we find that for the names of some UK cities, signers from outside the region use a different sign than those who live in the region.

  16. Lexical variation and change in british sign language.

    Science.gov (United States)

    Stamp, Rose; Schembri, Adam; Fenlon, Jordan; Rentelis, Ramas; Woll, Bencie; Cormier, Kearsy

    2014-01-01

    This paper presents results from a corpus-based study investigating lexical variation in BSL. An earlier study investigating variation in BSL numeral signs found that younger signers were using a decreasing variety of regionally distinct variants, suggesting that levelling may be taking place. Here, we report findings from a larger investigation looking at regional lexical variants for colours, countries, numbers and UK placenames elicited as part of the BSL Corpus Project. Age, school location and language background were significant predictors of lexical variation, with younger signers using a more levelled variety. This change appears to be happening faster in particular sub-groups of the deaf community (e.g., signers from hearing families). Also, we find that for the names of some UK cities, signers from outside the region use a different sign than those who live in the region.

  17. A Proposed Pedagogical Mobile Application for Learning Sign Language

    Directory of Open Access Journals (Sweden)

    Samir Abou El-Seoud

    2013-01-01

    Full Text Available A handheld device, such as a cellular phone or a PDA, can be used in acquiring Sign Language (SL). The developed system uses graphic applications: the user views and acquires knowledge about sign grammar and syntax based on the local vernacular particular to the country. This paper explores and exploits the possibility of developing a mobile system to help deaf and other people communicate and learn using handheld devices. The pedagogical assessment of the prototype application, which uses a recognition-based interface (e.g., images and videos), gave evidence that the mobile application is memorable and learnable. Additionally, considering primacy and recency effects in the interface design will improve memorability and learnability.

  18. Multimodal semantic quantity representations: further evidence from Korean Sign Language

    Directory of Open Access Journals (Sweden)

    Frank Domahs

    2012-01-01

    Full Text Available Korean deaf signers performed a number comparison task on pairs of Arabic digits. In their RT profiles, the expected magnitude effect was systematically modified by properties of number signs in Korean Sign Language in a culture-specific way (not observed in hearing and deaf Germans or hearing Chinese). We conclude that finger-based quantity representations are automatically activated even in simple tasks with symbolic input although this may be irrelevant and even detrimental for task performance. These finger-based numerical representations are accessed in addition to another, more basic quantity system which is evidenced by the magnitude effect. In sum, these results are inconsistent with models assuming only one single amodal representation of numerical quantity.

  19. The British Sign Language (BSL) norms for age of acquisition, familiarity, and iconicity.

    Science.gov (United States)

    Vinson, David P; Cormier, Kearsy; Denmark, Tanya; Schembri, Adam; Vigliocco, Gabriella

    2008-11-01

    Research on signed languages offers the opportunity to address many important questions about language that it may not be possible to address via studies of spoken languages alone. Many such studies, however, are inherently limited, because there exist hardly any norms for lexical variables that have appeared to play important roles in spoken language processing. Here, we present a set of norms for age of acquisition, familiarity, and iconicity for 300 British Sign Language (BSL) signs, as rated by deaf signers, in the hope that they may prove useful to other researchers studying BSL and other signed languages. These norms may be downloaded from www.psychonomic.org/archive.

  20. The "SignOn"-Model for Teaching Written Language to Deaf People

    Directory of Open Access Journals (Sweden)

    Marlene Hilzensauer

    2012-08-01

    Full Text Available This paper shows a method of teaching written language to deaf people using sign language as the language of instruction. Written texts in the target language are combined with sign language videos which provide the users with various modes of translation (words/phrases/sentences). As examples, two EU projects for English for the Deaf are presented which feature English texts and translations into the national sign languages of all the partner countries plus signed grammar explanations and interactive exercises. Both courses are web-based; the programs may be accessed free of charge via the respective homepages (without any download or log-in).

  1. An Avatar-Based Italian Sign Language Visualization System

    Science.gov (United States)

    Falletto, Andrea; Prinetto, Paolo; Tiotto, Gabriele

    In this paper, we present an experimental system that supports the translation from Italian into Italian Sign Language (ISL), the language of the Italian Deaf community, and its visualization through a virtual character. Our objective is to develop a complete platform useful for any application and reusable on several platforms, including the Web, digital television and offline text translation. The system relies on a database that stores both a corpus of Italian words and words coded in the ISL notation system. An interface for data insertion is implemented, which allows future extensions and integration.

  2. Methodological and Theoretical Issues in the Adaptation of Sign Language Tests: An Example from the Adaptation of a Test to German Sign Language

    Science.gov (United States)

    Haug, Tobias

    2012-01-01

    Despite the current need for reliable and valid test instruments in different countries in order to monitor the sign language acquisition of deaf children, very few tests are commercially available that offer strong evidence for their psychometric properties. This mirrors the current state of affairs for many sign languages, where very little…

  3. Making sense of nonsense in British Sign Language (BSL): The contribution of different phonological parameters to sign recognition.

    Science.gov (United States)

    Orfanidou, Eleni; Adam, Robert; McQueen, James M; Morgan, Gary

    2009-04-01

    Do all components of a sign contribute equally to its recognition? In the present study, misperceptions in the sign-spotting task (based on the word-spotting task; Cutler & Norris, 1988) were analyzed to address this question. Three groups of deaf signers of British Sign Language (BSL) with different ages of acquisition (AoA) saw BSL signs combined with nonsense signs, along with combinations of two nonsense signs. They were asked to spot real signs and report what they had spotted. We present an analysis of false alarms to the nonsense-sign combinations, that is, misperceptions of nonsense signs as real signs (cf. van Ooijen, 1996). Participants modified the movement and handshape parameters more than the location parameter. Within this pattern, however, there were differences as a function of AoA. These results show that the theoretical distinctions between form-based parameters in sign-language models have consequences for online processing. Vowels and consonants have different roles in speech recognition; similarly, it appears that movement, handshape, and location parameters contribute differentially to sign recognition.

  4. Language Shift, Death, and Maintenance of Native American Languages

    Institute of Scientific and Technical Information of China (English)

    Janie Rees-Miller

    2002-01-01

    When the first English settlers landed in Virginia and New England, they had come to a land that was certainly new for them but had been home to a multitude of Native American groups for thousands of years. It is estimated that at the time of first contact, there were some 300 Native languages spoken in North America and that perhaps 200 are still living languages today. Of these indigenous languages, it is estimated that 175 are still spoken in the United States, although only 20 of these languages are being transmitted as a mother tongue to a new generation. In Alaska, for example, of 20 Native languages, only two are being transmitted to children in the home [20]. Similarly, in Oklahoma, which is home to 40 distinct indigenous communities, only one has children who speak their ancestral language on a daily basis [23: 112]. The 66 languages of California and Washington State are virtually all moribund, being kept alive by only a few elders; when these elders die, proficient use of the languages will die too. Thus, the gloomy prediction is that within a generation there may be as few as 20 Native languages still spoken as living languages in the US, and even they may be threatened if the present trends of language shift continue [6].

  5. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    Directory of Open Access Journals (Sweden)

    M. Al-Rousan

    2005-08-01

    Full Text Available Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of the Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and that they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
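
    A minimal sketch of a polynomial classifier of the kind described above, not the authors' implementation: feature vectors are expanded polynomially and a linear map to one-hot class targets is fit in closed form, so no iterative training is required. The feature dimensions, number of classes and data below are hypothetical stand-ins for ArSL gesture features.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

def train_polynomial_classifier(X, y, degree=2):
    """Fit one linear model per class on polynomially expanded features."""
    n_classes = int(y.max()) + 1
    expander = PolynomialFeatures(degree=degree, include_bias=True)
    Z = expander.fit_transform(X)              # polynomial expansion
    T = np.eye(n_classes)[y]                   # one-hot class targets
    W, *_ = np.linalg.lstsq(Z, T, rcond=None)  # closed-form least squares
    return expander, W

def predict(expander, W, X):
    return (expander.transform(X) @ W).argmax(axis=1)

# Hypothetical stand-in data: 600 gesture feature vectors, 30 letter classes.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(600, 20))
y_train = rng.integers(0, 30, size=600)
expander, W = train_polynomial_classifier(X_train, y_train, degree=2)
print(predict(expander, W, X_train[:5]))
```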

  6. Atypical Speech and Language Development: A Consensus Study on Clinical Signs in the Netherlands

    Science.gov (United States)

    Visser-Bochane, Margot I.; Gerrits, Ellen; van der Schans, Cees P.; Reijneveld, Sijmen A.; Luinge, Margreet R.

    2017-01-01

    Background: Atypical speech and language development is one of the most common developmental difficulties in young children. However, which clinical signs characterize atypical speech-language development at what age is not clear. Aim: To achieve a national and valid consensus on clinical signs and red flags (i.e. most urgent clinical signs) for…

  7. Bilingualism (Ancestral Language Maintenance) among Native American, Vietnamese American, and Hispanic American College Students.

    Science.gov (United States)

    Wharry, Cheryl

    1993-01-01

    A survey of 21 Hispanic, 22 Native American, and 10 Vietnamese American college students found that adoption or maintenance of ancestral language was related to attitudes toward ancestral language, beliefs about parental attitudes, and integrative motivation (toward family and ancestral ethnic group). There were significant differences by gender…

  8. Space is special in Sign.

    Science.gov (United States)

    Campbell, Ruth; Woll, Bencie

    2003-01-01

    Following groundbreaking work by linguists and cognitive scientists over the past thirty years, it is now generally accepted that sign languages of the deaf, such as ASL (American Sign Language) or BSL (British Sign Language), are structured and processed in a similar manner to spoken languages. The one striking difference is that they operate in a wholly non-auditory, visuospatial medium. How does the medium impact on language processing itself?

  9. Text generation from Taiwanese Sign Language using a PST-based language model for augmentative communication.

    Science.gov (United States)

    Wu, Chung-Hsien; Chiu, Yu-Hsien; Guo, Chi-Shiang

    2004-12-01

    This paper proposes a novel approach to the generation of Chinese sentences from ill-formed Taiwanese Sign Language (TSL) for people with hearing impairments. First, a sign icon-based virtual keyboard is constructed to provide a visualized interface to retrieve sign icons from a sign database. A proposed language model (LM), based on a predictive sentence template (PST) tree, integrates a statistical variable n-gram LM and linguistic constraints to deal with the translation problem from ill-formed sign sequences to grammatical written sentences. The PST tree trained by a corpus collected from the deaf schools was used to model the correspondence between signed and written Chinese. In addition, a set of phrase formation rules, based on trigger pair category, was derived for sentence pattern expansion. These approaches improved the efficiency of text generation and the accuracy of word prediction and, therefore, improved the input rate. For the assessment of practical communication aids, a reading-comprehension training program with ten profoundly deaf students was undertaken in a deaf school in Tainan, Taiwan. Evaluation results show that the literacy aptitude test and subjective satisfactory level are significantly improved.
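
    A minimal sketch of the statistical n-gram ingredient of such a text-generation approach, assuming candidate next words are ranked by bigram counts from a written-sentence corpus; the PST tree, trigger-pair phrase rules and sign-icon keyboard described in the record are not modeled, and the toy corpus and vocabulary are hypothetical.

```python
from collections import Counter, defaultdict

# Toy written-sentence corpus (hypothetical).
corpus = [
    ["i", "want", "to", "drink", "water"],
    ["i", "want", "to", "eat", "rice"],
    ["she", "wants", "to", "drink", "tea"],
]

# Count bigram continuations of each word.
bigrams = defaultdict(Counter)
for sentence in corpus:
    for prev, nxt in zip(sentence, sentence[1:]):
        bigrams[prev][nxt] += 1

def predict_next(prev_word, k=3):
    """Return the k most likely continuations of prev_word with probabilities."""
    counts = bigrams[prev_word]
    total = sum(counts.values()) or 1
    return [(word, count / total) for word, count in counts.most_common(k)]

print(predict_next("to"))   # e.g. [('drink', 0.67), ('eat', 0.33)]
```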

  10. Learning/teaching philosophy in sign language as a cultural issue

    Directory of Open Access Journals (Sweden)

    Maria de Fátima Sá Correia

    2013-06-01

    Full Text Available This paper is about the process of learning/teaching philosophy in a class of deaf students. It starts with a presentation of Portuguese Sign Language that, as with other sign languages, is recognized as a language on equal terms with vocal languages. However, in spite of the recognition of that identity, sign languages have specificity related to the quadrimodal way of their production, and iconicity is an exclusive quality. Next, it will be argued that according to linguistic relativism - even in its weak version - language is a mould of thought. The idea of Philosophy is then discussed as an area of knowledge in which the author and the language of its production are always present. Finally, it is argued that learning/teaching Philosophy in Sign Language in a class of deaf students is linked to deaf culture, and it is not merely a way of overcoming difficulties with the spoken language.

  11. A Comparison of Comprehension Processes in Sign Language Interpreter Videos with or without Captions.

    Science.gov (United States)

    Debevc, Matjaž; Milošević, Danijela; Kožuh, Ines

    2015-01-01

    One important theme in captioning is whether the implementation of captions in individual sign language interpreter videos can positively affect viewers' comprehension when compared with sign language interpreter videos without captions. In our study, an experiment was conducted using four video clips with information about everyday events. Fifty-one deaf and hard of hearing sign language users alternately watched the sign language interpreter videos with, and without, captions. Afterwards, they answered ten questions. The results showed that the presence of captions positively affected their rates of comprehension, which increased by 24% among deaf viewers and 42% among hard of hearing viewers. The most obvious differences in comprehension between watching sign language interpreter videos with and without captions were found for the subjects of hiking and culture, where comprehension was higher when captions were used. The results led to suggestions for the consistent use of captions in sign language interpreter videos in various media.

  12. Classifying hand configurations in Nederlandse Gebarentaal (Sign Language of the Netherlands)

    NARCIS (Netherlands)

    Zwitserlood, I.E.P.

    2003-01-01

    This study investigates the morphological and morphosyntactic characteristics of hand configurations in signs, particularly in Nederlandse Gebarentaal (NGT). The literature on sign languages in general acknowledges that hand configurations can function as morphemes, more specifically as classifiers

  13. American Holidays: Culture and Language Learning Combined.

    Science.gov (United States)

    Wylie, Grace Scott

    Suggestions for combining cultural exposure and language instruction through class activities geared to American holidays are outlined. General information about gathering holiday-related realia and instructional materials from local newspapers and magazines is provided, and four specific holidays are highlighted. For each holiday, sources of…

  14. Translation modalities applied to the interpretation in brazilian sign language

    Directory of Open Access Journals (Sweden)

    Silvana Nicoloso

    2015-12-01

    Full Text Available This article was developed from a chapter of the first author's doctoral thesis, with a focused discussion on the practice of simultaneous interpretation into Brazilian Sign Language, based on the translation modalities proposed by Aubert (1998). The interpreted text, called “Discovering who we are”, was extracted from the book Learning to See, by Sherman Wilcox and Phyllis Perrin Wilcox, and translated by Tarcisio de Arantes Leite. The interpretations were recorded in a media studio, with the official consent of the Ethics Committee for Research with Human Beings at the Federal University of Santa Catarina, Brazil, and the data were analyzed using the ELAN software. Results indicate that using a research method that considers translation modalities may contribute to a clearer view of the similarities and differences between the selected linguistic pairs.

  15. Psychometric properties of a sign language version of the Mini International Neuropsychiatric Interview (MINI)

    OpenAIRE

    Øhre, Beate; Saltnes, Hege; Tetzchner, Stephen von; Falkum, Erik

    2014-01-01

    Background: There is a need for psychiatric assessment instruments that enable reliable diagnoses in persons with hearing loss who have sign language as their primary language. The objective of this study was to assess the validity of the Norwegian Sign Language (NSL) version of the Mini International Neuropsychiatric Interview (MINI). Methods: The MINI was translated into NSL. Forty-one signing patients consecutively referred to two specialised psychiatric units were assessed with a diagnostic...

  16. Variation in handshape and orientation in British Sign Language: The case of the '1' hand configuration.

    Science.gov (United States)

    Fenlon, Jordan; Schembri, Adam; Rentelis, Ramas; Cormier, Kearsy

    2013-01-01

    This paper investigates phonological variation in British Sign Language (BSL) signs produced with a '1' hand configuration in citation form. Multivariate analyses of 2084 tokens reveal that handshape variation in these signs is constrained by linguistic factors (e.g., the preceding and following phonological environment, grammatical category, indexicality, lexical frequency). The only significant social factor was region. For the subset of signs where orientation was also investigated, only grammatical function was important (the surrounding phonological environment and social factors were not significant). The implications for an understanding of pointing signs in signed languages are discussed.

  17. Towards a Sign Language Synthesizer: a Bridge to Communication Gap of the Hearing/Speech Impaired Community

    Science.gov (United States)

    Maarif, H. A.; Akmeliawati, R.; Gunawan, T. S.; Shafie, A. A.

    2013-12-01

    Sign language synthesis is a method of visualizing sign language movement from spoken language. Sign language (SL) is one of the means used by HSI (hearing/speech-impaired) people to communicate with hearing people, but unfortunately the number of people, including HSI people, who are familiar with sign language is very limited. This causes difficulties in communication between hearing people and HSI people. Sign language is not only hand movement but also facial expression, and the two elements complement each other: the hand movement shows the meaning of each sign, while the facial expression shows the emotion of the signer. Generally, a sign language synthesizer recognizes the spoken language using speech recognition, handles the grammatical processing with a context-free grammar, and produces the 3D output with a recorded avatar. This paper analyzes and compares the existing techniques for developing a sign language synthesizer, leading to the IIUM Sign Language Synthesizer.

  18. Sign Language Recognition with the Kinect Sensor Based on Conditional Random Fields

    Directory of Open Access Journals (Sweden)

    Hee-Deok Yang

    2014-12-01

    Full Text Available Sign language is a visual language used by deaf people. One difficulty of sign language recognition is that sign instances vary in both motion and shape in three-dimensional (3D) space. In this research, we use 3D depth information from hand motions, generated from Microsoft’s Kinect sensor, and apply a hierarchical conditional random field (CRF) that recognizes hand signs from the hand motions. The proposed method uses a hierarchical CRF to detect candidate segments of signs using hand motions, and then a BoostMap embedding method to verify the hand shapes of the segmented signs. Experiments demonstrated that the proposed method could recognize signs from signed sentence data at a rate of 90.4%.

  19. Sign language recognition with the Kinect sensor based on conditional random fields.

    Science.gov (United States)

    Yang, Hee-Deok

    2014-12-24

    Sign language is a visual language used by deaf people. One difficulty of sign language recognition is that sign instances vary in both motion and shape in three-dimensional (3D) space. In this research, we use 3D depth information from hand motions, generated from Microsoft's Kinect sensor, and apply a hierarchical conditional random field (CRF) that recognizes hand signs from the hand motions. The proposed method uses a hierarchical CRF to detect candidate segments of signs using hand motions, and then a BoostMap embedding method to verify the hand shapes of the segmented signs. Experiments demonstrated that the proposed method could recognize signs from signed sentence data at a rate of 90.4%.
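
    A rough sketch of the segment-detection idea described in these records, assuming the sklearn-crfsuite package is available: frames of a hand-motion stream are labeled as sign or transition by a linear-chain CRF, from which candidate sign segments could be cut out. The hierarchical structure and the BoostMap hand-shape verification are not modeled, and the per-frame features and data are hypothetical.

```python
import sklearn_crfsuite

def frame_features(seq, t):
    """Per-frame features from a 3D hand-velocity stream (hypothetical)."""
    vx, vy, vz = seq[t]
    speed = (vx**2 + vy**2 + vz**2) ** 0.5
    return {"speed": speed, "moving": "yes" if speed > 0.1 else "no"}

# Toy training data: sequences of per-frame hand velocities, with frames
# labeled as part of a sign ('sign') or a transition ('trans').
X_raw = [
    [(0.0, 0.0, 0.0), (0.4, 0.1, 0.0), (0.5, 0.2, 0.1), (0.0, 0.0, 0.0)],
    [(0.1, 0.0, 0.0), (0.6, 0.3, 0.2), (0.0, 0.1, 0.0)],
]
y_raw = [
    ["trans", "sign", "sign", "trans"],
    ["trans", "sign", "trans"],
]

X = [[frame_features(seq, t) for t in range(len(seq))] for seq in X_raw]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, y_raw)
print(crf.predict(X))   # per-frame 'sign' / 'trans' labels
```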

  20. Sign language ability in young deaf signers predicts comprehension of written sentences in English.

    Science.gov (United States)

    Andrew, Kathy N; Hoshooley, Jennifer; Joanisse, Marc F

    2014-01-01

    We investigated the robust correlation between American Sign Language (ASL) and English reading ability in 51 young deaf signers ages 7;3 to 19;0. Signers were divided into 'skilled' and 'less-skilled' signer groups based on their performance on three measures of ASL. We next assessed reading comprehension of four English sentence structures (actives, passives, pronouns, reflexive pronouns) using a sentence-to-picture-matching task. Of interest was the extent to which ASL proficiency provided a foundation for lexical and syntactic processes of English. Skilled signers outperformed less-skilled signers overall. Error analyses further indicated greater single-word recognition difficulties in less-skilled signers marked by a higher rate of errors reflecting an inability to identify the actors and actions described in the sentence. Our findings provide evidence that increased ASL ability supports English sentence comprehension both at the levels of individual words and syntax. This is consistent with the theory that first language learning promotes second language through transference of linguistic elements irrespective of the transparency of mapping of grammatical structures between the two languages.

  1. Sign language ability in young deaf signers predicts comprehension of written sentences in English.

    Directory of Open Access Journals (Sweden)

    Kathy N Andrew

    Full Text Available We investigated the robust correlation between American Sign Language (ASL) and English reading ability in 51 young deaf signers ages 7;3 to 19;0. Signers were divided into 'skilled' and 'less-skilled' signer groups based on their performance on three measures of ASL. We next assessed reading comprehension of four English sentence structures (actives, passives, pronouns, reflexive pronouns) using a sentence-to-picture-matching task. Of interest was the extent to which ASL proficiency provided a foundation for lexical and syntactic processes of English. Skilled signers outperformed less-skilled signers overall. Error analyses further indicated greater single-word recognition difficulties in less-skilled signers marked by a higher rate of errors reflecting an inability to identify the actors and actions described in the sentence. Our findings provide evidence that increased ASL ability supports English sentence comprehension both at the levels of individual words and syntax. This is consistent with the theory that first language learning promotes second language through transference of linguistic elements irrespective of the transparency of mapping of grammatical structures between the two languages.

  2. The effect of sign language structure on complex word reading in Chinese deaf adolescents.

    Science.gov (United States)

    Lu, Aitao; Yu, Yanping; Niu, Jiaxin; Zhang, John X

    2015-01-01

    The present study was carried out to investigate whether sign language structure plays a role in the processing of complex words (i.e., derivational and compound words), in particular, the delay of complex word reading in deaf adolescents. Chinese deaf adolescents were found to respond faster to derivational words than to compound words for one-sign-structure words, but showed comparable performance for two-sign-structure words. For both derivational and compound words, response latencies to one-sign-structure words were shorter than to two-sign-structure words. These results provide strong evidence that the structure of sign language affects written word processing in Chinese. Additionally, differences between derivational and compound words in the one-sign-structure condition indicate that Chinese deaf adolescents acquire print morphological awareness. The results also showed that delayed word reading was found in derivational words with two signs (DW-2), compound words with one sign (CW-1), and compound words with two signs (CW-2), but not in derivational words with one sign (DW-1), with the delay being maximum in DW-2, medium in CW-2, and minimum in CW-1, suggesting that the structure of sign language has an impact on the delayed processing of Chinese written words in deaf adolescents. These results provide insight into the mechanisms about how sign language structure affects written word processing and its delayed processing relative to their hearing peers of the same age.

  3. The British Sign Language Variant of Stokoe Notation: Report on a Type-Design Project.

    Science.gov (United States)

    Thoutenhoofd, Ernst

    2003-01-01

    Explores the outcome of a publicly-funded research project titled "Redesign of the British Sign Language (BSL) Notation System with a New Font for Use in ICT." The aim of the project was to redesign the British Sign Language variant of Stokoe notation for practical use in information technology systems and software, such as lexical…

  4. The Link between Form and Meaning in British Sign Language: Effects of Iconicity for Phonological Decisions

    Science.gov (United States)

    Thompson, Robin L.; Vinson, David P.; Vigliocco, Gabriella

    2010-01-01

    Signed languages exploit the visual/gestural modality to create iconic expression across a wide range of basic conceptual structures in which the phonetic resources of the language are built up into an analogue of a mental image (Taub, 2001). Previously, we demonstrated a processing advantage when iconic properties of signs were made salient in a…

  5. Atypical speech and language development: a consensus study on clinical signs in the Netherlands

    NARCIS (Netherlands)

    Visser-Bochane, Margot I.; Gerrits, Ellen; Schans, Cees P. van der; Reijneveld, Sijmen A.; Luinge, Margreet R.

    2016-01-01

    Background: Atypical speech and language development is one of the most common developmental difficulties in young children. However, which clinical signs characterize atypical speech–language development at what age is not clear. Aim: To achieve a national and valid consensus on clinical signs and

  6. Atypical speech and language development : a consensus study on clinical signs in the Netherlands

    NARCIS (Netherlands)

    Visser-Bochane, Margot I; Gerrits, Ellen; van der Schans, Cees P; Reijneveld, Sijmen A; Luinge, Margreet R

    2016-01-01

    BACKGROUND: Atypical speech and language development is one of the most common developmental difficulties in young children. However, which clinical signs characterize atypical speech-language development at what age is not clear. AIM: To achieve a national and valid consensus on clinical signs and

  7. Interpreter's Wrist: Repetitive Stress Injury and Carpal Tunnel Syndrome in Sign Language Interpreters.

    Science.gov (United States)

    Stedt, Joe D.

    1992-01-01

    In a survey concerning repetitive stress injury (RSI) and carpal tunnel syndrome, 87 percent of the 40 sign language interpreters reported that they had at some time experienced at least 2 symptoms associated with RSI, and most interpreters knew others with RSI problems. Data indicate that RSI is a severe problem among sign language interpreters.…

  8. Lexical Properties of Slovene Sign Language: A Corpus-Based Study

    Science.gov (United States)

    Vintar, Špela

    2015-01-01

    Slovene Sign Language (SZJ) has as yet received little attention from linguists. This article presents some basic facts about SZJ, its history, current status, and a description of the Slovene Sign Language Corpus and Pilot Grammar (SIGNOR) project, which compiled and annotated a representative corpus of SZJ. Finally, selected quantitative data…

  9. Constructing an Online Test Framework, Using the Example of a Sign Language Receptive Skills Test

    Science.gov (United States)

    Haug, Tobias; Herman, Rosalind; Woll, Bencie

    2015-01-01

    This paper presents the features of an online test framework for a receptive skills test that has been adapted, based on a British template, into different sign languages. The online test includes features that meet the needs of the different sign language versions. Features such as usability of the test, automatic saving of scores, and score…

  10. The non- (existent) native signer: sign language research in a small deaf population

    NARCIS (Netherlands)

    Costello, B.; Fernández, J.; Landa, A.; Müller de Quadros, R.

    2008-01-01

    This paper examines the concept of a native language user and looks at the different definitions of native signer within the field of sign language research. A description of the deaf signing population in the Basque Country shows that the figure of 5-10% typically cited for deaf individuals born in

  11. Shared Thinking Processes with Four Deaf Poets: A Window on "the Creative" in "Creative Sign Language"

    Science.gov (United States)

    West, Donna; Sutton-Spence, Rachel

    2012-01-01

    This article discusses a new way of thinking about analyzing sign-language poetry. Rather than merely focusing on the product, the method involves observing the process of its creation. Recent years have witnessed increasing literary and linguistic analysis of sign-language poetry, with commentaries on texts and performances being set within and…

  12. Arabic sign language recognition based on HOG descriptor

    Science.gov (United States)

    Ben Jmaa, Ahmed; Mahdi, Walid; Ben Jemaa, Yousra; Ben Hamadou, Abdelmajid

    2017-02-01

    We present in this paper a new approach for Arabic sign language (ArSL) alphabet recognition using hand gesture analysis. This analysis consists in extracting histogram of oriented gradient (HOG) features from a hand image and then using them to train SVM models, which are used to recognize the ArSL alphabet in real time from hand gestures captured with a Microsoft Kinect camera. Our approach involves three steps: (i) hand detection and localization using a Microsoft Kinect camera, (ii) hand segmentation, and (iii) feature extraction and ArSL alphabet recognition. On each input image, first obtained using a depth sensor, we apply our method based on hand anatomy to segment the hand and eliminate erroneous pixels. This approach is invariant to scale, rotation and translation of the hand. Experimental results show the effectiveness of the new approach: the proposed ArSL system is able to recognize the ArSL alphabet with an accuracy of 90.12%.
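
    A minimal sketch of a HOG-plus-SVM pipeline of the kind described above, not the authors' implementation: HOG descriptors are extracted from already segmented hand crops and fed to an SVM. Kinect capture and the hand-anatomy segmentation step are omitted, and the image arrays and labels are hypothetical stand-ins.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_descriptor(hand_img):
    """HOG features of a 64x64 grayscale hand crop."""
    return hog(hand_img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Hypothetical stand-in data: 100 fake hand crops over 10 alphabet classes.
rng = np.random.default_rng(1)
images = rng.random((100, 64, 64))
labels = rng.integers(0, 10, size=100)

X = np.array([hog_descriptor(img) for img in images])
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)
print(clf.predict(X[:5]))   # predicted letter indices
```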

  13. Towards a Transcription System of Sign Language for 3D Virtual Agents

    Science.gov (United States)

    Do Amaral, Wanessa Machado; de Martino, José Mario

    Accessibility is a growing concern in computer science. Since virtual information is mostly presented visually, it may seem that access for deaf people is not an issue. However, for prelingually deaf individuals, those who were deaf before acquiring and formally learning a language, written information is often less accessible than information presented in signing. Further, for this community, signing is their language of choice, and reading text in a spoken language is akin to using a foreign language. Sign language uses gestures and facial expressions and is widely used by deaf communities. To enable efficient production of signed content in virtual environments, it is necessary to make written records of signs. Transcription systems have been developed to describe sign languages in written form, but these systems have limitations. Since they were not originally designed with computer animation in mind, in general the recognition and reproduction of signs in these systems is an easy task only for those who know the system deeply. The aim of this work is to develop a transcription system to provide signed content in virtual environments. To animate a virtual avatar, a transcription system requires sufficiently explicit information, such as movement speed, sign concatenation, the sequence of each hold and movement, and facial expressions, so that the articulation is close to reality. Although many important studies of sign languages have been published, the transcription problem remains a challenge. Thus, a notation to describe, store and play signed content in virtual environments offers a multidisciplinary study and research tool, which may help linguistic studies to understand sign language structure and grammar.

  14. Impact of bilingual experiences on language inhibition ability: Evidence from English-Chinese unimodal and English-American Sign Language bimodal bilinguals

    Institute of Scientific and Technical Information of China (English)

    李恒; 曹宇

    2016-01-01

    The study compared unimodal (English-Chinese) and bimodal (English-American Sign Language) bilinguals with both low and high L2 proficiency. In Experiments 1 and 2, a homograph interference task was used to investigate the bilingual advantage in conflict resolution during sentence processing. Participants were asked to read a sentence ending with a homograph (e.g., He walked along the bank.) and then judge if a target word (e.g., RIVER or MONEY) matched the meaning of the sentence they just read. Although the target word (e.g., MONEY) is semantically related to one meaning of the homograph (bank: a financial institution), it is not the meaning supported by the sentence context (e.g., He walked along) and, consequently, this alternative meaning must be suppressed in order to correctly respond "no". Thus, a measure of homograph interference can be computed by comparing the mean RT for target words that are semantically relevant to the sentences or not. Experiment 1 showed that the unimodal bilinguals with higher L2 (Chinese) proficiency outperformed the unimodal bilinguals with lower L2 proficiency and the monolinguals on the homograph interference task, which required resolving conflict from competing alternative meanings. In addition, there was no difference between the unimodal bilinguals with lower L2 proficiency and the monolinguals. In Experiment 2, there was no performance difference in the homograph interference task between the bimodal bilinguals with higher L2 (American Sign Language) proficiency, the bimodal bilinguals with lower L2 proficiency and the monolinguals. Taken together, the results across the two experiments indicate that both L2 modality and L2 proficiency are mediating factors of the bilingual advantage effect. According to the results of the two experiments, one possible explanation for this enhancement of language inhibitory ability in unimodal bilinguals is that the regular use of two languages requires a mechanism to select the target language and inhibit the non-target language, an experience that…

  15. Event representations constrain the structure of language: Sign language as a window into universally accessible linguistic biases.

    Science.gov (United States)

    Strickland, Brent; Geraci, Carlo; Chemla, Emmanuel; Schlenker, Philippe; Kelepir, Meltem; Pfau, Roland

    2015-05-12

    According to a theoretical tradition dating back to Aristotle, verbs can be classified into two broad categories. Telic verbs (e.g., "decide," "sell," "die") encode a logical endpoint, whereas atelic verbs (e.g., "think," "negotiate," "run") do not, and the denoted event could therefore logically continue indefinitely. Here we show that sign languages encode telicity in a seemingly universal way and moreover that even nonsigners lacking any prior experience with sign language understand these encodings. In experiments 1-5, nonsigning English speakers accurately distinguished between telic (e.g., "decide") and atelic (e.g., "think") signs from (the historically unrelated) Italian Sign Language, Sign Language of the Netherlands, and Turkish Sign Language. These results were not due to participants' inferring that the sign merely imitated the action in question. In experiment 6, we used pseudosigns to show that the presence of a salient visual boundary at the end of a gesture was sufficient to elicit telic interpretations, whereas repeated movement without salient boundaries elicited atelic interpretations. Experiments 7-10 confirmed that these visual cues were used by all of the sign languages studied here. Together, these results suggest that signers and nonsigners share universally accessible notions of telicity as well as universally accessible "mapping biases" between telicity and visual form.

  16. A Kinect-Based Sign Language Hand Gesture Recognition System for Hearing- and Speech-Impaired: A Pilot Study of Pakistani Sign Language.

    Science.gov (United States)

    Halim, Zahid; Abbas, Ghulam

    2015-01-01

    Sign language provides hearing- and speech-impaired individuals with an interface to communicate with other members of society. Unfortunately, sign language is not understood by most people. A gadget based on image processing and pattern recognition can therefore provide a vital aid for detecting and translating sign language into a vocal language. This work presents a system for detecting and understanding sign language gestures with a custom-built software tool and later translating the gesture into a vocal language. For the purpose of recognizing a particular gesture, the system employs a Dynamic Time Warping (DTW) algorithm, and an off-the-shelf software tool is employed for vocal language generation. Microsoft Kinect is the primary tool used to capture the video stream of a user. The proposed method is capable of successfully detecting gestures stored in the dictionary with an accuracy of 91%. The proposed system has the ability to define and add custom-made gestures. Based on an experiment in which 10 individuals with impairments used the system to communicate with 5 people with no disability, 87% agreed that the system was useful.
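
    A minimal sketch of DTW-based gesture matching of the kind described above, not the authors' implementation: an observed sequence of joint positions is compared against a small dictionary of template gestures and the closest template is returned. Kinect skeleton capture and speech output are omitted, and the sequences below are hypothetical.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two sequences of feature vectors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(sequence, templates):
    """Return the name of the template gesture with the smallest DTW distance."""
    return min(templates, key=lambda name: dtw_distance(sequence, templates[name]))

# Hypothetical templates: short sequences of wrist (x, y, z) positions.
templates = {
    "hello":  np.array([[0.0, 1.0, 0.5], [0.2, 1.2, 0.5]]),
    "thanks": np.array([[0.0, 0.8, 0.4], [0.0, 0.5, 0.4]]),
}
observed = np.array([[0.05, 1.0, 0.5], [0.22, 1.15, 0.5]])
print(recognize(observed, templates))   # -> hello
```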

  17. Using formal logic to represent sign language phonetics in semi-automatic annotation tasks

    OpenAIRE

    Curiel Diaz, Arturo Tlacaélel

    2015-01-01

    This thesis presents a formal framework for the representation of Signed Languages (SLs), the languages of Deaf communities, in semi-automatic recognition tasks. SLs are complex visuo-gestural communication systems; by using bodily gestures, signers achieve the same level of expressivity as sound-based languages like English or French. However, unlike these, SL morphemes correspond to complex sequences of highly specific body postures, interleaved with postural changes: during signing,...

  18. Hand and mouth: cortical correlates of lexical processing in British Sign Language and speechreading English.

    Science.gov (United States)

    Capek, Cheryl M; Waters, Dafydd; Woll, Bencie; MacSweeney, Mairéad; Brammer, Michael J; McGuire, Philip K; David, Anthony S; Campbell, Ruth

    2008-07-01

    Spoken languages use one set of articulators -- the vocal tract, whereas signed languages use multiple articulators, including both manual and facial actions. How sensitive are the cortical circuits for language processing to the particular articulators that are observed? This question can only be addressed with participants who use both speech and a signed language. In this study, we used functional magnetic resonance imaging to compare the processing of speechreading and sign processing in deaf native signers of British Sign Language (BSL) who were also proficient speechreaders. The following questions were addressed: To what extent do these different language types rely on a common brain network? To what extent do the patterns of activation differ? How are these networks affected by the articulators that languages use? Common peri-sylvian regions were activated both for speechreading English words and for BSL signs. Distinctive activation was also observed reflecting the language form. Speechreading elicited greater activation in the left mid-superior temporal cortex than BSL, whereas BSL processing generated greater activation at the temporo-parieto-occipital junction in both hemispheres. We probed this distinction further within BSL, where manual signs can be accompanied by different types of mouth action. BSL signs with speech-like mouth actions showed greater superior temporal activation, whereas signs made with non-speech-like mouth actions showed more activation in posterior and inferior temporal regions. Distinct regions within the temporal cortex are not only differentially sensitive to perception of the distinctive articulators for speech and for sign but also show sensitivity to the different articulators within the (signed) language.

  19. A sign-component-based framework for Chinese sign language recognition using accelerometer and sEMG data.

    Science.gov (United States)

    Li, Yun; Chen, Xiang; Zhang, Xu; Wang, Kongqiao; Wang, Z Jane

    2012-10-01

    Identification of constituent components of each sign gesture can be beneficial to the improved performance of sign language recognition (SLR), especially for large-vocabulary SLR systems. Aiming at developing such a system using portable accelerometer (ACC) and surface electromyographic (sEMG) sensors, we propose a framework for automatic Chinese SLR at the component level. In the proposed framework, data segmentation, as an important preprocessing operation, is performed to divide a continuous sign language sentence into subword segments. Based on the features extracted from ACC and sEMG data, three basic components of sign subwords, namely the hand shape, orientation, and movement, are further modeled and the corresponding component classifiers are learned. At the decision level, a sequence of subwords can be recognized by fusing the likelihoods at the component level. The overall classification accuracy of 96.5% for a vocabulary of 120 signs and 86.7% for 200 sentences demonstrate the feasibility of interpreting sign components from ACC and sEMG data and clearly show the superior recognition performance of the proposed method when compared with the previous SLR method at the subword level. The proposed method seems promising for implementing large-vocabulary portable SLR systems.
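
    A rough sketch of the decision-level fusion described above, not the authors' implementation: separate component classifiers score the hand shape, orientation and movement of a segmented subword, and their log-likelihoods are summed to pick the most likely sign. ACC/sEMG feature extraction, segmentation and classifier training are omitted, and the probability tables are hypothetical.

```python
import numpy as np

SIGNS = ["book", "eat", "go"]   # hypothetical sign vocabulary

# Hypothetical per-component class probabilities for one segmented subword,
# as would come from three trained component classifiers.
component_probs = {
    "handshape":   np.array([0.6, 0.3, 0.1]),
    "orientation": np.array([0.5, 0.2, 0.3]),
    "movement":    np.array([0.7, 0.2, 0.1]),
}

def fuse(probs, eps=1e-9):
    """Sum log-likelihoods across components and return the best sign."""
    fused = sum(np.log(p + eps) for p in probs.values())
    return SIGNS[int(np.argmax(fused))], fused

best_sign, fused_scores = fuse(component_probs)
print(best_sign)      # -> book
print(fused_scores)   # fused log-likelihood per candidate sign
```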

  20. Recognition of signed and spoken language: Different sensory inputs, the same segmentation procedure

    NARCIS (Netherlands)

    Orfanidou, E.; Adam, R.; Morgan, G.; McQueen, J.M.

    2010-01-01

    Signed languages are articulated through simultaneous upper-body movements and are seen; spoken languages are articulated through sequential vocal-tract movements and are heard. But word recognition in both language modalities entails segmentation of a continuous input into discrete lexical units. A

  1. Robust Sign Language Recognition System Using ToF Depth Cameras

    CERN Document Server

    Zahedi, Morteza

    2011-01-01

    Sign language recognition is a difficult task, yet it is required at real-time speed by many applications. Using RGB cameras for the recognition of sign languages is not very successful in practical situations, and accurate 3D imaging requires expensive and complex instruments. With the introduction of Time-of-Flight (ToF) depth cameras in recent years, it has become easier to scan the environment for accurate yet fast depth images of objects without the need for any extra calibration object. In this paper, a robust system for sign language recognition using ToF depth cameras is presented that converts the recorded signs to a standard and portable XML sign language named SiGML for easy transfer and conversion to real-time 3D virtual character animations. Feature extraction using moments and classification using a nearest-neighbor classifier are used to track hand gestures, and a significant result of 100% is achieved for the proposed approach.

  2. A Kinect based sign language recognition system using spatio-temporal features

    Science.gov (United States)

    Memiş, Abbas; Albayrak, Songül

    2013-12-01

    This paper presents a sign language recognition system that uses spatio-temporal features on RGB video images and depth maps for dynamic gestures of Turkish Sign Language. The proposed system uses a motion difference and accumulation approach for temporal gesture analysis. The motion accumulation method, which is effective for temporal-domain analysis of gestures, produces an accumulated motion image by combining differences of successive video frames. Then, a 2D Discrete Cosine Transform (DCT) is applied to the accumulated motion images, and the temporal-domain features are transformed into the spatial domain. These processes are performed on RGB images and depth maps separately. DCT coefficients that represent sign gestures are picked up via zigzag scanning, and feature vectors are generated. To recognize sign gestures, a K-Nearest Neighbor classifier with Manhattan distance is applied. The performance of the proposed sign language recognition system is evaluated on a sign database that contains 1002 isolated dynamic signs belonging to 111 words of Turkish Sign Language (TSL) in three different categories. The proposed system achieves promising success rates.
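
    A minimal sketch of the spatio-temporal pipeline described above, not the authors' implementation: frame differences are accumulated into a motion image, a 2D DCT is applied, low-frequency coefficients are collected by zigzag scanning, and a nearest-neighbour classifier with Manhattan distance labels the sign. Kinect capture and the separate RGB/depth streams are omitted, and the video arrays and labels are hypothetical.

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.neighbors import KNeighborsClassifier

def accumulated_motion(frames):
    """Sum of absolute differences of successive grayscale frames."""
    return np.sum(np.abs(np.diff(frames.astype(float), axis=0)), axis=0)

def dct2(img):
    return dct(dct(img, axis=0, norm="ortho"), axis=1, norm="ortho")

def zigzag_features(coeffs, k=64):
    """Take the k lowest-frequency DCT coefficients in zigzag (anti-diagonal) order."""
    h, w = coeffs.shape
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda ij: (ij[0] + ij[1], ij[1] if (ij[0] + ij[1]) % 2 else ij[0]))
    return np.array([coeffs[i, j] for i, j in order[:k]])

def video_to_features(frames):
    return zigzag_features(dct2(accumulated_motion(frames)))

# Hypothetical stand-in data: 20 clips, 10 frames each, 32x32 pixels, 5 classes.
rng = np.random.default_rng(2)
videos = rng.random((20, 10, 32, 32))
labels = rng.integers(0, 5, size=20)

X = np.array([video_to_features(v) for v in videos])
knn = KNeighborsClassifier(n_neighbors=1, metric="manhattan").fit(X, labels)
print(knn.predict(X[:3]))   # predicted sign labels
```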

  3. Teachers' perceptions of promoting sign language phonological awareness in an ASL/English bilingual program.

    Science.gov (United States)

    Crume, Peter K

    2013-10-01

    The National Reading Panel emphasizes that spoken language phonological awareness (PA) developed at home and school can lead to improvements in reading performance in young children. However, research indicates that many deaf children are good readers even though they have limited spoken language PA. Is it possible that some deaf students benefit from teachers who promote sign language PA instead? The purpose of this qualitative study is to examine teachers' beliefs and instructional practices related to sign language PA. A thematic analysis is conducted on 10 participant interviews at an ASL/English bilingual school for the deaf to understand their views and instructional practices. The findings reveal that the participants had strong beliefs in developing students' structural knowledge of signs and used a variety of instructional strategies to build students' knowledge of sign structures in order to promote their language and literacy skills.

  4. Anthropomorphism in Sign Languages: A Look at Poetry and Storytelling with a Focus on British Sign Language

    Science.gov (United States)

    Sutton-Spence, Rachel; Napoli, Donna Jo

    2010-01-01

    The work presented here considers some linguistic methods used in sign anthropomorphism. We find a cline of signed anthropomorphism that depends on a number of factors, including the skills and intention of the signer, the animacy of the entities represented, the form of their bodies, and the form of vocabulary signs referring to those entities.…

  5. Vital Signs: The State of African Americans in Higher Education.

    Science.gov (United States)

    Cross, Theodore L.; And Others

    1994-01-01

    Presents a statistical record of the progress of African Americans in institutions of higher education in the United States. Statistics include trends in black enrollment, library resources in historically black colleges, leading foundation grants, blacks in business schools, and comparative analysis of Asian Americans and blacks in higher…

  6. Deaf interpreters of Brazilian Sign Language: the new field of translation/interpretation and its cultural challenge

    Directory of Open Access Journals (Sweden)

    Ana Regina e Souza Campello

    2014-07-01

    Full Text Available This article is the result of research on a new mode of translation/interpretation performed by Deaf sign language interpreters, observing the Deaf norm (STONE, 2009 apud SOUZA, 2010). It deals with the translation and interpretation work of the Deaf actor/translator-interpreter and, finally, with interpretation from one sign language into another (SEGALA, 2010; SOUZA, 2010). Recently, this new field of translation emerged in the educational context of distance education. These translation and interpretation activities have been performed by bilingual, intermodal Deaf professionals. Precisely because it represents a new field of study, this article presents its constitution.

  7. A dynamic gesture recognition system for the Korean sign language (KSL).

    Science.gov (United States)

    Kim, J S; Jang, W; Bien, Z

    1996-01-01

    Sign language is a method of communication for deaf people. Articulated gestures and postures of the hands and fingers are commonly used in sign language. This paper presents a system which recognizes the Korean sign language (KSL) and translates it into normal Korean text. A pair of data gloves is used as the sensing device for detecting the motions of the hands and fingers. For efficient recognition of gestures and postures, a technique for efficient classification of motions is proposed, and a fuzzy min-max neural network is adopted for on-line pattern recognition.

  8. Motives and Outcomes of New Zealand Sign Language Legislation: A Comparative Study between New Zealand and Finland

    Science.gov (United States)

    Reffell, Hayley; McKee, Rachel Locker

    2009-01-01

    The medicalized interpretation of deafness has until recently seen the rights and protections of sign language users embedded in disability law. Yet the rights and protections crucial to sign language users centre predominantly on matters of language access, maintenance and identity. Legislators, motivated by pressure from sign language…

  9. Lexical prediction via forward models: N400 evidence from German Sign Language.

    Science.gov (United States)

    Hosemann, Jana; Herrmann, Annika; Steinbach, Markus; Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias

    2013-09-01

    Models of language processing in the human brain often emphasize the prediction of upcoming input-for example in order to explain the rapidity of language understanding. However, the precise mechanisms of prediction are still poorly understood. Forward models, which draw upon the language production system to set up expectations during comprehension, provide a promising approach in this regard. Here, we present an event-related potential (ERP) study on German Sign Language (DGS) which tested the hypotheses of a forward model perspective on prediction. Sign languages involve relatively long transition phases between one sign and the next, which should be anticipated as part of a forward model-based prediction even though they are semantically empty. Native speakers of DGS watched videos of naturally signed DGS sentences which either ended with an expected or a (semantically) unexpected sign. Unexpected signs engendered a biphasic N400-late positivity pattern. Crucially, N400 onset preceded critical sign onset and was thus clearly elicited by properties of the transition phase. The comprehension system thereby clearly anticipated modality-specific information about the realization of the predicted semantic item. These results provide strong converging support for the application of forward models in language comprehension.

  10. PROPOSING A LANGUAGE EXPERIENCE AND SELF-ASSESSMENT OF PROFICIENCY QUESTIONNAIRE FOR BILINGUAL BRAZILIAN SIGN LANGUAGE/PORTUGUESE HEARING TEACHERS

    Directory of Open Access Journals (Sweden)

    Ingrid FINGER

    2014-12-01

    This article presents a language experience and self-assessment of proficiency questionnaire for hearing teachers who use Brazilian Sign Language and Portuguese in their teaching practice. By focusing on hearing teachers who work in Deaf education contexts, the questionnaire is presented as a tool that may complement the assessment of the linguistic skills of hearing teachers. The proposal takes into account important factors in bilingualism studies, such as the importance of knowing the participant's family, professional and social background (KAUFMANN, 2010). This work uses as models the following questionnaires: the LEAP-Q (MARIAN; BLUMENFELD; KAUSHANSKAYA, 2007), the SLSCO – Sign Language Skills Classroom Observation (REEVES et al., 2000) and the Language Attitude Questionnaire (KAUFMANN, 2010), taking into consideration the different kinds of exposure to Brazilian Sign Language. The questionnaire is designed for bilingual bimodal hearing teachers who work in bilingual schools for the Deaf or in specialized educational departments that assist deaf students.

  11. Sign Language Interpretation and Deaf Culture

    Institute of Scientific and Technical Information of China (English)

    张敏

    2014-01-01

    Sign language interpretation in deaf-school classrooms and in the media is mostly interpretation of Chinese, that is, it uses grammatical (Chinese-based) signing; in reality, however, most deaf people cannot receive the information conveyed by such interpretation easily and smoothly. This paper discusses the relationship between sign language interpretation and Deaf culture, and holds that sign language interpretation is the transfer of information between sign language and the Chinese language, and an exchange and collision between two cultures.

  12. Sign Language Recognition Based on Position and Movement Using Multi-Stream HMM

    Science.gov (United States)

    Nishida, Masafumi; Maebatake, Masaru; Suzuki, Iori; Horiuchi, Yasuo; Kuroiwa, Shingo

    To establish a universal communication environment, computer systems should recognize various modal communication languages. In conventional sign language recognition, recognition is performed at the word level using gesture information about hand shape and movement, and each feature is given the same weight when calculating the probability for recognition. We think hand position is very important for sign language recognition, since the meaning of a word differs according to hand position. In this study, we propose a sign language recognition method using a multi-stream HMM technique to show the importance of position and movement information for sign language recognition. We conducted recognition experiments using 28,200 sign language word data. A recognition accuracy of 82.1 % was obtained with the appropriate weights (position:movement = 0.2:0.8), while 77.8 % was obtained with equal weights. We thereby demonstrated that it is necessary to put more weight on movement than on position in sign language recognition.
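
    As a rough, hedged illustration of the stream-weighting idea summarized above (not code from the paper), the sketch below combines hypothetical per-stream log-likelihoods for position and movement with the reported 0.2:0.8 weights before choosing the best-scoring word; the lexicon, feature values and per-stream scores are all invented for the example.

```python
import numpy as np

# Hypothetical per-stream log-likelihoods of one observed sign under each
# candidate word model. In a real multi-stream HMM these would come from
# running the position and movement HMMs separately over the observation
# sequence; here they are made-up numbers.
log_lik_position = np.array([-120.0, -95.0, -110.0])   # stream 1: hand position
log_lik_movement = np.array([-300.0, -310.0, -250.0])  # stream 2: hand movement
words = ["HELLO", "THANKS", "NAME"]                     # hypothetical lexicon

def combine_streams(ll_pos, ll_mov, w_pos=0.2, w_mov=0.8):
    """Weighted log-likelihood combination used in multi-stream HMMs:
    each stream's score is scaled by its stream weight (exponent)."""
    return w_pos * ll_pos + w_mov * ll_mov

scores = combine_streams(log_lik_position, log_lik_movement)
best = words[int(np.argmax(scores))]
print(dict(zip(words, scores)), "->", best)
```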

  13. Detecting Cognitive Impairment and Dementia in Deaf People: The British Sign Language Cognitive Screening Test.

    Science.gov (United States)

    Atkinson, Joanna; Denmark, Tanya; Marshall, Jane; Mummery, Cath; Woll, Bencie

    2015-11-01

    To provide accurate diagnostic screening of deaf people who use signed communication, cognitive tests must be devised in signed languages with normative deaf samples. This article describes the development of the first screening test for the detection of cognitive impairment and dementia in deaf signers. The British Sign Language Cognitive Screening Test uses standardized video administration to screen cognition using signed, rather than spoken or written, instructions and a large norm-referenced sample of 226 deaf older people. Percentiles are provided for clinical comparison. The tests showed good reliability, content validity, and correlation with age, intellectual ability, and education. Clinical discrimination was shown between the normative sample and 14 deaf patients with dementia. This innovative testing approach transforms the ability to detect dementia in deaf people, avoids the difficulties of using an interpreter, and enables culturally and linguistically sensitive assessment of deaf signers, with international potential for adaptation into other signed languages.

  14. Independent transmission of sign language interpreter in DVB: assessment of image compression

    Science.gov (United States)

    Zatloukal, Petr; Bernas, Martin; Dvořák, Lukáš

    2015-02-01

    Sign language on television provides information to deaf viewers that they cannot get from the audio content. If we consider the transmission of the sign language interpreter over an independent data stream, the aim is to ensure sufficient intelligibility and subjective image quality of the interpreter at a minimum bit rate. The work deals with ROI-based video compression of a Czech sign language interpreter, implemented in the x264 open-source library. The results of this approach are verified in subjective tests with deaf viewers. The tests examine the intelligibility of sign language expressions containing minimal pairs for different levels of compression and various resolutions of the image with the interpreter, and evaluate the subjective quality of the final image for a good viewing experience.
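
    The following sketch is not the authors' x264 modification; it only illustrates, under assumed frame and macroblock sizes, how a region of interest around the interpreter can be turned into a per-macroblock quantizer-offset map that an ROI-capable encoder could consume.

```python
import numpy as np

# Build a per-macroblock quantizer-offset map so the region of interest
# (the interpreter) is coded with finer quantization than the background.
WIDTH, HEIGHT, MB = 1280, 720, 16            # assumed frame and macroblock size
mb_cols, mb_rows = WIDTH // MB, HEIGHT // MB

# Hypothetical ROI rectangle around the interpreter, in macroblock units.
roi_x0, roi_y0, roi_x1, roi_y1 = 50, 10, 78, 44

qp_offsets = np.full((mb_rows, mb_cols), +6.0)   # coarser quantization outside ROI
qp_offsets[roi_y0:roi_y1, roi_x0:roi_x1] = -4.0  # finer quantization inside ROI

# An ROI-capable encoder would add these offsets to each macroblock's
# frame-level QP before rate control; here we only report the split.
print("macroblocks inside ROI:", int((qp_offsets < 0).sum()),
      "of", qp_offsets.size)
```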

  15. Review of Data Preprocessing Methods for Sign Language Recognition Systems based on Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Zorins Aleksejs

    2016-12-01

    The article presents an introductory analysis of a research topic relevant to the Latvian deaf community: the development of a Latvian Sign Language recognition system. More specifically, the data preprocessing methods are discussed and several approaches are shown, with a focus on systems based on artificial neural networks, which are among the most successful solutions for the sign language recognition task.

  16. Categorical perception of face actions: their role in sign language and in communicative facial displays.

    Science.gov (United States)

    Campbell, R; Woll, B; Benson, P J; Wallace, S B

    1999-02-01

    Can face actions that carry significance within language be perceived categorically? We used continua produced by computational morphing of face-action images to explore this question in a controlled fashion. In Experiment 1 we showed that question type, a syntactic distinction in British Sign Language (BSL), can be perceived categorically, but only when it is also identified as a question marker. A few hearing non-signers were sensitive to this distinction; among those who used sign, late sign learners were no less sensitive than early sign users. A very similar facial-display continuum between "surprise" and "puzzlement" was perceived categorically by deaf and hearing participants, irrespective of their sign experience (Experiment 2). The categorical processing of facial displays can be demonstrated for sign, but may be grounded in universally perceived distinctions between communicative face actions. Moreover, the categorical perception of facial actions is not confined to the six universal facial expressions.

  17. Unsilencing Voices: A Study of Zoo Signs and Their Language of Authority

    Science.gov (United States)

    Fogelberg, Katherine

    2014-01-01

    Zoo signs are important for informal learning, but their effect on visitor perception of animals has been sparsely studied. Other studies have established the importance of informal learning in American society; this study discusses zoo signs in the context of such learning. Through the lens of Critical Theory framed by informal learning, and by…

  18. The English-Language and Reading Achievement of a Cohort of Deaf Students Speaking and Signing Standard English: A Preliminary Study.

    Science.gov (United States)

    Nielsen, Diane Corcoran; Luetke, Barbara; McLean, Meigan; Stryker, Deborah

    2016-01-01

    Research suggests that English-language proficiency is critical if students who are deaf or hard of hearing (D/HH) are to read as their hearing peers. One explanation for the traditionally reported reading achievement plateau when students are D/HH is the inability to hear insalient English morphology. Signing Exact English can provide visual access to these features. The authors investigated the English morphological and syntactic abilities and reading achievement of elementary and middle school students at a school using simultaneously spoken and signed Standard American English facilitated by intentional listening, speech, and language strategies. A developmental trend (and no plateau) in language and reading achievement was detected; most participants demonstrated average or above-average English. Morphological awareness was prerequisite to high test scores; speech was not significantly correlated with achievement; language proficiency, measured by the Clinical Evaluation of Language Fundamentals-4 (Semel, Wiig, & Secord, 2003), predicted reading achievement.

  19. Deaf Children Attending Different School Environments: Sign Language Abilities and Theory of Mind

    Science.gov (United States)

    Tomasuolo, Elena; Valeri, Giovanni; Di Renzo, Alessio; Pasqualetti, Patrizio; Volterra, Virginia

    2013-01-01

    The present study examined whether full access to sign language as a medium for instruction could influence performance in Theory of Mind (ToM) tasks. Three groups of Italian participants (age range: 6-14 years) participated in the study: Two groups of deaf signing children and one group of hearing-speaking children. The two groups of deaf…

  20. Schoolization: An Account of the Origins of Regional Variation in British Sign Language

    Science.gov (United States)

    Quinn, Gary

    2010-01-01

    British Sign Language has a number of regional variations. This article examines the role of residential schools in the development of sign variants. Citing data collected during interviews with members of the Lancaster and Morecambe Deaf community (who of necessity attended schools elsewhere), it explores the peer-to-peer transmission of sign…

  1. Testing Comprehension Abilities in Users of British Sign Language Following Cva

    Science.gov (United States)

    Atkinson, J.; Marshall, J.; Woll, B.; Thacker, A.

    2005-01-01

    Recent imaging (e.g., MacSweeney et al., 2002) and lesion (Hickok, Love-Geffen, & Klima, 2002) studies suggest that sign language comprehension depends primarily on left hemisphere structures. However, this may not be true of all aspects of comprehension. For example, there is evidence that the processing of topographic space in sign may be…

  2. Music and Sign Language to Promote Infant and Toddler Communication and Enhance Parent-Child Interaction

    Science.gov (United States)

    Colwell, Cynthia; Memmott, Jenny; Meeker-Miller, Anne

    2014-01-01

    The purpose of this study was to determine the efficacy of using music and/or sign language to promote early communication in infants and toddlers (6-20 months) and to enhance parent-child interactions. Three groups used for this study were pairs of participants (care-giver(s) and child) assigned to each group: 1) Music Alone 2) Sign Language…

  3. Structure of the Brazilian Sign Language (Libras) for Computational Tools: Citizenship and Social Inclusion

    Science.gov (United States)

    Guimaraes, Cayley; Antunes, Diego R.; de F. Guilhermino Trindade, Daniela; da Silva, Rafaella A. Lopes; Garcia, Laura Sanchez

    This work presents a computational model (XML) of the Brazilian Sign Language (Libras), based on its phonology. The model was used to create a sample of representative signs to aid the recording of a base of videos whose aim is to support the development of tools to support genuine social inclusion of the deaf.

  4. Computer-Assisted Learning in British Sign Language

    Science.gov (United States)

    Mertzani, Maria

    2011-01-01

    The fact that language teaching can be operationalized through computer-assisted language learning (CALL) has directed researchers' attention to the learning task, which, in this case, is considered to be the unit that demands analysis of the communicative processes in which the learner is involved while working with CALL. Research focuses on…

  5. Three-dimensional grammar in the brain: Dissociating the neural correlates of natural sign language and manually coded spoken language.

    Science.gov (United States)

    Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł

    2015-05-01

    In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system, called SJM (system językowo-migowy), preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we compare, using fMRI, the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs, compared to either SJM or PJM without CCs, recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged the left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language.

  6. Asian American Youth Language Use: Perspectives across Schools and Communities

    Science.gov (United States)

    Shankar, Shalini

    2011-01-01

    Recent studies of Asian American youth language practices have presented compelling insights about the identities and migration experiences of young people of Asian descent. This article offers a detailed examination of the relationship between language use and select issues concerning Asian American youth, including social life, schooling,…

  7. EXTENSION OF HIDDEN MARKOV MODEL FOR RECOGNIZING LARGE VOCABULARY OF SIGN LANGUAGE

    Directory of Open Access Journals (Sweden)

    Maher Jebali

    2013-03-01

    Computers still have a long way to go before they can interact with users in a truly natural fashion. From a user's perspective, the most natural way to interact with a computer would be through a speech and gesture interface. Although speech recognition has made significant advances in the past ten years, gesture recognition has been lagging behind. Sign languages (SL) are the most accomplished forms of gestural communication. Their automatic analysis is therefore a real challenge, one that involves their lexical and syntactic levels of organization. Statements dealing with sign language attract significant interest in the Automatic Natural Language Processing (ANLP) domain. In this work, we deal with sign language recognition, in particular of French Sign Language (FSL). FSL has its own specificities, such as the simultaneity of several parameters, the important role of facial expression and movement, and the use of space for the proper organization of an utterance. Unlike speech, FSL events occur both sequentially and simultaneously; thus, the computational processing of FSL is more complex than that of spoken languages. We present a novel approach based on HMMs to reduce the recognition complexity.
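
    As general background to the HMM-based approach mentioned above (this is not the paper's extended model), the sketch below shows the standard forward algorithm in log space, which is the basic scoring step any HMM recognizer builds on; the two-state model and quantized observation symbols are made up for the example.

```python
import numpy as np

def forward_log_likelihood(log_pi, log_A, log_B, obs):
    """Standard HMM forward algorithm in log space.
    log_pi: (S,) initial state log-probs; log_A: (S, S) transition log-probs;
    log_B: (S, V) emission log-probs over a discrete symbol alphabet;
    obs: sequence of symbol indices. Returns log P(obs | model)."""
    alpha = log_pi + log_B[:, obs[0]]
    for t in range(1, len(obs)):
        # log-sum-exp over previous states for each current state
        alpha = log_B[:, obs[t]] + np.logaddexp.reduce(
            alpha[:, None] + log_A, axis=0)
    return np.logaddexp.reduce(alpha)

# Tiny hypothetical 2-state, 3-symbol model of a quantized gesture stream.
pi = np.log([0.6, 0.4])
A = np.log([[0.7, 0.3],
            [0.4, 0.6]])
B = np.log([[0.5, 0.4, 0.1],
            [0.1, 0.3, 0.6]])
print(forward_log_likelihood(pi, A, B, [0, 1, 2, 2]))
```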

  8. The influence of the visual modality on language structure and conventionalization: insights from sign language and gesture.

    Science.gov (United States)

    Perniss, Pamela; Özyürek, Asli; Morgan, Gary

    2015-01-01

    For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems.

  9. The signs B and B-bent in Israeli sign language according to the theory of Phonology as Human Behavior.

    Science.gov (United States)

    Fuks, Orit; Tobin, Yishai

    2008-01-01

    The purpose of the present research is to examine which of two factors, (1) the iconic-semiotic factor or (2) the human-phonetic factor, is more relevant in explaining the appearance and distribution of the hand shape B-bent in Israeli Sign Language (ISL). The B-bent shape has been the subject of much attention in sign language research, revolving around the question of its status as a phoneme. The arguments supporting the phonemic status of the B-bent hand shape have been primarily based on the semiotic opposition between the hand shape B and the hand shape B-bent. It has been claimed that in Italian Sign Language the hand shape B is perceptually distinct from the hand shape B-bent, i.e. in opposition to the general, neutral, unmarked meaning of the hand shape B, the iconic hand shape B-bent has a more narrow, specific and marked meaning: DELIMIT. The B-bent hand shape appears in spatial-temporal signs such as "a little before, ahead, postpone or behind". In these signs the iconic structure of the hand shape B-bent is utilized to mark borders in space and time. The arguments opposing the perceptual/phonemic distinction between these hand shapes are based on the human-phonetic factor, i.e. the need to reduce the effort on the part of the wrist joints in specific phonetic environments. We performed a quantitative and qualitative content analysis of the distribution of the basic units of 560 lexical signs taken from a stratified random sample from the ISL dictionary. The results were analyzed in the framework of the sign-oriented linguistic theory of the Columbia School, including the theory of Phonology as Human Behavior. Our data revealed that the B-bent hand shape, like all the "building blocks" of ISL, is a morpho-phonemic unit. We found that there is not only a phonemic distinction between hand shape B and hand shape B-bent in ISL (based on minimal pairs), but there is also a perceptual distinction between them. The qualitative analysis shows that the…

  10. Role of sign language in intellectual and social development of deaf children: Review of foreign publications

    Directory of Open Access Journals (Sweden)

    Khokhlova A. Yu.

    2014-12-01

    The article provides an overview of foreign psychological publications concerning sign language as a means of communication for deaf people. The article addresses the question of sign language's impact on cognitive development, on effective and positive interaction with parents, and on gains in academic achievement in deaf children.

  11. Assessment of Sign Language Development: The Case of Deaf Children in the Netherlands

    NARCIS (Netherlands)

    Hermans, D.; Knoors, H.E.T.; Verhoeven, L.T.W.

    2009-01-01

    In this article, we will describe the development of an assessment instrument for Sign Language of the Netherlands (SLN) for deaf children in bilingual education programs. The assessment instrument consists of nine computerized tests in which the receptive and expressive language skills of deaf children…

  12. The impact of developmental visuospatial learning difficulties on British Sign Language.

    Science.gov (United States)

    Atkinson, J R; Woll, B; Gathercole, S

    2002-01-01

    There has been substantial research interest in recent years in the relationship between the development of language and cognition, especially where dissociations can be seen between them. Williams syndrome, a rare congenital disorder characterized by a fractionation of higher cortical functions, with relatively preserved language but marked difficulties with visuospatial constructive cognition, has been extensively studied. The case of Heather, who is remarkably similar to the characteristic phenotype of Williams syndrome in physical appearance and cognitive abilities, but who is also congenitally deaf and a user of British Sign Language, provides the first opportunity to explore the consequences of specific visuospatial learning difficulties on the linguistic system when the language used is visuospatial. Heather shows a pattern of impaired drawing ability and visual form discrimination, but preserved ability to discriminate faces. She has a large vocabulary in British Sign Language, and overall presents a picture of relative competence in British Sign Language grammar. However, she shows specific deficits in those areas of British Sign Language which directly rely on spatial representations for linguistic purposes. A number of theories as to the nature of her impairments and those found in Williams syndrome are discussed, using models of the relationship between language and visuospatial cognition based on data from this unique case.

  13. Lexical Organization in Deaf Children Who Use British Sign Language: Evidence from a Semantic Fluency Task

    Science.gov (United States)

    Marshall, Chloe R.; Rowley, Katherine; Mason, Kathryn; Herman, Rosalind; Morgan, Gary

    2013-01-01

    We adapted the semantic fluency task into British Sign Language (BSL). In Study 1, we present data from twenty-two deaf signers aged four to fifteen. We show that the same "cognitive signatures" that characterize this task in spoken languages are also present in deaf children, for example, the semantic clustering of responses. In Study…

  14. A Barking Dog That Never Bites? The British Sign Language (Scotland) Bill

    Science.gov (United States)

    De Meulder, Maartje

    2015-01-01

    This article describes and analyses the pathway to the British Sign Language (Scotland) Bill and the strategies used to reach it. Data collection has been done by means of interviews with key players, analysis of official documents, and participant observation. The article discusses the bill in relation to the Gaelic Language (Scotland) Act 2005…

  15. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    Science.gov (United States)

    Patil, Sandeep Baburao; Sinha, G. R.

    2016-07-01

    In India, limited awareness of the needs of deaf and hard-of-hearing people widens the communication gap between this community and the hearing population. Sign language is developed for deaf and hard-of-hearing people to convey their message by generating different sign patterns. The scale invariant feature transform was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian Sign Language gestures. The experimental results show the time required for each phase and the number of features extracted for 26 ISL gestures.
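
    A minimal sketch of SIFT feature extraction for a single gesture image is shown below; it assumes OpenCV 4.4 or later and a hypothetical input file gesture.png, and is not tied to the paper's dataset or timing experiments.

```python
import cv2

# Load one gesture image; "gesture.png" is a hypothetical file name
# standing in for an ISL gesture frame.
image = cv2.imread("gesture.png", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise FileNotFoundError("place a gesture image at gesture.png to run this sketch")

sift = cv2.SIFT_create()                               # OpenCV >= 4.4
keypoints, descriptors = sift.detectAndCompute(image, None)

print("keypoints detected:", len(keypoints))
print("descriptor matrix shape:",
      None if descriptors is None else descriptors.shape)

# Matching two gestures of the same sign would then typically use a
# brute-force matcher with Lowe's ratio test on these 128-dimensional descriptors.
```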

  16. Automatic Mexican sign language and digits recognition using normalized central moments

    Science.gov (United States)

    Solís, Francisco; Martínez, David; Espinosa, Oscar; Toxqui, Carina

    2016-09-01

    This work presents a framework for automatic Mexican sign language and digit recognition based on a computer vision system using normalized central moments and artificial neural networks. Images are captured by a digital IP camera, with four LED reflectors and a green background, in order to reduce computational costs and avoid the use of special gloves. 42 normalized central moments are computed per frame and used in a multi-layer perceptron to recognize each database. Four versions per sign and digit were used in the training phase. Recognition rates of 93% and 95% were achieved for Mexican sign language and digits, respectively.
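
    The sketch below illustrates the general pipeline described above under stated assumptions: it computes OpenCV's seven normalized central moments (nu_pq) for a synthetic segmented silhouette and trains a small multi-layer perceptron on placeholder feature vectors; the paper's full 42-moment feature set and real image data are not reproduced here.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def normalized_central_moments(binary_image):
    """Return the seven normalized central moments (nu_pq) that OpenCV
    computes for a segmented silhouette; the paper uses a larger 42-value set."""
    m = cv2.moments(binary_image, binaryImage=True)
    keys = ["nu20", "nu11", "nu02", "nu30", "nu21", "nu12", "nu03"]
    return np.array([m[k] for k in keys])

# Demonstrate the feature on a synthetic "hand" silhouette.
silhouette = np.zeros((64, 64), dtype=np.uint8)
cv2.rectangle(silhouette, (16, 8), (48, 56), 255, thickness=-1)
print("moments of synthetic silhouette:", normalized_central_moments(silhouette))

# Placeholder training data standing in for per-frame moment vectors.
rng = np.random.default_rng(0)
X = rng.random((40, 7))                 # 40 fake feature vectors
y = np.repeat(list("ABCD"), 10)         # 4 hypothetical static signs

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("predictions on first four samples:", clf.predict(X[:4]))
```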

  17. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    Science.gov (United States)

    Patil, Sandeep Baburao; Sinha, G. R.

    2017-02-01

    In India, limited awareness of the needs of deaf and hard-of-hearing people widens the communication gap between this community and the hearing population. Sign language is developed for deaf and hard-of-hearing people to convey their message by generating different sign patterns. The scale invariant feature transform was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian Sign Language gestures. The experimental results show the time required for each phase and the number of features extracted for 26 ISL gestures.

  18. Visual sign phonology: insights into human reading and language from a natural soundless phonology.

    Science.gov (United States)

    Petitto, L A; Langdon, C; Stone, A; Andriola, D; Kartheiser, G; Cochran, C

    2016-11-01

    Among the most prevailing assumptions in science and society about the human reading process is that sound and sound-based phonology are critical to young readers. The child's sound-to-letter decoding is viewed as universal and vital to deriving meaning from print. We offer a different view. The crucial link for early reading success is not between segmental sounds and print. Instead the human brain's capacity to segment, categorize, and discern linguistic patterning makes possible the capacity to segment all languages. This biological process includes the segmentation of languages on the hands in signed languages. Exposure to natural sign language in early life equally affords the child's discovery of silent segmental units in visual sign phonology (VSP) that can also facilitate segmental decoding of print. We consider powerful biological evidence about the brain, how it builds sound and sign phonology, and why sound and sign phonology are equally important in language learning and reading. We offer a testable theoretical account, reading model, and predictions about how VSP can facilitate segmentation and mapping between print and meaning. We explain how VSP can be a powerful facilitator of all children's reading success (deaf and hearing)-an account with profound transformative impact on learning to read in deaf children with different language backgrounds. The existence of VSP has important implications for understanding core properties of all human language and reading, challenges assumptions about language and reading as being tied to sound, and provides novel insight into a remarkable biological equivalence in signed and spoken languages. WIREs Cogn Sci 2016, 7:366-381. doi: 10.1002/wcs.1404 For further resources related to this article, please visit the WIREs website.

  19. Nearest neighbour classification of Indian sign language gestures using kinect camera

    Indian Academy of Sciences (India)

    Zafar Ahmed Ansari; Gaurav Harit

    2016-02-01

    People with speech disabilities communicate in sign language and therefore have trouble mingling with the able-bodied. There is a need for an interpretation system which could act as a bridge between them and those who do not know their sign language. A functional, unobtrusive Indian sign language recognition system was implemented and tested on real-world data. A vocabulary of 140 symbols was collected using 18 subjects, totalling 5041 images. The vocabulary consisted mostly of two-handed signs drawn from a wide repertoire of words of technical and daily-use origins. The system was implemented using the Microsoft Kinect, which ensures that surrounding light conditions and object colour have a negligible effect on the efficiency of the system. The system offers a novel, low-cost and easy-to-use approach to Indian Sign Language recognition using the Microsoft Kinect camera. In the fingerspelling category of our dataset, we achieved above 90% recognition rates for 13 signs and 100% recognition for 3 signs, with 16 distinct alphabets (A, B, D, E, F, G, H, K, P, R, T, U, W, X, Y, Z) recognised at an average accuracy rate of 90.68%.
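
    As a simple, hedged illustration of the nearest-neighbour classification step (the Kinect feature extraction itself is not shown), the sketch below classifies a query feature vector against a synthetic 140-symbol training set by Euclidean distance; all features and labels are invented.

```python
import numpy as np

# Synthetic training set: 5 feature vectors per symbol for a 140-symbol vocabulary.
rng = np.random.default_rng(0)
train_X = rng.normal(size=(140 * 5, 32))
train_y = np.repeat(np.arange(140), 5)

def nearest_neighbour(query, X, y):
    """Return the label of the training sample closest in Euclidean distance."""
    distances = np.linalg.norm(X - query, axis=1)
    return y[np.argmin(distances)]

# A noisy repetition of one training sample stands in for a new gesture capture.
query = train_X[7] + rng.normal(scale=0.05, size=32)
print("predicted symbol:", nearest_neighbour(query, train_X, train_y),
      "| true symbol:", train_y[7])
```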

  20. Language Ideologies and Bilingual Education: A Korean-American Perspective

    Science.gov (United States)

    Jeon, Mihyon

    2007-01-01

    This paper is an ethnographic record of an ongoing journey during which I have tried to understand the kinds of language ideologies that my students and I have constructed about the Korean language. My students are mainly Korean-American university students who have never successfully achieved native fluency in their heritage language, although…

  1. On language acquisition in speech and sign:development drives combinatorial structure in both modalities

    Directory of Open Access Journals (Sweden)

    Gary eMorgan

    2014-11-01

    Languages are composed of a conventionalized system of parts which allow speakers and signers to compose an infinite number of form-meaning mappings through phonological and morphological combinations. This level of linguistic organization distinguishes language from other communicative acts such as gestures. In contrast to signs, gestures are made up of meaning units that are mostly holistic. Children exposed to signed and spoken languages from early in life develop grammatical structure following similar rates and patterns. This is interesting, because signed languages are perceived and articulated in very different ways from their spoken counterparts, with many signs displaying surface resemblances to gestures. The acquisition of forms and meanings in child signers and talkers might thus have been a different process. Yet in one sense both groups are faced with a similar problem: "how do I make a language with combinatorial structure?" In this paper I argue that first language development itself enables this to happen, and by broadly similar mechanisms across modalities. Combinatorial structure is the outcome of phonological simplifications and productivity in using verb morphology by children in sign and speech.

  2. Assessing language skills in adult key word signers with intellectual disabilities: Insights from sign linguistics.

    Science.gov (United States)

    Grove, Nicola; Woll, Bencie

    2017-03-01

    Manual signing is one of the most widely used approaches to support the communication and language skills of children and adults who have intellectual or developmental disabilities, and problems with communication in spoken language. A recent series of papers reporting findings from this population raises critical issues for professionals in the assessment of multimodal language skills of key word signers. Approaches to assessment will differ depending on whether key word signing (KWS) is viewed as discrete from, or related to, natural sign languages. Two available assessments from these different perspectives are compared. Procedures appropriate to the assessment of sign language production are recommended as a valuable addition to the clinician's toolkit. Sign and speech need to be viewed as multimodal, complementary communicative endeavours, rather than as polarities. Whilst narrative has been shown to be a fruitful context for eliciting language samples, assessments for adult users should be designed to suit the strengths, needs and values of adult signers with intellectual disabilities, using materials that are compatible with their life course stage rather than those designed for young children.

  3. Cross-linguistic differences in the neural representation of human language: evidence from users of signed languages.

    Science.gov (United States)

    Corina, David P; Lawyer, Laurel A; Cates, Deborah

    2012-01-01

    Studies of deaf individuals who are users of signed languages have provided profound insight into the neural representation of human language. Case studies of deaf signers who have incurred left- and right-hemisphere damage have shown that left-hemisphere resources are a necessary component of sign language processing. These data suggest that, despite frank differences in the input and output modality of language, core left perisylvian regions universally serve linguistic function. Neuroimaging studies of deaf signers have generally provided support for this claim. However, more fine-tuned studies of linguistic processing in deaf signers are beginning to show evidence of important differences in the representation of signed and spoken languages. In this paper, we provide a critical review of this literature and present compelling evidence for language-specific cortical representations in deaf signers. These data lend support to the claim that the neural representation of language may show substantive cross-linguistic differences. We discuss the theoretical implications of these findings with respect to an emerging understanding of the neurobiology of language.

  4. Cross-linguistic differences in the neural representation of human language: evidence from users of signed languages.

    Directory of Open Access Journals (Sweden)

    David eCorina

    2013-01-01

    Studies of deaf individuals who are users of signed languages have provided profound insight into the neural representation of human language. Case studies of deaf signers who have incurred left- and right-hemisphere damage have shown that left-hemisphere resources are a necessary component of sign language processing. These data suggest that, despite frank differences in the input and output modality of language, core left perisylvian regions universally serve linguistic function. Neuroimaging studies of deaf signers have generally provided support for this claim. However, more fine-tuned studies of linguistic processing in deaf signers are beginning to show evidence of important differences in the representation of signed and spoken languages. In this paper, we provide a critical review of this literature and present compelling evidence for language-specific cortical representations in deaf signers. These data lend support to the claim that the neural representation of language may show substantive cross-linguistic differences. We discuss the theoretical implications of these findings with respect to an emerging understanding of the neurobiology of language.

  5. Sign language recognition using intrinsic-mode sample entropy on sEMG and accelerometer data.

    Science.gov (United States)

    Kosmidou, Vasiliki E; Hadjileontiadis, Leontios J

    2009-12-01

    Sign language forms a communication channel among the deaf; however, automated gesture recognition could further expand their communication with hearers. In this work, data from a five-channel surface electromyogram and a 3-D accelerometer on the signer's dominant hand were analyzed using intrinsic-mode entropy (IMEn) for the automated recognition of Greek sign language (GSL) isolated signs. Discriminant analysis was used to identify the effective scales of the intrinsic-mode functions and the window length for the calculation of the IMEn that contributes to efficient classification of the GSL signs. Experimental results from the IMEn analysis, applied to GSL signs corresponding to a 60-word lexicon repeated ten times by three native signers, have shown more than 93% mean classification accuracy using IMEn as the only source of the classification feature set. This provides a promising test bed for automated GSL gesture recognition.
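
    For readers unfamiliar with the entropy measure, the sketch below computes plain sample entropy for a toy one-dimensional signal; intrinsic-mode entropy applies the same statistic to the intrinsic mode functions obtained from empirical mode decomposition, which is omitted here. The parameters m and r follow common defaults, not necessarily the paper's settings.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Plain sample entropy SampEn(m, r): the negative log of the ratio of
    (m+1)-length to m-length template matches, self-matches excluded."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    n = len(x)

    def match_count(length):
        # Use the same number of templates (n - m) for both lengths,
        # as in the standard SampEn definition.
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += int(np.sum(dist <= r)) - 1      # drop the self-match
        return count

    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Toy signal standing in for one sEMG channel of a signed word.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 8 * np.pi, 400)) + 0.1 * rng.standard_normal(400)
print("SampEn of the toy signal:", sample_entropy(signal))
```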

  6. V2S: Voice to Sign Language Translation System for Malaysian Deaf People

    Science.gov (United States)

    Mean Foong, Oi; Low, Tang Jung; La, Wai Wan

    The process of learning and understanding sign language may be cumbersome to some; therefore, this paper proposes a solution to this problem by providing a voice (English language) to sign language translation system using speech and image processing techniques. Speech processing, which includes speech recognition, is the study of recognizing the words being spoken regardless of who the speaker is. This project uses template-based recognition as the main approach, in which the V2S system first needs to be trained with speech patterns based on some generic spectral parameter set. These spectral parameter sets are then stored as templates in a database. The system performs the recognition process by matching the parameter set of the input speech with the stored templates, and finally displays the sign language in video format. Empirical results show that the system has an 80.3% recognition rate.
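
    The abstract does not spell out the exact template-matching rule, so the sketch below uses dynamic time warping as one plausible matcher between an input spectral-parameter sequence and stored word templates; the templates, feature dimensions and word list are invented for the example.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences
    (frames x coefficients), a common matcher for template-based recognition."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Hypothetical spectral templates (MFCC-like frames) for two stored words.
rng = np.random.default_rng(1)
templates = {"HELLO": rng.normal(size=(40, 13)), "THANKS": rng.normal(size=(55, 13))}

# A time-compressed, slightly noisy repetition of "THANKS" as the input utterance.
utterance = templates["THANKS"][::2] + 0.05 * rng.normal(size=(28, 13))

best_word = min(templates, key=lambda w: dtw_distance(utterance, templates[w]))
print("recognized word:", best_word, "-> play its sign-language video clip")
```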

  7. Referential shift in Nicaraguan Sign Language: a transition from lexical to spatial devices

    Science.gov (United States)

    Kocab, Annemarie; Pyers, Jennie; Senghas, Ann

    2015-01-01

    Even the simplest narratives combine multiple strands of information, integrating different characters and their actions by expressing multiple perspectives of events. We examined the emergence of referential shift devices, which indicate changes among these perspectives, in Nicaraguan Sign Language (NSL). Sign languages, like spoken languages, mark referential shift grammatically with a shift in deictic perspective. In addition, sign languages can mark the shift with a point or a movement of the body to a specified spatial location in the three-dimensional space in front of the signer, capitalizing on the spatial affordances of the manual modality. We asked whether the use of space to mark referential shift emerges early in a new sign language by comparing the first two age cohorts of deaf signers of NSL. Eight first-cohort signers and 10 second-cohort signers watched video vignettes and described them in NSL. Narratives were coded for lexical (use of words) and spatial (use of signing space) devices. Although the cohorts did not differ significantly in the number of perspectives represented, second-cohort signers used referential shift devices to explicitly mark a shift in perspective in more of their narratives. Furthermore, while there was no significant difference between cohorts in the use of non-spatial, lexical devices, there was a difference in spatial devices, with second-cohort signers using them in significantly more of their narratives. This suggests that spatial devices have only recently increased as systematic markers of referential shift. Spatial referential shift devices may have emerged more slowly because they depend on the establishment of fundamental spatial conventions in the language. While the modality of sign languages can ultimately engender the syntactic use of three-dimensional space, we propose that a language must first develop systematic spatial distinctions before harnessing space for grammatical functions. PMID:25713541

  8. Referential shift in Nicaraguan Sign Language: a transition from lexical to spatial devices.

    Science.gov (United States)

    Kocab, Annemarie; Pyers, Jennie; Senghas, Ann

    2014-01-01

    Even the simplest narratives combine multiple strands of information, integrating different characters and their actions by expressing multiple perspectives of events. We examined the emergence of referential shift devices, which indicate changes among these perspectives, in Nicaraguan Sign Language (NSL). Sign languages, like spoken languages, mark referential shift grammatically with a shift in deictic perspective. In addition, sign languages can mark the shift with a point or a movement of the body to a specified spatial location in the three-dimensional space in front of the signer, capitalizing on the spatial affordances of the manual modality. We asked whether the use of space to mark referential shift emerges early in a new sign language by comparing the first two age cohorts of deaf signers of NSL. Eight first-cohort signers and 10 second-cohort signers watched video vignettes and described them in NSL. Narratives were coded for lexical (use of words) and spatial (use of signing space) devices. Although the cohorts did not differ significantly in the number of perspectives represented, second-cohort signers used referential shift devices to explicitly mark a shift in perspective in more of their narratives. Furthermore, while there was no significant difference between cohorts in the use of non-spatial, lexical devices, there was a difference in spatial devices, with second-cohort signers using them in significantly more of their narratives. This suggests that spatial devices have only recently increased as systematic markers of referential shift. Spatial referential shift devices may have emerged more slowly because they depend on the establishment of fundamental spatial conventions in the language. While the modality of sign languages can ultimately engender the syntactic use of three-dimensional space, we propose that a language must first develop systematic spatial distinctions before harnessing space for grammatical functions.

  9. Health care accessibility and the role of sign language interpreters

    NARCIS (Netherlands)

    B. van den Bogaerde; R. de Lange

    2014-01-01

    In healthcare, the accuracy of interpretation is the most critical component of safe and effective communication between providers and patients in medical settings characterized by language and cultural barriers. Although medical education should prepare healthcare providers for common issues they w

  10. The Importance of Early Sign Language Acquisition for Deaf Readers

    Science.gov (United States)

    Clark, M. Diane; Hauser, Peter C.; Miller, Paul; Kargin, Tevhide; Rathmann, Christian; Guldenoglu, Birkan; Kubus, Okan; Spurgeon, Erin; Israel, Erica

    2016-01-01

    Researchers have used various theories to explain deaf individuals' reading skills, including the dual route reading theory, the orthographic depth theory, and the early language access theory. This study tested 4 groups of children--hearing with dyslexia, hearing without dyslexia, deaf early signers, and deaf late signers (N = 857)--from 4…

  11. Where Can You See Language Contact between English and British Sign Language? The Use of the Manual Alphabet in Place-Names and BSL.

    Science.gov (United States)

    Sutton-Spence, Rachel

    Just as minority spoken languages borrow from surrounding majority languages, so British Sign Language (BSL) borrows signs from English. BSL may borrow from both spoken and written English, but here we focus on the processes involved in borrowing from the written English word, using the manual alphabet. The end result of borrowing depends on an…

  12. Toward the Ideal Signing Avatar

    Directory of Open Access Journals (Sweden)

    Nicoletta Adamo-Villani

    2016-06-01

    The paper discusses ongoing research on the effects of a signing avatar's modeling/rendering features on the perception of sign language animation. It reports a recent study that aimed to determine whether a character's visual style has an effect on how signing animated characters are perceived by viewers. The stimuli of the study were two polygonal characters presenting two different visual styles: stylized and realistic. Each character signed four sentences. Forty-seven participants with experience in American Sign Language (ASL) viewed the animated signing clips in random order via a web survey. They (1) identified the signed sentences (if recognizable), (2) rated their legibility, and (3) rated the appeal of the signing avatar. Findings show that while the character's visual style does not have an effect on subjects' perceived legibility of the signs and sign recognition, it has an effect on subjects' interest in the character. The stylized signing avatar was perceived as more appealing than the realistic one.

  13. A Topological derivative based image segmentation for sign language recognition system using isotropic filter

    CERN Document Server

    Krishnaveni, M

    2010-01-01

    The need for sign language is increasing radically, especially in the hearing-impaired community. Only a few research groups try to automatically recognize sign language from video, colored gloves, etc. Their approach requires a valid segmentation of the data that is used for training and of the data that is to be recognized. Recognition of a sign language image sequence is challenging because of the variety of hand shapes and hand motions. This paper proposes to apply a combination of image segmentation and restoration using topological derivatives to achieve high recognition accuracy. Image quality measures are considered here to differentiate the methods both subjectively and objectively. Experiments show that the additional use of restoration before segmenting the postures significantly improves the correct rate of hand detection, and that the discrete derivatives yield a high rate of discrimination between different static hand postures as well as between hand postures and the scene b...

  14. BILINGUAL MULTIMODAL SYSTEM FOR TEXT-TO-AUDIOVISUAL SPEECH AND SIGN LANGUAGE SYNTHESIS

    Directory of Open Access Journals (Sweden)

    A. A. Karpov

    2014-09-01

    We present a conceptual model, architecture and software of a multimodal system for audio-visual speech and sign language synthesis from input text. The main components of the developed multimodal synthesis system (signing avatar) are: an automatic text processor for input text analysis; a simulated 3D model of a human head; a computer text-to-speech synthesizer; a system for audio-visual speech synthesis; a simulated 3D model of human hands and upper body; and a multimodal user interface integrating all the components for the generation of audio, visual and signed speech. The proposed system performs automatic translation of input textual information into speech (audio information) and gestures (video information), fuses the information, and outputs it in the form of multimedia information. A user can input any grammatically correct text in Russian or Czech to the system; it is analyzed by the text processor to detect sentences, words and characters. Then this textual information is converted into symbols of the sign language notation. We apply the international «Hamburg Notation System» (HamNoSys), which describes the main differential features of each manual sign: hand shape, hand orientation, place and type of movement. On their basis the 3D signing avatar displays the elements of the sign language. The virtual 3D model of the human head and upper body has been created using the VRML virtual reality modeling language, and it is controlled by software based on the OpenGL graphics library. The developed multimodal synthesis system is universal, since it is oriented towards both regular users and disabled people (in particular, the hard-of-hearing and visually impaired), and it serves for multimedia output (by audio and visual modalities) of input textual information.

  15. Depictions and minifiction: a reflection on translation of micro-story as didactics of sign language interpreters training in colombia.

    OpenAIRE

    Alex Giovanny Barreto; Román Santiago Artunduaga

    2015-01-01

    The article presents reflections on a methodological translation-practice approach to sign language interpreter education focused on communicative competence. The experience of implementing the translation-practice approach started in several workshops of the Association of Translators and Interpreters of Sign Language of Colombia (ANISCOL) and has now been formalized in the bachelor of education degree project in signed languages, developed within the Research Group UMBRAL from the National Open University and Distanc...

  16. What You Don't Know Can Hurt You: The Risk of Language Deprivation by Impairing Sign Language Development in Deaf Children.

    Science.gov (United States)

    Hall, Wyatte C

    2017-02-09

    A long-standing belief is that sign language interferes with spoken language development in deaf children, despite a chronic lack of evidence supporting this belief. This deserves discussion as poor life outcomes continue to be seen in the deaf population. This commentary synthesizes research outcomes with signing and non-signing children and highlights fully accessible language as a protective factor for healthy development. Brain changes associated with language deprivation may be misrepresented as sign language interfering with spoken language outcomes of cochlear implants. This may lead to professionals and organizations advocating for preventing sign language exposure before implantation and spreading misinformation. The existence of a single, time-sensitive language acquisition window means a strong possibility of permanent brain changes when spoken language is not fully accessible to the deaf child and sign language exposure is delayed, as is often standard practice. There is no empirical evidence for the harm of sign language exposure but there is some evidence for its benefits, and there is growing evidence that lack of language access has negative implications. This includes cognitive delays, mental health difficulties, lower quality of life, higher trauma, and limited health literacy. Claims of cochlear implant- and spoken language-only approaches being more effective than sign language-inclusive approaches are not empirically supported. Cochlear implants are an unreliable standalone first-language intervention for deaf children. Priorities of deaf child development should focus on healthy growth of all developmental domains through a fully accessible first language foundation such as sign language, rather than auditory deprivation and speech skills.

  17. Deaf children attending different school environments: sign language abilities and theory of mind.

    Science.gov (United States)

    Tomasuolo, Elena; Valeri, Giovanni; Di Renzo, Alessio; Pasqualetti, Patrizio; Volterra, Virginia

    2013-01-01

    The present study examined whether full access to sign language as a medium for instruction could influence performance in Theory of Mind (ToM) tasks. Three groups of Italian participants (age range: 6-14 years) participated in the study: Two groups of deaf signing children and one group of hearing-speaking children. The two groups of deaf children differed only in their school environment: One group attended a school with a teaching assistant (TA; Sign Language is offered only by the TA to a single deaf child), and the other group attended a bilingual program (Italian Sign Language and Italian). Linguistic abilities and understanding of false belief were assessed using similar materials and procedures in spoken Italian with hearing children and in Italian Sign Language with deaf children. Deaf children attending the bilingual school performed significantly better than deaf children attending school with the TA in tasks assessing lexical comprehension and ToM, whereas the performance of hearing children was in between that of the two deaf groups. As for lexical production, deaf children attending the bilingual school performed significantly better than the two other groups. No significant differences were found between early and late signers or between children with deaf and hearing parents.

  18. Sign Language Recognition System using Neural Network for Digital Hardware Implementation

    Science.gov (United States)

    Vargas, Lorena P.; Barba, Leiner; Torres, C. O.; Mattos, L.

    2011-01-01

    This work presents an image pattern recognition system using a neural network for the identification of sign language for deaf people. The system has several stored images that show the specific symbols of this kind of language, which are employed to train a multilayer neural network using a back-propagation algorithm. Initially, the images are processed to adapt them and to improve the discrimination performance of the network; this processing includes filtering, reduction and noise-elimination algorithms as well as edge detection. The system is evaluated using signs that do not include movement in their representation.
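
    A toy back-propagation network in the spirit of the system described above is sketched below; the "images" are random stand-in feature vectors and the four sign classes are hypothetical, so this only illustrates the training loop, not the paper's preprocessing or results.

```python
import numpy as np

# Toy single-hidden-layer network trained with back-propagation.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))                 # 200 stand-in "images", 64 features each
labels = rng.integers(0, 4, size=200)          # 4 hypothetical static signs
Y = np.eye(4)[labels]                          # one-hot targets

W1 = rng.normal(scale=0.1, size=(64, 32))
b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 4))
b2 = np.zeros(4)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(500):
    # forward pass
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    # backward pass: squared-error loss with sigmoid derivatives
    d_out = (output - Y) * output * (1.0 - output)
    d_hid = (d_out @ W2.T) * hidden * (1.0 - hidden)
    W2 -= lr * hidden.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_hid / len(X)
    b1 -= lr * d_hid.mean(axis=0)

print("training accuracy:", (output.argmax(axis=1) == labels).mean())
```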

  19. Translation and interpretation of sign language in the postgraduate context: problematizing positions

    Directory of Open Access Journals (Sweden)

    Luiz Daniel Rodrigues Dinarte

    2015-12-01

    This article aims, based on sign language translation research and drawing on contemporary theories of the concept of "deconstruction" (DERRIDA, 2004; DERRIDA & ROUDINESCO, 2004; ARROJO, 1993), to reflect on some aspects concerning the definition of the role and duties of translators and interpreters. We conceive that deconstruction does not consist in a method to be applied to linguistic and social phenomena, but in a set of political strategies that come from a speech community which translates texts and thus takes on a translational task, performing an act of reading that inserts sign language into the academic linguistic multiplicity.

  20. The use of graphic representations of sign language in leveled texts to support deaf readers.

    Science.gov (United States)

    Hoffman, Mary; Wang, Ye

    2010-01-01

    The study considered whether adding sign language graphics to the books being used for reading instruction in a first-grade classroom would promote the literacy development of students who are deaf or hard of hearing. The researchers also sought to discover whether materials existed to put the process of modifying leveled texts within the reach of the typical classroom teacher, in terms of cost and procedure. Students' reading behaviors seemed to indicate that the presence of sign graphics supported their development as readers. The materials needed to create sign support for the English print in the leveled books were commercially available.

  1. Mexican sign language recognition using normalized moments and artificial neural networks

    Science.gov (United States)

    Solís-V., J.-Francisco; Toxqui-Quitl, Carina; Martínez-Martínez, David; H.-G., Margarita

    2014-09-01

    This work presents a framework designed for Mexican Sign Language (MSL) recognition. A data set of 24 static signs from the MSL was recorded, with 5 different versions of each; this MSL dataset was captured using a digital camera under incoherent light conditions. Digital image processing was used to segment the hand gestures; a uniform background was selected to avoid the use of gloved hands or special markers. Feature extraction was performed by calculating normalized geometric moments of gray-scaled signs, and an Artificial Neural Network then performs the recognition, evaluated with 10-fold cross-validation in Weka; the best result achieved a 95.83% recognition rate.
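
    A hedged sketch of the kind of pipeline described here (moment-based features plus a neural network evaluated with 10-fold cross-validation): the original work computes normalized geometric moments and runs the classifier in Weka, whereas the snippet below substitutes OpenCV's Hu moments and scikit-learn, and the dataset layout is assumed.

      # Sketch of moment-based features with 10-fold cross-validation.
      # Hu moments (OpenCV) and scikit-learn stand in for the paper's
      # normalized geometric moments and Weka; dataset layout is hypothetical.
      import cv2
      import numpy as np
      from glob import glob
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score

      def moment_features(gray_img):
          """Normalized central moments summarized as the 7 Hu invariants."""
          m = cv2.moments(gray_img)
          hu = cv2.HuMoments(m).ravel()
          # log-scale the invariants for numerical stability
          return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

      # Hypothetical layout: msl_dataset/<label>/<image>.png
      image_paths = sorted(glob("msl_dataset/*/*.png"))
      labels = [p.split("/")[-2] for p in image_paths]
      X = np.array([moment_features(cv2.imread(p, cv2.IMREAD_GRAYSCALE)) for p in image_paths])
      y = np.array(labels)

      scores = cross_val_score(MLPClassifier(max_iter=1000), X, y, cv=10)
      print("10-fold accuracy: %.2f%%" % (100 * scores.mean()))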

  2. First Language Acquisition Differs from Second Language Acquisition in Prelingually Deaf Signers: Evidence from Sensitivity to Grammaticality Judgement in British Sign Language

    Science.gov (United States)

    Cormier, Kearsy; Schembri, Adam; Vinson, David; Orfanidou, Eleni

    2012-01-01

    Age of acquisition (AoA) effects have been used to support the notion of a critical period for first language acquisition. In this study, we examine AoA effects in deaf British Sign Language (BSL) users via a grammaticality judgment task. When English reading performance and nonverbal IQ are factored out, results show that accuracy of…

  3. Bilingualism and Language Contact: Spanish, English, and Native American Languages. Bilingual Education Series.

    Science.gov (United States)

    Barkin, Florence, Ed.; And Others

    Spanish, English, and American Indian languages in the southwestern United States and northern Mexico and bilingualism and language contact in the region are addressed in a collection of articles. Approaches to research in the languages of this region are discussed in articles by Valdes, Lope Blanch, and Brandt. Cultural and sociolinguistic…

  4. Application of demand-control theory to sign language interpreting: implications for stress and interpreter training.

    Science.gov (United States)

    Dean, R K; Pollard, R Q

    2001-01-01

    The translation work of sign language interpreters involves much more than language. The characteristics and goings-on in the physical environment, the dynamics and interactions between the people who are present, and even the "inner noise" of the interpreter contribute to the accuracy, or lack thereof, of the resulting translation. The competent interpreter must understand and respond appropriately to the language and nonlanguage aspects of each interpreting assignment. We use the framework of demand-control theory (Karasek, 1979) to examine the complex occupation of sign language interpreting. Demand-control theory is a job analysis method useful in studies of occupational stress and reduction of stress-related illness, injury, and burnout. We describe sources of demand in the interpreting profession, including demands that arise from factors other than those associated with languages (linguistic demands). These include environmental, interpersonal, and intrapersonal demands. Karasek's concept of control, or decision latitude, is also explored in relation to the interpreting profession. We discuss the prevalence of cumulative trauma disorders (CTD), turnover, and burnout in the interpreting profession in light of demand-control theory and data from interpreter surveys, including a new survey study described herein. We conclude that nonlinguistic demand factors in particular and perceived restrictions in decision latitude likely contribute to stress, CTD, burnout, and the resulting shortage of sign language interpreters. We make suggestions for improvements in interpreter education and professional development, including the institution of an advanced, supervised professional training period, modeled after internships common in other high demand professional occupations.

  5. Variation in handshape and orientation in British Sign Language: The case of the ‘1’ hand configuration

    Science.gov (United States)

    Fenlon, Jordan; Schembri, Adam; Rentelis, Ramas; Cormier, Kearsy

    2013-01-01

    This paper investigates phonological variation in British Sign Language (BSL) signs produced with a ‘1’ hand configuration in citation form. Multivariate analyses of 2084 tokens reveal that handshape variation in these signs is constrained by linguistic factors (e.g., the preceding and following phonological environment, grammatical category, indexicality, lexical frequency). The only significant social factor was region. For the subset of signs where orientation was also investigated, only grammatical function was important (the surrounding phonological environment and social factors were not significant). The implications for an understanding of pointing signs in signed languages are discussed. PMID:23805018

  6. Prior knowledge of deaf students fluent in brazilian sign languages regarding the algebraic language in high school

    Directory of Open Access Journals (Sweden)

    Silvia Teresinha Frizzarini

    2014-06-01

    Full Text Available There is little research offering deeper reflection on the study of algebra with deaf students. In order to validate and disseminate educational activities in that context, this article aims at highlighting the prior knowledge of deaf students, fluent in Brazilian Sign Language, regarding the algebraic language used in high school. The theoretical framework used was Duval's theory, with analysis of the changes, by treatment and conversion, of different registers of semiotic representation, in particular inequalities. The methodology used was the application of a diagnostic evaluation performed with deaf students, all fluent in Brazilian Sign Language, in a special school located in the north of Paraná State. We emphasize the need to work in both directions of conversion, in different languages, especially when the starting register is the graphic one. Therefore, the conclusion reached was that one should not separate the algebraic representation from other registers, because sign language needs to perform not only the communication function, but also the functions of objectification and treatment, which are fundamental in cognitive development.

  7. Unit 1101: Language Varies by Place: American English.

    Science.gov (United States)

    Minnesota Univ., Minneapolis. Center for Curriculum Development in English.

    This 11th-grade language unit focuses on dialectology, the regional variations of American English, and the causes for the differences and similarities in language usage in the United States. Issues surveyed in the unit are (1) the historical basis for dialect differences from the time of the early colonists, (2) current speech characteristics of…

  8. The Impact of Input Quality on Early Sign Development in Native and Non-Native Language Learners

    Science.gov (United States)

    Lu, Jenny; Jones, Anna; Morgan, Gary

    2016-01-01

    There is debate about how input variation influences child language. Most deaf children are exposed to a sign language from their non-fluent hearing parents and experience a delay in exposure to accessible language. A small number of children receive language input from their deaf parents who are fluent signers. Thus it is possible to document the…

  9. Advances to the development of a basic Mexican sign-to-speech and text language translator

    Science.gov (United States)

    Garcia-Bautista, G.; Trujillo-Romero, F.; Diaz-Gonzalez, G.

    2016-09-01

    Sign Language (SL) is the basic alternative communication method for deaf people. However, most hearing people have trouble understanding SL, making communication with deaf people almost impossible and keeping them apart from daily activities. In this work we present an automatic, basic, real-time sign language translator capable of recognizing a basic list of Mexican Sign Language (MSL) signs comprising 10 meaningful words, the letters (A-Z) and the numbers (1-10), and translating them into speech and text. The signs were collected from a group of 35 MSL signers and executed in front of a Microsoft Kinect™ sensor. The hand gesture recognition system uses the RGB-D camera to build and store point clouds, color and skeleton-tracking information. We propose a method to obtain representative hand trajectory pattern information; we use the Euclidean segmentation method to obtain the hand shape and the Hierarchical Centroid as the feature extraction method for images of numbers and letters. A pattern recognition method based on a back-propagation Artificial Neural Network (ANN) is used to interpret the hand gestures. Finally, we use the K-fold cross-validation method for the training and testing stages. Our results achieve an accuracy of 95.71% on words, 98.57% on numbers and 79.71% on letters. In addition, an interactive user interface was designed to present the results in voice and text format.
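
    The Hierarchical Centroid feature mentioned above is commonly formulated as a recursive split of the binarized image at its centroid, recording the centroid positions at each depth. The sketch below is an illustrative simplification under that assumption, not the authors' exact implementation; the mask shape and recursion depth are arbitrary.

      # Illustrative simplification of a hierarchical-centroid style descriptor:
      # recursively split the binarized hand image at its centroid column and
      # record the (normalized) centroid positions at each depth.
      import numpy as np

      def centroid_x(mask):
          ys, xs = np.nonzero(mask)
          return int(xs.mean()) if xs.size else mask.shape[1] // 2

      def hierarchical_centroid(mask, depth=4):
          """Return a fixed-length feature vector of 2**depth - 1 centroid columns."""
          feats = []
          segments = [mask]
          for _ in range(depth):
              next_segments = []
              for seg in segments:
                  cx = centroid_x(seg)
                  feats.append(cx / max(seg.shape[1], 1))   # normalize by segment width
                  next_segments += [seg[:, :cx], seg[:, cx:]]
              segments = next_segments
          return np.array(feats)

      # Example with a dummy 64x64 binary hand mask
      mask = np.zeros((64, 64), dtype=np.uint8)
      mask[16:48, 20:44] = 1
      print(hierarchical_centroid(mask, depth=3))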

  10. The Changing Context for Sign Bilingual Education Programs: Issues in Language and the Development of Literacy

    Science.gov (United States)

    Mayer, Connie; Leigh, Greg

    2010-01-01

    The widespread implementation of newborn hearing screening and advances in amplification technologies (including cochlear implants) have fundamentally changed the educational landscape for deaf learners. These changes are discussed in terms of their impact on sign bilingual education programs with a focus on the relationships between language and…

  11. Sign Language Users' Education and Employment Levels: Keeping Pace with Changes in the General Australian Population?

    Science.gov (United States)

    Willoughby, Louisa

    2011-01-01

    This article draws on data from the 2006 Australian census to explore the education and employment outcomes of sign languages users living in Victoria, Australia, and to compare them with outcomes reported in the general population. Census data have the advantage of sampling the entire population on the one night, avoiding problems of population…

  12. Hand in Hand--Progression Routes in British Sign Language in the KS5 Curriculum

    Science.gov (United States)

    Harnisch, Henriette; Smith, Craig; Dekesel, Kristiaan

    2004-01-01

    This article discusses the development of the Hand in Hand curriculum of the Black Country Pathfinder Networks for Excellence program. The purpose of the project is to create a British Sign Language (BSL) course suitable for delivery in the mainstream curriculum. The first phase focuses primarily on the creation of an outline of the project and a…

  13. Learning & Using British Sign Language: Current Skills & Training of Hearing Professionals.

    Science.gov (United States)

    Kyle, James G.; And Others

    1981-01-01

    Two aspects of a study of the problems hearing people have in acquiring British Sign Language (BSL) are described: (1) the measurement of current skills in BSL of professionals in the field, and (2) current training programs in BSL in the United Kingdom and results of some controlled teaching situations. (Author/PJM)

  14. Do Signers Understand Regional Varieties of a Sign Language? A Lexical Recognition Experiment

    Science.gov (United States)

    Stamp, Rose

    2016-01-01

    The degree of mutual intelligibility of British Sign Language (BSL) regional varieties has been a subject of some debate. Recent research in which dyads of signers from contrasting regional backgrounds engaged in a conversational task showed no problems understanding one another. The present study investigated signers' knowledge of different BSL…

  15. Sign Language as Medium of Instruction in Botswana Primary Schools: Voices from the Field

    Science.gov (United States)

    Mpuang, Kerileng D.; Mukhopadhyay, Sourav; Malatsi, Nelly

    2015-01-01

    This descriptive phenomenological study investigates teachers' experiences of using sign language for learners who are deaf in the primary schools in Botswana. Eight in-service teachers who have had more than ten years of teaching deaf or hard of hearing (DHH) learners were purposively selected for this study. Data were collected using multiple…

  16. Areas Recruited during Action Understanding Are Not Modulated by Auditory or Sign Language Experience.

    Science.gov (United States)

    Fang, Yuxing; Chen, Quanjing; Lingnau, Angelika; Han, Zaizhu; Bi, Yanchao

    2016-01-01

    The observation of other people's actions recruits a network of areas including the inferior frontal gyrus (IFG), the inferior parietal lobule (IPL), and posterior middle temporal gyrus (pMTG). These regions have been shown to be activated through both visual and auditory inputs. Intriguingly, previous studies found no engagement of IFG and IPL for deaf participants during non-linguistic action observation, leading to the proposal that auditory experience or sign language usage might shape the functionality of these areas. To understand which variables induce plastic changes in areas recruited during the processing of other people's actions, we examined the effects of tasks (action understanding and passive viewing) and effectors (arm actions vs. leg actions), as well as sign language experience in a group of 12 congenitally deaf signers and 13 hearing participants. In Experiment 1, we found a stronger activation during an action recognition task in comparison to a low-level visual control task in IFG, IPL and pMTG in both deaf signers and hearing individuals, but no effect of auditory or sign language experience. In Experiment 2, we replicated the results of the first experiment using a passive viewing task. Together, our results provide robust evidence demonstrating that the response obtained in IFG, IPL, and pMTG during action recognition and passive viewing is not affected by auditory or sign language experience, adding further support for the supra-modal nature of these regions.

  17. Determining aspects of text difficulty for the Sign Language of the Netherlands (NGT) Functional Assessment instrument

    NARCIS (Netherlands)

    Broek, Annieck van den; Boers-Visker, Eveline; Bogaerde, Beppie van den

    2014-01-01

    In this paper we describe our work in progress on the development of a set of criteria to predict text difficulty in Sign Language of the Netherlands (NGT). These texts are used in a four-year bachelor program, which is being brought in line with the Common European Framework of Reference for Languages.

  18. Sign-supported Dutch in children with severe speech and language impairments: A multiple case study

    NARCIS (Netherlands)

    Wijkamp, I.; Gerritsen, B.; Bonder, F.; Haisma, H.H.; van der Schans, C.P

    2010-01-01

    In the Netherlands, many educators and care providers working at special schools for children with severe speech and language impairments (SSLI) use sign-supported Dutch (SSD) to facilitate communication. Anecdotal experiences suggest positive results, but empirical evidence is lacking. In this multiple case study…

  19. Complexity constrained rate-distortion optimization of sign language video using an objective intelligibility metric

    Science.gov (United States)

    Ciaramello, Frank M.; Hemami, Sheila S.

    2008-01-01

    Sign language users are eager for the freedom and convenience of video communication over cellular devices. Compression of sign language video in this setting offers unique challenges. The low bitrates available make encoding decisions extremely important, while the power constraints of the device limit the encoder complexity. The ultimate goal is to maximize the intelligibility of the conversation given the rate-constrained cellular channel and power-constrained encoding device. This paper uses an objective measure of intelligibility, based on subjective testing with members of the Deaf community, for rate-distortion optimization of sign language video within the H.264 framework. Performance bounds are established by using the intelligibility metric in a Lagrangian cost function along with a trellis search to make optimal mode and quantizer decisions for each macroblock. The optimal QP values are analyzed and the unique structure of sign language is exploited in order to reduce complexity by three orders of magnitude relative to the trellis search technique with no loss in rate-distortion performance. Further reductions in complexity are made by eliminating rarely occurring modes in the encoding process. The low-complexity SL optimization technique increases the measured intelligibility by up to 3.5 dB at fixed rates, and reduces rate by as much as 60% at fixed levels of intelligibility with respect to a rate control algorithm designed for aesthetic distortion as measured by MSE.
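
    The mode and quantizer decision described above follows the usual Lagrangian formulation, choosing the candidate that minimizes J = D + λR, here with D taken from the intelligibility metric. The toy sketch below illustrates that decision rule only; the candidate list, distortion values and λ are hypothetical placeholders, not the paper's encoder.

      # Toy sketch of Lagrangian mode selection: for each macroblock, pick the
      # (mode, QP) pair minimizing J = D + lambda * R. The distortion and rate
      # values here are hypothetical placeholders for the intelligibility-based
      # distortion and the H.264 rate model.
      def select_mode(candidates, lam):
          """candidates: iterable of (mode, qp, distortion, rate_bits)."""
          best = None
          for mode, qp, dist, rate in candidates:
              cost = dist + lam * rate          # Lagrangian cost J = D + lambda*R
              if best is None or cost < best[0]:
                  best = (cost, mode, qp)
          return best[1], best[2]

      # Hypothetical candidate evaluations for one macroblock
      candidates = [
          ("intra16x16", 28, 4.1, 120),
          ("inter16x16", 30, 5.0,  60),
          ("skip",       30, 7.5,   8),
      ]
      print(select_mode(candidates, lam=0.03))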

  20. Where "Sign Language Studies" Has Led Us in Forty Years: Opening High School and University Education for Deaf People in Viet Nam through Sign Language Analysis, Teaching, and Interpretation

    Science.gov (United States)

    Woodward, James; Hoa, Nguyen Thi

    2012-01-01

    This paper discusses how the Nippon Foundation-funded project "Opening University Education to Deaf People in Viet Nam through Sign Language Analysis, Teaching, and Interpretation," also known as the Dong Nai Deaf Education Project, has been implemented through sign language studies from 2000 through 2012. This project has provided deaf…

  1. Kinect-based sign language recognition of static and dynamic hand movements

    Science.gov (United States)

    Dalawis, Rando C.; Olayao, Kenneth Deniel R.; Ramos, Evan Geoffrey I.; Samonte, Mary Jane C.

    2017-02-01

    A different approach to sign language recognition of static and dynamic hand movements was developed in this study using a normalized correlation algorithm. The goal of this research was to translate fingerspelling sign language into text using MATLAB and Microsoft Kinect. Digital input images captured by the Kinect device are matched against template samples stored in a database. This Human Computer Interaction (HCI) prototype was developed to help people with communication disabilities express their thoughts with ease. Frame segmentation and feature extraction were used to give meaning to the captured images. Sequential and random testing were used to test both static and dynamic fingerspelling gestures. The researchers also discuss some factors they encountered that caused misclassification of signs.
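
    The matching step described (normalized correlation between a captured frame and stored templates) can be sketched as follows. The original system was built in MATLAB with Kinect input; the snippet below is a hedged Python illustration using OpenCV's matchTemplate, with hypothetical template and frame files.

      # Minimal sketch of normalized-correlation template matching for
      # fingerspelling frames. Template and frame files are hypothetical.
      import cv2
      import numpy as np

      def best_match(frame_gray, templates):
          """templates: dict label -> grayscale template (no larger than the frame)."""
          scores = {}
          for label, tmpl in templates.items():
              res = cv2.matchTemplate(frame_gray, tmpl, cv2.TM_CCORR_NORMED)
              scores[label] = float(res.max())        # best normalized correlation
          return max(scores, key=scores.get), scores

      templates = {c: cv2.imread(f"templates/{c}.png", cv2.IMREAD_GRAYSCALE)
                   for c in "ABC"}                          # hypothetical template files
      frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input frame
      label, scores = best_match(frame, templates)
      print("recognized:", label)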

  2. Evaluation of language and communication skills in adult key word signing users with intellectual disability: advantages of a narrative task.

    Science.gov (United States)

    Meuris, Kristien; Maes, Bea; Zink, Inge

    2014-10-01

    The evaluation of language and communication skills in adults who use augmentative and alternative communication (AAC) in general and key word signing (KWS) in particular, can be an elaborate task. Besides being time-consuming and not very similar to natural communication, standard language tests often do not take AAC or KWS into account. Therefore, we developed a narrative task specifically for adults with intellectual disability (ID) who use KWS. The task was evaluated in a group of 40 adult KWS users. Outcome measures on the narrative task correlated significantly with measures of standard language and communication tests for verbal language, but not for use of manual signs. All narrative measures, for both verbal language and manual signing, correlated highly with similar measures from a conversation sample. The developed narrative task proved useful and valid to evaluate the language and communication skills of adults with ID taking into account both their verbal language and manual sign use.

  3. Language lateralization of hearing native signers: A functional transcranial Doppler sonography (fTCD) study of speech and sign production.

    Science.gov (United States)

    Gutierrez-Sigut, Eva; Daws, Richard; Payne, Heather; Blott, Jonathan; Marshall, Chloë; MacSweeney, Mairéad

    2015-12-01

    Neuroimaging studies suggest greater involvement of the left parietal lobe in sign language compared to speech production. This stronger activation might be linked to the specific demands of sign encoding and proprioceptive monitoring. In Experiment 1 we investigate hemispheric lateralization during sign and speech generation in hearing native users of English and British Sign Language (BSL). Participants exhibited stronger lateralization during BSL than English production. In Experiment 2 we investigated whether this increased lateralization index could be due exclusively to the higher motoric demands of sign production. Sign naïve participants performed a phonological fluency task in English and a non-sign repetition task. Participants were left lateralized in the phonological fluency task but there was no consistent pattern of lateralization for the non-sign repetition in these hearing non-signers. The current data demonstrate stronger left hemisphere lateralization for producing signs than speech, which was not primarily driven by motoric articulatory demands.

  4. Evidence for Website Claims about the Benefits of Teaching Sign Language to Infants and Toddlers with Normal Hearing

    Science.gov (United States)

    Nelson, Lauri H.; White, Karl R.; Grewe, Jennifer

    2012-01-01

    The development of proficient communication skills in infants and toddlers is an important component to child development. A popular trend gaining national media attention is teaching sign language to babies with normal hearing whose parents also have normal hearing. Thirty-three websites were identified that advocate sign language for hearing…

  5. A Case Study of Two Sign Language Interpreters Working in Post-Secondary Education in New Zealand

    Science.gov (United States)

    Powell, Denise

    2013-01-01

    A case study of two qualified New Zealand Sign Language interpreters working in a post-secondary education setting in New Zealand was undertaken using both qualitative and quantitative methods. Educational sign language interpreting at the post-secondary level requires a different set of skills and is a reasonably new development in New Zealand.…

  6. Development of communication and speech skills after cochlear implant in a sign language child.

    Science.gov (United States)

    Cassandro, E; Nicastri, M; Chiarella, G; Genovese, E; Gallo, L V; Catalano, M

    2003-04-01

    In selecting patients to undergo cochlear implantation, pre-existing use of sign language gives rise to two problems that have been widely debated in the literature. The first is the caution shown toward the candidacy of patients using this mode of communication, since it is considered a possible source of interference in the acquisition of speech. The second is refusal of the cochlear implant procedure on the part of the deaf community, on the grounds both of cultural identity and of it being more "natural" for a deaf person to use an unimpaired visual channel rather than an impaired hearing channel. In order to establish whether knowledge of sign language does, indeed, affect speech production negatively, and to evaluate which mode of communication, oral or gestural, is preferred, the present investigation was carried out on a preverbal deaf child who had undergone cochlear implantation at about 7 years of age and has always used both languages. His verbal skills were evaluated before cochlear implantation and at 6 and 12 months afterwards, together with the changes in his use of sign language and in the relationship between the two modes. The results, besides showing linguistic evolution at each level examined, already evident at 6 months, also documented a progressive reduction in the spontaneous use of sign language. In conclusion, the present experience revealed no temporal or qualitative differences in the post-cochlear implant evolution of speech skills in comparison with that observed in patients with an exclusively aural-oral approach. Furthermore, the increased use of the hearing pathway made possible by the cochlear implant led to a spontaneous choice of verbal language as the most natural and economical mode of communication.

  7. Engaging the Discourse of International Language Recognition through ISO 639-3 Signed Language Change Requests

    Science.gov (United States)

    Parks, Elizabeth

    2015-01-01

    Linguistic ideologies that are left unquestioned and unexplored, especially as reflected and produced in marginalized language communities, can contribute to inequality made real in decisions about languages and the people who use them. One of the primary bodies of knowledge guiding international language policy is the International Organization…

  8. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.

    Science.gov (United States)

    Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu

    2016-04-19

    Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary-extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of the five components. Specifically, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of the target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different sizes of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third of the gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for the 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.
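
    The code-matching idea can be illustrated with a toy example: each sign word is a 5-tuple of component labels, and an unknown gesture's predicted components are matched to the nearest entry in the code table. The sketch below assumes the five component classifiers already exist; the miniature code table and labels are hypothetical, not taken from the paper.

      # Toy sketch of component code matching: a sign word is a 5-tuple of
      # component labels (hand shape, axis, orientation, rotation, trajectory);
      # an unknown gesture's predicted components are matched to the nearest
      # code-table entry. Component classifiers are assumed to exist already.
      def hamming(a, b):
          return sum(x != y for x, y in zip(a, b))

      def match_word(predicted_code, code_table):
          """code_table: dict word -> 5-tuple of component labels."""
          return min(code_table, key=lambda w: hamming(code_table[w], predicted_code))

      code_table = {                      # hypothetical miniature code table
          "HELLO":  ("open", "x", "palm-out", "none",  "arc"),
          "THANKS": ("flat", "y", "palm-up",  "none",  "line"),
          "GOOD":   ("flat", "x", "palm-up",  "wrist", "line"),
      }
      # Hypothetical output of the five component classifiers for an unknown gesture
      predicted = ("flat", "y", "palm-up", "none", "line")
      print(match_word(predicted, code_table))   # -> "THANKS"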

  9. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework

    Directory of Open Access Journals (Sweden)

    Shengjing Wei

    2016-04-01

    Full Text Available Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary-extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of the five components. Specifically, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of the target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different sizes of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third of the gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for the 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.

  10. Early Vocabulary Development in Deaf Native Signers: A British Sign Language Adaptation of the Communicative Development Inventories

    Science.gov (United States)

    Woolfe, Tyron; Herman, Rosalind; Roy, Penny; Woll, Bencie

    2010-01-01

    Background: There is a dearth of assessments of sign language development in young deaf children. This study gathered age-related scores from a sample of deaf native signing children using an adapted version of the MacArthur-Bates CDI (Fenson et al., 1994). Method: Parental reports on children's receptive and expressive signing were collected…

  11. CDC Vital Signs–Native Americans With Diabetes

    Centers for Disease Control (CDC) Podcasts

    2017-01-10

    This podcast is based on the January 2017 CDC Vital Signs report. Diabetes is the leading cause of kidney failure and Native Americans have a greater chance of having diabetes than any other racial group in the U.S. Learn how to manage your diabetes to delay or prevent kidney failure.  Created: 1/10/2017 by National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP).   Date Released: 1/10/2017.

  12. Korean American College Students' Language Practices and Identity Positioning: "Not Korean, but Not American"

    Science.gov (United States)

    Kang, Hyun-Sook

    2013-01-01

    This article explores the intersection between language practices and ethnic identity for 8 second-generation Korean American learners who were participating in a Korean-as-a-foreign-language (KFL) class at a U.S. university. This study aims to examine the fluid nature of ethnic identity by examining how Korean heritage learners negotiate,…

  13. Meso-American Languages: An Investigation of Variety, Maintenance, and Implications for Linguistic Survival

    Directory of Open Access Journals (Sweden)

    Ransom Gladwin

    2014-01-01

    Full Text Available Forty-five Meso-American language speakers, speaking thirteen Meso-American languages, were interviewed in the agricultural region in and around Colquitt County, Georgia. Language shift is common among such displaced immigrant populations (Fishman, 1967), specifically among less-dominant languages (Paulstone, 1994) such as Meso-American languages. The study used oral survey methods to record demographic data concerning Meso-American language speakers and the diversity of Meso-American languages spoken. The interviewers surveyed the use of and attitudes towards English, Meso-American languages, and Spanish among the speakers and their children (or hypothetical children). These responses documented language links to dominant socio-economic forces and generational language maintenance. The findings, such as differing rates for desired vs. reported language maintenance among the population, contribute to the national picture of Meso-American language maintenance among Meso-American speakers in the United States. The results of the study predict a gradual Meso-American language shift, but there was strong sentimentality for Meso-American languages. Such findings have implications for helping to stabilize language shift for those in common contact with Meso-American speakers in the United States, such as teachers and health workers, as well as for Meso-American speakers themselves.

  14. Neural systems underlying British Sign Language and audio-visual English processing in native users.

    Science.gov (United States)

    MacSweeney, Mairéad; Woll, Bencie; Campbell, Ruth; McGuire, Philip K; David, Anthony S; Williams, Steven C R; Suckling, John; Calvert, Gemma A; Brammer, Michael J

    2002-07-01

    In order to understand the evolution of human language, it is necessary to explore the neural systems that support language processing in its many forms. In particular, it is informative to separate those mechanisms that may have evolved for sensory processing (hearing) from those that have evolved to represent events and actions symbolically (language). To what extent are the brain systems that support language processing shaped by auditory experience and to what extent by exposure to language, which may not necessarily be acoustically structured? In this first neuroimaging study of the perception of British Sign Language (BSL), we explored these questions by measuring brain activation using functional MRI in nine hearing and nine congenitally deaf native users of BSL while they performed a BSL sentence-acceptability task. Eight hearing, non-signing subjects performed an analogous task that involved audio-visual English sentences. The data support the argument that there are both modality-independent and modality-dependent language localization patterns in native users. In relation to modality-independent patterns, regions activated by both BSL in deaf signers and by spoken English in hearing non-signers included inferior prefrontal regions bilaterally (including Broca's area) and superior temporal regions bilaterally (including Wernicke's area). Lateralization patterns were similar for the two languages. There was no evidence of enhanced right-hemisphere recruitment for BSL processing in comparison with audio-visual English. In relation to modality-specific patterns, audio-visual speech in hearing subjects generated greater activation in the primary and secondary auditory cortices than BSL in deaf signers, whereas BSL generated enhanced activation in the posterior occipito-temporal regions (V5), reflecting the greater movement component of BSL. The influence of hearing status on the recruitment of sign language processing systems was explored by comparing deaf

  15. The emergence of embedded structure: insights from Kafr Qasem Sign Language.

    Science.gov (United States)

    Kastner, Itamar; Meir, Irit; Sandler, Wendy; Dachkovsky, Svetlana

    2014-01-01

    This paper introduces data from Kafr Qasem Sign Language (KQSL), an as-yet undescribed sign language, and identifies the earliest indications of embedding in this young language. Using semantic and prosodic criteria, we identify predicates that form a constituent with a noun, functionally modifying it. We analyze these structures as instances of embedded predicates, exhibiting what can be regarded as very early stages in the development of subordinate constructions, and argue that these structures may bear directly on questions about the development of embedding and subordination in language in general. Deutscher (2009) argues persuasively that nominalization of a verb is the first step-and the crucial step-toward syntactic embedding. It has also been suggested that prosodic marking may precede syntactic marking of embedding (Mithun, 2009). However, the relevant data from the stage at which embedding first emerges have not previously been available. KQSL might be the missing piece of the puzzle: a language in which a noun can be modified by an additional predicate, forming a proposition within a proposition, sustained entirely by prosodic means.

  16. Effects of Hearing Status and Sign Language Use on Working Memory.

    Science.gov (United States)

    Marschark, Marc; Sarchet, Thomastine; Trani, Alexandra

    2016-04-01

    Deaf individuals have been found to score lower than hearing individuals across a variety of memory tasks involving both verbal and nonverbal stimuli, particularly those requiring retention of serial order. Deaf individuals who are native signers, meanwhile, have been found to score higher on visual-spatial memory tasks than on verbal-sequential tasks and higher on some visual-spatial tasks than hearing nonsigners. However, hearing status and preferred language modality (signed or spoken) frequently are confounded in such studies. That situation is resolved in the present study by including deaf students who use spoken language and sign language interpreting students (hearing signers) as well as deaf signers and hearing nonsigners. Three complex memory span tasks revealed overall advantages for hearing signers and nonsigners over both deaf signers and deaf nonsigners on 2 tasks involving memory for verbal stimuli (letters). There were no differences among the groups on the task involving visual-spatial stimuli. The results are consistent with and extend recent findings concerning the effects of hearing status and language on memory and are discussed in terms of language modality, hearing status, and cognitive abilities among deaf and hearing individuals.

  17. A Human Mirror Neuron System for Language: Perspectives from Signed Languages of the Deaf

    Science.gov (United States)

    Knapp, Heather Patterson; Corina, David P.

    2010-01-01

    Language is proposed to have developed atop the human analog of the macaque mirror neuron system for action perception and production [Arbib M.A. 2005. From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics (with commentaries and author's response). "Behavioral and Brain Sciences, 28", 105-167; Arbib…

  18. Real-Time Processing of ASL Signs: Delayed First Language Acquisition Affects Organization of the Mental Lexicon

    Science.gov (United States)

    Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.

    2015-01-01

    Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age-onset of first language acquisition and the quality and quantity of…

  19. Intrinsic mode entropy: an enhanced classification means for automated Greek Sign Language gesture recognition.

    Science.gov (United States)

    Kosmidou, Vasiliki E; Hadjileontiadis, Leontios J

    2008-01-01

    Sign language forms a communication channel among the deaf; however, automated gesture recognition could further expand their communication with hearing people. In this work, data from a three-dimensional accelerometer and a five-channel surface electromyogram of the user's dominant forearm are analyzed using intrinsic mode entropy (IMEn) for the automated recognition of Greek Sign Language (GSL) gestures. IMEn was estimated for various window lengths and evaluated by the Mahalanobis distance criterion. Discriminant analysis was used to identify the effective scales of the intrinsic mode functions and the window length for the calculation of the IMEn that contributes to the correct classification of the GSL gestures. Experimental results from the IMEn analysis of GSL gestures corresponding to ten words have shown 100% classification accuracy using IMEn as the only classification feature. This provides a promising test-bed towards automated GSL gesture recognition.
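
    Intrinsic mode entropy is typically computed as the sample entropy of cumulative sums of the signal's intrinsic mode functions across scales; the sketch below follows that common formulation as an assumption rather than reproducing the authors' exact procedure. It relies on the PyEMD package (EMD-signal) for the decomposition, and the entropy parameters and synthetic test signal are placeholders.

      # Sketch of an intrinsic-mode-entropy style feature: decompose a windowed
      # signal with EMD, then compute sample entropy of the cumulative sums of
      # the intrinsic mode functions at each scale. Parameters are assumptions.
      import numpy as np
      from PyEMD import EMD

      def sample_entropy(x, m=2, r=0.2):
          x = np.asarray(x, dtype=float)
          r *= x.std()
          def count(mm):
              templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
              d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
              return np.sum(d <= r) - len(templates)   # exclude self-matches
          B, A = count(m), count(m + 1)
          return -np.log(A / B) if A > 0 and B > 0 else np.inf

      def intrinsic_mode_entropy(signal):
          imfs = EMD()(np.asarray(signal, dtype=float))  # IMFs, finest to coarsest
          partial = np.cumsum(imfs, axis=0)              # scale-wise partial sums
          return [sample_entropy(p) for p in partial]

      # Example on a short synthetic EMG-like window
      t = np.linspace(0, 1, 256)
      sig = np.sin(2 * np.pi * 8 * t) + 0.3 * np.random.randn(t.size)
      print(intrinsic_mode_entropy(sig))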

  20. Comprehending Sentences With the Body: Action Compatibility in British Sign Language?

    Science.gov (United States)

    Vinson, David; Perniss, Pamela; Fox, Neil; Vigliocco, Gabriella

    2016-08-03

    Previous studies show that reading sentences about actions leads to specific motor activity associated with actually performing those actions. We investigate how sign language input may modulate motor activation, using British Sign Language (BSL) sentences, some of which explicitly encode direction of motion, versus written English, where motion is only implied. We find no evidence of action simulation in BSL comprehension (Experiments 1-3), but we find effects of action simulation in comprehension of written English sentences by deaf native BSL signers (Experiment 4). These results provide constraints on the nature of mental simulations involved in comprehending action sentences referring to transfer events, suggesting that the richer contextual information provided by BSL sentences versus written or spoken English may reduce the need for action simulation in comprehension, at least when the event described does not map completely onto the signer's own body.

  1. A Real-time Face/Hand Tracking Method for Chinese Sign Language Recognition

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper introduces a new Chinese Sign Language recognition (CSLR) system and a method for real-time face and hand tracking applied in the system. In the method, an improved agent algorithm is used to extract the face and hand regions and to track them. A Kalman filter is introduced to forecast the position and the search rectangle, and self-adaptation of the target color is designed to counteract the effect of illumination.
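
    The Kalman prediction step mentioned above can be sketched with a standard constant-velocity filter that forecasts the next centroid and hence the search rectangle. The snippet below is a generic illustration using OpenCV's KalmanFilter, not the paper's tracker; the noise covariances, measurements and window size are assumptions.

      # Minimal constant-velocity Kalman filter sketch for forecasting the next
      # face/hand position and the search rectangle.
      # State = [x, y, vx, vy]; measurement = [x, y].
      import numpy as np
      import cv2

      kf = cv2.KalmanFilter(4, 2)
      kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                      [0, 1, 0, 1],
                                      [0, 0, 1, 0],
                                      [0, 0, 0, 1]], dtype=np.float32)
      kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                       [0, 1, 0, 0]], dtype=np.float32)
      kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
      kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
      kf.statePost = np.array([[100.], [120.], [0.], [0.]], dtype=np.float32)
      kf.errorCovPost = np.eye(4, dtype=np.float32)

      for cx, cy in [(100, 120), (104, 122), (109, 125)]:   # detected hand centroids
          prediction = kf.predict()                         # forecast next position
          kf.correct(np.array([[cx], [cy]], dtype=np.float32))
          px, py = float(prediction[0]), float(prediction[1])
          search_rect = (int(px) - 40, int(py) - 40, 80, 80)  # predicted search window
          print("predicted centre:", (px, py), "search rect:", search_rect)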

  2. EXTENSION OF HIDDEN MARKOV MODEL FOR RECOGNIZING LARGE VOCABULARY OF SIGN LANGUAGE

    OpenAIRE

    Maher Jebali; Mohamed Jemni

    2013-01-01

    Computers still have a long way to go before they can interact with users in a truly natural fashion. From a user’s perspective, the most natural way to interact with a computer would be through a speech and gesture interface. Although speech recognition has made significant advances in the past ten years, gesture recognition has been lagging behind. Sign Languages (SL) are the most accomplished forms of gestural communication. Therefore, their automatic analysis is a real challen...
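
    A common baseline for HMM-based isolated-sign recognition, which this line of work builds on, trains one Gaussian HMM per sign and labels an observation sequence with the best-scoring model. The sketch below shows that generic setup with hmmlearn and synthetic features; it is not the specific extension proposed in the paper.

      # Generic HMM setup for isolated-sign recognition: one Gaussian HMM per
      # sign, choose the model with the highest log-likelihood. Features here
      # are synthetic placeholders.
      import numpy as np
      from hmmlearn import hmm

      def train_sign_models(training_data, n_states=5):
          """training_data: dict sign -> list of (T_i, D) feature sequences."""
          models = {}
          for sign, seqs in training_data.items():
              X = np.vstack(seqs)
              lengths = [len(s) for s in seqs]
              m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
              m.fit(X, lengths)
              models[sign] = m
          return models

      def recognize(models, sequence):
          return max(models, key=lambda s: models[s].score(sequence))

      # Tiny synthetic example: two "signs" with different feature statistics
      rng = np.random.default_rng(0)
      data = {"HELLO":  [rng.normal(0, 1, (20, 3)) for _ in range(5)],
              "THANKS": [rng.normal(3, 1, (20, 3)) for _ in range(5)]}
      models = train_sign_models(data, n_states=3)
      print(recognize(models, rng.normal(3, 1, (20, 3))))   # expected: THANKS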

  3. Sign Language Glove: A Glove That Can "Speak" Your Sign Language

    Institute of Scientific and Technical Information of China (English)

    2016-01-01

    For people with hearing or speech impairments, sign language is their entire means of communicating with the world. But when they meet someone who does not understand sign language, that sole link to the world is broken. The Sign Language Glove is, in essence, a large collection of sensors.

  4. Fingerspelling, signed language, text and picture processing in deaf native signers: the role of the mid-fusiform gyrus.

    Science.gov (United States)

    Waters, Dafydd; Campbell, Ruth; Capek, Cheryl M; Woll, Bencie; David, Anthony S; McGuire, Philip K; Brammer, Michael J; MacSweeney, Mairéad

    2007-04-15

    In fingerspelling, different hand configurations are used to represent the different letters of the alphabet. Signers use this method of representing written language to fill lexical gaps in a signed language. Using fMRI, we compared cortical networks supporting the perception of fingerspelled, signed, written, and pictorial stimuli in deaf native signers of British Sign Language (BSL). In order to examine the effects of linguistic knowledge, hearing participants who knew neither fingerspelling nor a signed language were also tested. All input forms activated a left fronto-temporal network, including portions of left inferior temporal and mid-fusiform gyri, in both groups. To examine the extent to which activation in this region was influenced by orthographic structure, two contrasts of orthographic and non-orthographic stimuli were made: one using static stimuli (text vs. pictures), the other using dynamic stimuli (fingerspelling vs. signed language). Greater activation in left and right inferior temporal and mid-fusiform gyri was found for pictures than text in both deaf and hearing groups. In the fingerspelling vs. signed language contrast, a significant interaction indicated locations within the left and right mid-fusiform gyri. This showed greater activation for fingerspelling than signed language in deaf but not hearing participants. These results are discussed in light of recent proposals that the mid-fusiform gyrus may act as an integration region, mediating between visual input and higher-order stimulus properties.

  5. Teaching sign language in gaucho schools for deaf people: a study of curricula

    Directory of Open Access Journals (Sweden)

    Carolina Hessel Silveira

    2013-06-01

    Full Text Available The paper, which presents partial results of a master's dissertation, seeks to contribute to the Sign Language curriculum in deaf schooling. We start from the importance of sign languages for deaf people's development and from the fact that a large proportion of deaf children have hearing parents, which emphasises the significance of teaching LIBRAS (Brazilian Sign Language) in schools for the deaf. We should also consider the importance of this study in building deaf identities and strengthening deaf culture. The theoretical basis was obtained from the so-called Deaf Studies and from experts in curriculum theories. The main objective of this study was to conduct an analysis of the LIBRAS curricula at work in schools for the deaf in Rio Grande do Sul, Brazil. The curriculum analysis has shown a degree of diversity: in some curricula, content from one year is repeated in the next with no articulation; in others, one can find a preoccupation with issues of deaf identity and culture, but some of them include content that is related not to LIBRAS or to deaf culture, but rather to the discipline for the deaf in general. By pointing out positive and negative aspects, the analysis may help in discussions about difficulties, progress and problems in LIBRAS teacher education for deaf students.

  6. An Investigation into the Relationship of Foreign Language Learning Motivation and Sign Language Use among Deaf and Hard of Hearing Hungarians

    Science.gov (United States)

    Kontra, Edit H.; Csizer, Kata

    2013-01-01

    The aim of this study is to point out the relationship between foreign language learning motivation and sign language use among hearing impaired Hungarians. In the article we concentrate on two main issues: first, to what extent hearing impaired people are motivated to learn foreign languages in a European context; second, to what extent sign…

  7. Automatic Isolated-Word Arabic Sign Language Recognition System Based on Time Delay Neural Networks

    Directory of Open Access Journals (Sweden)

    Feras Fares Al Mashagba

    2014-03-01

    Full Text Available There have been few attempts to develop an Arabic sign recognition system that can be used as a communication means between hearing-impaired and other people. This study introduces the first automatic isolated-word Arabic Sign Language (ArSL) recognition system based on Time Delay Neural Networks (TDNN). In the proposed vision-based recognition system, the user wears two simple gloves of different colors when performing the signs in the data sets within this study. The two colored regions are recognized and highlighted within each frame in the video to help in recognizing the signs. This research uses a multivariate Gaussian Mixture Model (GMM), based on the characteristics of the well-known Hue Saturation Lightness model (HSL), to determine the colors within the video frames. The mean and covariance of the three colored regions within the frames are determined and used to help in segmenting each frame (picture) into two colored regions and an outlier region. Finally, we propose, create and use the following four features as input to the TDNN: the centroid position for each hand, using the center of the upper area of each frame as reference; the change in horizontal velocity of both hands across the frames; the change in vertical velocity of both hands across the frames; and the area change for each hand across the frames. A large set of samples has been used to recognize 40 isolated words from the Standard Arabic Sign Language, performed by 10 different signers. Our proposed system obtains a word recognition rate of 70.0% on the testing set.
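
    The trajectory features listed above (hand centroids, frame-to-frame velocity changes and area changes) can be sketched from per-frame binary masks of the two glove colors. The snippet below assumes the GMM color segmentation has already produced those masks; mask sizes and the dummy frames are placeholders, not the authors' data.

      # Sketch of per-frame trajectory features (hand centroids, velocity changes,
      # area changes) computed from binary masks of the two glove colors. The
      # color segmentation is assumed to have been done already.
      import numpy as np

      def hand_stats(mask):
          ys, xs = np.nonzero(mask)
          if xs.size == 0:
              return (0.0, 0.0, 0.0)
          return (xs.mean(), ys.mean(), float(xs.size))   # centroid x, centroid y, area

      def sequence_features(left_masks, right_masks):
          feats = []
          prev = None
          for lm, rm in zip(left_masks, right_masks):
              cur = hand_stats(lm) + hand_stats(rm)        # 6 values per frame
              if prev is not None:
                  # velocity and area changes between consecutive frames
                  feats.append([c - p for c, p in zip(cur, prev)])
              prev = cur
          return np.array(feats)

      # Dummy example: two frames of 32x32 masks per hand
      lm = [np.zeros((32, 32)), np.zeros((32, 32))]
      rm = [np.zeros((32, 32)), np.zeros((32, 32))]
      lm[0][5:10, 5:10] = 1; lm[1][6:11, 7:12] = 1
      rm[0][20:25, 20:25] = 1; rm[1][20:26, 19:25] = 1
      print(sequence_features(lm, rm))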

  8. Sign of the Times: American Sign Language in Contour Line Drawing

    Science.gov (United States)

    Tamplin de Poinsot, Nan

    2009-01-01

    There is no denying that human hands throughout art history have been alluring subjects for artists. Think of Michelangelo's portrayal of God and Adam's graceful hands on the Sistine Chapel ceiling, reaching out longingly to each other. In German Expressionist artist Egon Schiele's portraits, the long, bony fingers of his models' hands seem to…

  9. The verbal-visual discourse in Brazilian Sign Language – Libras

    Directory of Open Access Journals (Sweden)

    Tanya Felipe

    2013-11-01

    Full Text Available This article aims to broaden the discussion on verbal-visual utterances, reflecting upon theoretical assumptions of the Bakhtin Circle that can reinforce the argument that the utterances of a language employing a visual-gestural modality also convey plastic-pictorial and spatial values of signs through non-manual markers (NMMs). This research highlights the difference between affective expressions, which are paralinguistic communications that may complement an utterance, and verbal-visual grammatical markers, which are linguistic because they are part of the architecture of the phonological, morphological, syntactic-semantic and discursive levels of a particular language. These markers are described taking Brazilian Sign Language–Libras as a starting point, thereby including this language in discussions of verbal-visual discourse and pointing to the need for research on this discourse also in the linguistic analysis of oral-auditory modality languages, with Translinguistics as an area of knowledge that analyzes discourse by focusing upon the verbal-visual markers used by the subjects in their utterance acts.

  10. A Language Challenge to the Hispanic American.

    Science.gov (United States)

    Nino, Miguel A.

    The Hispanic-American, because he or she is bilingual and bicultural, could play an important role in the future economic development of the United States. Declines in steel, automotive, and electronics industries due to foreign competition and market saturation have caused industrial displacement and unemployment. The Maquiladora or Twin Plant…

  11. Pan-American Teletandem Language Exchange Project

    Science.gov (United States)

    Castillo-Scott, Aurora

    2015-01-01

    This paper describes a TeleTandem language exchange project between English speaking Spanish students at Georgia College, USA, and Spanish speaking English students at Universidad de Concepción, Chile. The aim of the project was to promote linguistic skills and intercultural competence through a TeleTandem exchange. Students used Skype and Google…

  12. THE BIBLE LANGUAGE IN THE AMERICAN LYRIC

    Directory of Open Access Journals (Sweden)

    Bruno Rosario Candelier

    2015-04-01

    Full Text Available The footprint of the Bible, in its intellectual and aesthetic expression, is manifested in the creation of poetry and fiction. Religious and mystical poetry, and the use of biblical language through the recreation of characters, themes or motifs inspired by the sacred text, are a tribute to the Holy Book and a creative vein of literature inspired by this paradigmatic work of our culture. Biblical language, which channels profound teachings and revealed truths through diverse literary figures, has been a fruitful means of creation. Besides intuition and inspiration, in poetic language flow the signals of revelation that synthesize the perception of consciousness, the metaphysical slope of the existing and the effluvia of Transcendence. The creative power of poetry intervenes in its realization, formalizing the word in images, myths and concepts. In numerous poetic creations there are formal, conceptual and spiritual reminiscences of the Holy Book. The trace of the Bible in literature, culture and spiritual awareness is prolific. The word that creates and uplifts is a melting pot of aesthetic feeling and spirituality. In fact, the Gospel contains the inspiring principle of Christian mystical literature. By focusing on biblical language in poetic creation, we can appreciate its literary formulas and compositional resources. There is a wisdom and a style inherent in biblical language, which manifests itself in a biblical tone, a biblical image and a biblical technique that the language arts formalize in various forms of creation. Knowledge drawn from the biblical heritage is reflected in judgments, prophetic visions, parables, allegories, parallelisms and other resources that have passed into the lyrical flow. Biblical language embodies a register of proverbs, hymns, prayers, metaphors and other expressive resources. In the biblical text we find various literary forms that have fueled the substance of poetic creation, as

  13. Students who are deaf and hard of hearing and use sign language: considerations and strategies for developing spoken language and literacy skills.

    Science.gov (United States)

    Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane

    2012-11-01

    There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.

  14. Suspending the next turn as a form of repair initiation: evidence from Argentine Sign Language

    Directory of Open Access Journals (Sweden)

    Elizabeth eManrique

    2015-09-01

    Full Text Available Practices of other-initiated repair deal with problems of hearing or understanding what another person has said in the fast-moving turn-by-turn flow of conversation. As such, other-initiated repair plays a fundamental role in the maintenance of intersubjectivity in social interaction. This study finds and analyses a special type of other-initiated repair that is used in turn-by-turn conversation in a sign language: Argentine Sign Language (Lengua de Señas Argentina or LSA). We describe a type of response termed a ‘freeze-look’, which occurs when a person has just been asked a direct question: instead of answering the question in the next turn position, the person holds still while looking directly at the questioner. In these cases it is clear that the person is aware of having just been addressed and is not otherwise accounting for their delay in responding (e.g., by displaying a ‘thinking’ face or hesitation, etc.). We find that this behavior functions as a way for an addressee to initiate repair by the person who asked the question. The ‘freeze-look’ results in the questioner ‘re-doing’ their action of asking a question, for example by repeating or rephrasing it. Thus we argue that the ‘freeze-look’ is a practice for other-initiation of repair. In addition, we argue that it is an ‘off-record’ practice, thus contrasting with known on-record practices such as saying ‘Huh?’ or equivalents. The findings aim to contribute to research on human understanding in everyday turn-by-turn conversation by looking at an understudied sign language, with possible implications for our understanding of visual bodily communication in spoken languages as well.

  15. Suspending the next turn as a form of repair initiation: evidence from Argentine Sign Language.

    Science.gov (United States)

    Manrique, Elizabeth; Enfield, N J

    2015-01-01

    Practices of other-initiated repair deal with problems of hearing or understanding what another person has said in the fast-moving turn-by-turn flow of conversation. As such, other-initiated repair plays a fundamental role in the maintenance of intersubjectivity in social interaction. This study finds and analyses a special type of other-initiated repair that is used in turn-by-turn conversation in a sign language: Argentine Sign Language (Lengua de Señas Argentina or LSA). We describe a type of response termed a "freeze-look," which occurs when a person has just been asked a direct question: instead of answering the question in the next turn position, the person holds still while looking directly at the questioner. In these cases it is clear that the person is aware of having just been addressed and is not otherwise accounting for their delay in responding (e.g., by displaying a "thinking" face or hesitation, etc.). We find that this behavior functions as a way for an addressee to initiate repair by the person who asked the question. The "freeze-look" results in the questioner "re-doing" their action of asking a question, for example by repeating or rephrasing it. Thus, we argue that the "freeze-look" is a practice for other-initiation of repair. In addition, we argue that it is an "off-record" practice, thus contrasting with known on-record practices such as saying "Huh?" or equivalents. The findings aim to contribute to research on human understanding in everyday turn-by-turn conversation by looking at an understudied sign language, with possible implications for our understanding of visual bodily communication in spoken languages as well.

  16. ANFIS Based Methodology for Sign Language Recognition and Translating to Number in Kannada Language

    Directory of Open Access Journals (Sweden)

    Ramesh Mahadev kagalkar

    2017-03-01

    Full Text Available In the field of sign language and gesture recognition, a great deal of research has been carried out over the past three decades. This has led to a gradual transition from isolated to continuous, and from static to dynamic, gesture recognition over a restricted vocabulary. In the present scenario, human-machine interactive systems facilitate communication between deaf and hearing-impaired people in real-world situations. In order to improve recognition accuracy, many researchers have deployed methods such as HMMs, artificial neural networks, and the Kinect platform. Effective algorithms for segmentation, classification, pattern matching and recognition have evolved. The main purpose of this paper is to analyse these methods and to compare them effectively, which will enable the reader to arrive at an optimal solution. This creates both challenges and opportunities for research related to sign language recognition.

  17. "What's the Sign for 'Catch 22'?": Barriers to Professional Formation for Deaf Teachers of British Sign Language in the Further Education Sector

    Science.gov (United States)

    Barnes, Lynne; Atherton, Martin

    2015-01-01

    In 2007, Qualified Teacher Learning and Skills standards (QTLS) were introduced for all teachers working in UK further education institutions, with the expressed aim of improving professional standards within the sector. British Sign Language (BSL) teaching is largely delivered by deaf native signers through evening classes at local FE colleges,…

  18. Senales de Trafico. Ingles-Espanol = Traffic Signs. English-Spanish [and] English-Spanish Road Signs for American Tourists.

    Science.gov (United States)

    Grosse, Philip

    Two English/Spanish bilingual glossaries define words and phrases found on traffic signs. The first is an extensive alphabetical checklist of sign messages, listed in English with translations in Spanish. Some basic traffic and speed limit rules are included. The second volume, in Spanish-to-English form, is a pocket version designed for American…

  19. EVALUATIVE LANGUAGE IN SPOKEN AND SIGNED STORIES TOLD BY A DEAF CHILD WITH A COCHLEAR IMPLANT: WORDS, SIGNS OR PARALINGUISTIC EXPRESSIONS?

    Directory of Open Access Journals (Sweden)

    Ritva Takkinen

    2011-01-01

    Full Text Available In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation is analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish sign language (FinSL) and spoken Finnish. He was born deaf but got a cochlear implant at the age of five. The data consist of a spoken and a signed version of “The Frog Story”. The analysis shows that evaluative devices and expressions differ in the spoken and signed stories told by the child. In his Finnish story he uses mostly lexical devices – comments on a character and the character’s actions as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.

  20. Signed language and human action processing: evidence for functional constraints on the human mirror-neuron system.

    Science.gov (United States)

    Corina, David P; Knapp, Heather Patterson

    2008-12-01

    In the quest to further understand the neural underpinning of human communication, researchers have turned to studies of naturally occurring signed languages used in Deaf communities. The comparison of the commonalities and differences between spoken and signed languages provides an opportunity to determine core neural systems responsible for linguistic communication independent of the modality in which a language is expressed. The present article examines such studies, and in addition asks what we can learn about human languages by contrasting formal visual-gestural linguistic systems (signed languages) with more general human action perception. To understand visual language perception, it is important to distinguish the demands of general human motion processing from the highly task-dependent demands associated with extracting linguistic meaning from arbitrary, conventionalized gestures. This endeavor is particularly important because theorists have suggested close homologies between perception and production of actions and functions of human language and social communication. We review recent behavioral, functional imaging, and neuropsychological studies that explore dissociations between the processing of human actions and signed languages. These data suggest incomplete overlap between the mirror-neuron systems proposed to mediate human action and language.

  1. Functional connectivity in task-negative network of the Deaf: effects of sign language experience.

    Science.gov (United States)

    Malaia, Evie; Talavage, Thomas M; Wilbur, Ronnie B

    2014-01-01

    Prior studies investigating cortical processing in Deaf signers suggest that life-long experience with sign language and/or auditory deprivation may alter the brain's anatomical structure and the function of brain regions typically recruited for auditory processing (Emmorey et al., 2010; Pénicaud et al., 2013 inter alia). We report the first investigation of the task-negative network in Deaf signers and its functional connectivity, that is, the temporal correlations among spatially remote neurophysiological events. We show that Deaf signers manifest increased functional connectivity between posterior cingulate/precuneus and left medial temporal gyrus (MTG), but also inferior parietal lobe and medial temporal gyrus in the right hemisphere, areas that have been found to show functional recruitment specifically during sign language processing. These findings suggest that the organization of the brain at the level of inter-network connectivity is likely affected by experience with processing visual language, although sensory deprivation could be another source of the difference. We hypothesize that connectivity alterations in the task-negative network reflect predictive/automatized processing of the visual signal.
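    Functional connectivity in this sense reduces to correlating the time courses of spatially remote regions. The following is a minimal sketch under assumed inputs (a region-by-time matrix of preprocessed BOLD signals with made-up region names and synthetic data); it is not the authors' analysis pipeline.

    ```python
    import numpy as np

    # Assumed input: rows are regions of interest (e.g., posterior cingulate/precuneus,
    # left MTG, right IPL, right MTG), columns are time points of the preprocessed
    # BOLD signal. The data here are synthetic and for illustration only.
    roi_names = ["precuneus", "left_MTG", "right_IPL", "right_MTG"]
    rng = np.random.default_rng(1)
    bold = rng.normal(size=(len(roi_names), 240))

    # Functional connectivity as the pairwise Pearson correlation of ROI time series.
    fc_matrix = np.corrcoef(bold)

    # Connectivity of the posterior cingulate/precuneus seed with every region.
    for name, r in zip(roi_names, fc_matrix[0]):
        print(f"precuneus <-> {name}: r = {r:+.2f}")
    ```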

  2. Functional connectivity in task-negative network of the Deaf: effects of sign language experience

    Directory of Open Access Journals (Sweden)

    Evie Malaia

    2014-06-01

    Full Text Available Prior studies investigating cortical processing in Deaf signers suggest that life-long experience with sign language and/or auditory deprivation may alter the brain’s anatomical structure and the function of brain regions typically recruited for auditory processing (Emmorey et al., 2010; Pénicaud et al., 2013 inter alia). We report the first investigation of the task-negative network in Deaf signers and its functional connectivity—the temporal correlations among spatially remote neurophysiological events. We show that Deaf signers manifest increased functional connectivity between posterior cingulate/precuneus and left medial temporal gyrus (MTG), but also inferior parietal lobe and medial temporal gyrus in the right hemisphere, areas that have been found to show functional recruitment specifically during sign language processing. These findings suggest that the organization of the brain at the level of inter-network connectivity is likely affected by experience with processing visual language, although sensory deprivation could be another source of the difference. We hypothesize that connectivity alterations in the task-negative network reflect predictive/automatized processing of the visual signal.

  3. [Are deaf patients in Germany informed about their legal rights for a sign language interpreter?].

    Science.gov (United States)

    Höcker, J T; Letzel, S; Münster, E

    2012-12-01

    Deaf citizens are confronted with barriers in a health-care system shaped by hearing people. The German legislature therefore provides for the supply of sign language interpreters at the expense of the health insurance funds. The present study examines for the first time to what extent deaf people are informed about this entitlement and make use of such interpreters. Traditional surveys are based on spoken and written language and are therefore unsuitable for the target group. For this reason, a cross-sectional online study was performed using sign language videos and visually oriented answer formats to allow barrier-free participation. A multivariate analysis identified factors that increase deaf people's risk of not being informed about the interpreter entitlement: of 841 deaf participants, 31.4% were not informed of their rights. 41.3% had experience with an interpreter at the doctor's and reported a mainly trouble-free reimbursement of costs. Young deaf people and those with lower levels of education have a higher risk of not being informed about the interpreter entitlement. Further information is necessary to provide equality of opportunity for deaf patients using medical services.

  4. The hands and mouth do not always slip together in British sign language: dissociating articulatory channels in the lexicon.

    Science.gov (United States)

    Vinson, David P; Thompson, Robin L; Skinner, Robert; Fox, Neil; Vigliocco, Gabriella

    2010-08-01

    In contrast to the single-articulatory system of spoken languages, sign languages employ multiple articulators, including the hands and the mouth. We asked whether manual components and mouthing patterns of lexical signs share a semantic representation, and whether their relationship is affected by the differing language experience of deaf and hearing native signers. We used picture-naming tasks and word-translation tasks to assess whether the same semantic effects occur in manual production and mouthing production. Semantic errors on the hands were more common in the English-translation task than in the picture-naming task, but errors in mouthing patterns showed a different trend. We conclude that mouthing is represented and accessed through a largely separable channel, rather than being bundled with manual components in the sign lexicon. Results were comparable for deaf and hearing signers; differences in language experience did not play a role. These results provide novel insight into coordinating different modalities in language production.

  5. Depictions and minifiction: a reflection on translation of micro-story as didactics of sign language interpreters training in colombia.

    Directory of Open Access Journals (Sweden)

    Alex Giovanny Barreto

    2015-12-01

    Full Text Available The article presents reflections on a translation-practice methodological approach to sign language interpreter education focused on communicative competence. Experience with this translation-practice approach began in several workshops of the Association of Translators and Interpreters of Sign Language of Colombia (ANISCOL) and has now been formalized in the bachelor of education degree project in signed languages, developed within Research Group UMBRAL of the National Open and Distance University of Colombia (UNAD). The didactic proposal focuses on Gile's effort model, specifically on the production and listening efforts. A critique of translation competence is presented. Minifiction is a literary genre with multiple semiotic and philosophical translation possibilities. These literary texts have elements with great potential for rendering the visual, gestural and spatial depictions of Colombian Sign Language, which is valuable for interpreter training and education. Through a sign language translation of El Dinosaurio, we conclude with an outline of, and reflections on, the pedagogical and didactic potential of minifiction and depictions in the design of training activities for sign language interpreters.

  6. The question of sign-language and the utility of signs in the instruction of the deaf: two papers by Alexander Graham Bell (1898).

    Science.gov (United States)

    Bell, Alexander Graham

    2005-01-01

    Alexander Graham Bell is often portrayed as either hero or villain of deaf individuals and the Deaf community. His writings, however, indicate that he was neither, and was not as clearly definite in his beliefs about language as is often supposed. The following two articles, reprinted from The Educator (1898), Vol. V, pp. 3-4 and pp. 38-44, capture Bell's thinking about sign language and its use in the classroom. Contrary to frequent claims, Bell does not demand "oral" training for all deaf children--even if he thinks it is the superior alternative--but does advocate for it for "the semi-deaf" and "the semi-mute." "In regard to the others," he writes, "I am not so sure." Although he clearly voices his support for oral methods and fingerspelling (the Rochester method) over sign language, Bell acknowledges the use and utility of signing in a carefully-crafted discussion that includes both linguistics and educational philosophy. In separating the language used at home from that in school and on the playground, Bell reveals a far more complex view of language learning by deaf children than he is often granted. (M. Marschark).

  7. Unsilencing voices: a study of zoo signs and their language of authority

    Science.gov (United States)

    Fogelberg, Katherine

    2014-12-01

    Zoo signs are important for informal learning, but their effect on visitor perception of animals has been sparsely studied. Other studies have established the importance of informal learning in American society; this study discusses zoo signs in the context of such learning. Through the lens of Critical Theory framed by informal learning, and by applying critical discourse analysis, I discovered subtle institutional power on zoo signs. This may influence visitors through dominant ideological discursive formations and emergent discourse objects, adding to the paradox of "saving" wild animals while simultaneously oppressing them. Signs covering a variety of species from two different United States-accredited zoos were analyzed. Critical Theory looks to emancipate oppressed human populations; here I apply it to zoo animals. As physical emancipation is not practical, I define emancipation in the sociological sense—in this case, freedom from silence. Through this research, perhaps we can find a way to represent animals as living beings who have their own lives and voices, by presenting them honestly, with care and compassion.

  8. Do Signers Understand Regional Varieties of a Sign Language? A Lexical Recognition Experiment.

    Science.gov (United States)

    Stamp, Rose

    2016-01-01

    The degree of mutual intelligibility of British Sign Language (BSL) regional varieties has been a subject of some debate. Recent research in which dyads of signers from contrasting regional backgrounds engaged in a conversational task showed no problems understanding one another. The present study investigated signers' knowledge of different BSL regional varieties. Twenty-five participants from Belfast, Glasgow, Manchester, and Newcastle took part in a computer-based lexical recognition task in which they had to identify the meaning of 47 color signs from various regions in the United Kingdom. The results indicate that overall signers have a poor knowledge of regional signs for colors when signs are presented in isolation and without mouthing. Furthermore, signers with deaf parents performed better in the recognition task than signers with hearing parents, and varieties from London and Birmingham were easiest to recognize. This article discusses how signers cope with regional differences and considers the features that facilitate the recognition of regional varieties in BSL.

  9. First language acquisition differs from second language acquisition in prelingually deaf signers: evidence from sensitivity to grammaticality judgement in British Sign Language.

    Science.gov (United States)

    Cormier, Kearsy; Schembri, Adam; Vinson, David; Orfanidou, Eleni

    2012-07-01

    Age of acquisition (AoA) effects have been used to support the notion of a critical period for first language acquisition. In this study, we examine AoA effects in deaf British Sign Language (BSL) users via a grammaticality judgment task. When English reading performance and nonverbal IQ are factored out, results show that accuracy of grammaticality judgement decreases as AoA increases, until around age 8, thus showing the unique effect of AoA on grammatical judgement in early learners. No such effects were found in those who acquired BSL after age 8. These late learners appear to have first language proficiency in English instead, which may have been used to scaffold learning of BSL as a second language later in life.

  10. Using sample entropy for automated sign language recognition on sEMG and accelerometer data.

    Science.gov (United States)

    Kosmidou, Vasiliki E; Hadjileontiadis, Leontios I

    2010-03-01

    Communication using sign language (SL) provides alternative means for information transmission among the deaf. Automated recognition of the gestures involved in SL, however, could further expand this communication channel to the world of hearers. In this study, data from a five-channel surface electromyogram and a three-dimensional accelerometer on the signer's dominant hand were subjected to a feature extraction process. The latter consisted of sample entropy (SampEn)-based analysis, whereas time-frequency feature (TFF) analysis was also performed as a baseline method, for the automated recognition of isolated signs from a 60-word lexicon of Greek Sign Language (GSL). Experimental results have shown mean classification accuracies of 66% and 92% using TFF and SampEn, respectively. These results justify the superiority of SampEn over conventional methods, such as TFF, in providing high recognition hit-ratios combined with a reduction of the feature vector dimension, toward fast and reliable automated GSL gesture recognition.
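    Sample entropy itself is a standard signal-regularity measure. The sketch below shows one common way to compute SampEn(m, r) for a single sEMG or accelerometer channel window; the parameter values and the synthetic signal are assumptions, and this is not the authors' feature-extraction code.

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r_factor=0.2):
        """Compute sample entropy SampEn(m, r) of a 1-D signal.

        Counts pairs of template vectors of length m (and m + 1) whose
        Chebyshev distance is below r; SampEn = -ln(A / B).
        """
        x = np.asarray(x, dtype=float)
        r = r_factor * np.std(x)
        n = len(x)

        def count_matches(length):
            # Build all overlapping templates of the given length (N - m of them).
            templates = np.array([x[i:i + length] for i in range(n - m)])
            count = 0
            for i in range(len(templates)):
                # Chebyshev distance to every later template (no self-matches).
                dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                count += np.sum(dist < r)
            return count

        b = count_matches(m)      # matches of length m
        a = count_matches(m + 1)  # matches of length m + 1
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    # Example: one channel window (synthetic data for illustration only).
    rng = np.random.default_rng(0)
    window = rng.normal(size=500)
    print(sample_entropy(window))
    ```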

  11. Examining the contribution of motor movement and language dominance to increased left lateralization during sign generation in native signers.

    Science.gov (United States)

    Gutierrez-Sigut, Eva; Payne, Heather; MacSweeney, Mairéad

    2016-08-01

    The neural systems supporting speech and sign processing are very similar, although not identical. In a previous fTCD study of hearing native signers (Gutierrez-Sigut, Daws, et al., 2015) we found stronger left lateralization for sign than speech. Given that this increased lateralization could not be explained by hand movement alone, the contribution of motor movement versus 'linguistic' processes to the strength of hemispheric lateralization during sign production remains unclear. Here we directly contrast lateralization strength of covert versus overt signing during phonological and semantic fluency tasks. To address the possibility that hearing native signers' elevated lateralization indices (LIs) were due to performing a task in their less dominant language, here we test deaf native signers, whose dominant language is British Sign Language (BSL). Signers were more strongly left lateralized for overt than covert sign generation. However, the strength of lateralization was not correlated with the amount of time producing movements of the right hand. Comparisons with previous data from hearing native English speakers suggest stronger laterality indices for sign than speech in both covert and overt tasks. This increased left lateralization may be driven by specific properties of sign production such as the increased use of self-monitoring mechanisms or the nature of phonological encoding of signs.

  12. Visualizing Patient Journals by Combining Vital Signs Monitoring and Natural Language Processing

    DEFF Research Database (Denmark)

    Vilic, Adnan; Petersen, John Asger; Hoppe, Karsten

    2016-01-01

    This paper presents a data-driven approach to graphically presenting text-based patient journals while still maintaining all textual information. The system first creates a timeline representation of a patient's physiological condition during an admission, which is assessed by electronically monitoring vital signs and then combining these into Early Warning Scores (EWS). Hereafter, techniques from Natural Language Processing (NLP) are applied to the existing patient journal to extract all entries. Finally, the two methods are combined into an interactive timeline featuring the ability to see drastic changes in the patient's health, thereby enabling staff to see where in the journal critical events have taken place.
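    The combination of vital signs into an aggregate score placed on a timeline can be sketched as follows. The scoring bands here are illustrative placeholders, not the EWS table used in the paper, and the record format is hypothetical.

    ```python
    from dataclasses import dataclass
    from datetime import datetime

    # Illustrative scoring bands only -- not the EWS table used in the paper.
    def score_band(value, bands):
        """Return the score of the first (low, high, score) band containing value."""
        for low, high, score in bands:
            if low <= value < high:
                return score
        return 3  # outside all listed bands: highest score

    RESP_BANDS = [(12, 21, 0), (9, 12, 1), (21, 25, 2)]
    PULSE_BANDS = [(51, 91, 0), (41, 51, 1), (91, 111, 1), (111, 131, 2)]

    @dataclass
    class Observation:
        time: datetime
        respiration: float
        pulse: float

    def early_warning_score(obs: Observation) -> int:
        return score_band(obs.respiration, RESP_BANDS) + score_band(obs.pulse, PULSE_BANDS)

    # Build a timeline that staff can align with journal entries extracted by NLP.
    observations = [
        Observation(datetime(2016, 3, 1, 8), respiration=16, pulse=82),
        Observation(datetime(2016, 3, 1, 12), respiration=23, pulse=115),
    ]
    for obs in observations:
        print(obs.time.isoformat(), "EWS =", early_warning_score(obs))
    ```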

  13. The Differences between Chinese and American Language and Culture and Its Implications for College Language Teaching and Learning

    Institute of Scientific and Technical Information of China (English)

    王斌花

    2013-01-01

    Chinese and American language and culture differ from each other in five ways: hypotactic vs. paratactic language, analytical vs. synthetic thinking, direct vs. indirect thinking, individualism vs. collectivism, and an ethics-based vs. a legislation-based orientation. Their implications for college language teaching and learning are worth our attention.

  14. Language and modality: Effects of the use of space in the agreement system of lengua de signos española (Spanish Sign Language)

    NARCIS (Netherlands)

    Costello, B.D.N.

    2016-01-01

    This thesis examines agreement in Spanish Sign Language (lengua de signos española - LSE) and provides a comprehensive description of the agreement mechanisms available to the language based on data collected from LSE signers from the Basque Country. This description makes it possible to compare agreement…

  15. Establishment and Study of Sign Language Video Library

    Institute of Scientific and Technical Information of China (English)

    杨炼; 钟鹏; 郑祖明; 韩梅; 李凯

    2015-01-01

    Full Text Available The sign language video database is based on the textbook "Computer Professional Sign Language" and combines video clipping technology with database technology to provide data support for research on the standardization of sign language teaching for the computer major. Taking "Computer Professional Sign Language", compiled by the China Disabled Persons' Federation (CDPF), as its foundation, a computer professional sign language video database is established. The project combines video clipping and database technology to provide data support for the standardization of computer professional sign language teaching, to guide sign language teaching in higher engineering education for deaf students, and to improve the quality of classroom sign language teaching. It can also provide video data support for research on a computer sign language corpus.
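    A video library of this kind boils down to keying clipped sign videos to the terms they illustrate. The following sqlite3 sketch is purely illustrative; the table and field names are hypothetical and not taken from the project described above.

    ```python
    import sqlite3

    # Hypothetical schema: one row per clipped sign video, keyed by the
    # computer-science term it illustrates.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE sign_clip (
            id INTEGER PRIMARY KEY,
            term TEXT NOT NULL,          -- computer-science term, e.g. 'database'
            term_chinese TEXT,           -- original term from the CDPF textbook
            video_path TEXT NOT NULL,    -- location of the clipped video file
            duration_s REAL
        )
    """)
    conn.execute(
        "INSERT INTO sign_clip (term, term_chinese, video_path, duration_s) VALUES (?, ?, ?, ?)",
        ("database", "数据库", "clips/database.mp4", 3.2),
    )
    for row in conn.execute("SELECT term, video_path FROM sign_clip"):
        print(row)
    ```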

  16. Recognition of sign language with an inertial sensor-based data glove.

    Science.gov (United States)

    Kim, Kyung-Won; Lee, Mi-So; Soon, Bo-Ram; Ryu, Mun-Ho; Kim, Je-Nam

    2015-01-01

    Communication between people with normal hearing and people with hearing impairment is difficult. Recently, a variety of studies on sign language recognition have presented benefits from the development of information technology. This study presents a sign language recognition system using a data glove composed of 3-axis accelerometers, magnetometers, and gyroscopes. The data obtained by the data glove are transmitted to a host application (implemented as a Windows program on a PC). Next, the data are converted into angle data, and the angle information is displayed on the host application and verified by outputting three-dimensional models to the display. An experiment was performed with five subjects, three females and two males, and a performance set comprising the numbers one to nine was repeated five times. The system achieves a 99.26% movement detection rate and approximately a 98% recognition rate for each finger's state. The proposed system is expected to be more portable and useful when the algorithm is applied to smartphone applications for use in situations such as emergencies.
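    Converting raw glove data into angle data can be illustrated with the usual static-tilt approximation from a 3-axis accelerometer reading; the full fusion of accelerometer, gyroscope and magnetometer data used in the paper is not reproduced here, and the sample reading is invented.

    ```python
    import math

    def tilt_angles(ax, ay, az):
        """Convert a 3-axis accelerometer reading (in g) into pitch and roll in degrees.

        A common static-tilt approximation: pitch from the x-axis component,
        roll from the y/z components.
        """
        pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
        roll = math.degrees(math.atan2(ay, az))
        return pitch, roll

    # Example: one finger-segment sensor held almost flat (synthetic reading).
    print(tilt_angles(0.02, 0.10, 0.99))
    ```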

  17. Synthesis of image sequences for Korean sign language using 3D shape model

    Science.gov (United States)

    Hong, Mun-Ho; Choi, Chang-Seok; Kim, Chang-Seok; Jeon, Joon-Hyeon

    1995-05-01

    This paper proposes a method for providing information to, and enabling communication with, deaf people. Deaf people communicate with others by means of sign language, but most hearing people are unfamiliar with it. The proposed method makes it possible to convert text data into the corresponding image sequences for Korean sign language (KSL). Using a general 3D shape model of the upper body leads to generating the 3D motions of KSL. It is necessary to construct the general 3D shape model with the anatomical structure of the human body in mind. To obtain a personal 3D shape model, the general model is adjusted to personal base images. Image synthesis for KSL consists of deforming a personal 3D shape model and texture-mapping the personal images onto the deformed model. The 3D motions for KSL comprise facial expressions and the 3D movements of the head, trunk, arms and hands, and are parameterized for easy deformation of the model. These motion parameters of the upper body are extracted from a skilled signer's motion for each KSL sign and are stored in a database. Editing the parameters according to the input text data generates the image sequences of 3D motions.
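    Generating a motion sequence from stored per-sign parameters can be sketched as keyframe lookup plus interpolation. The parameter store, sign names and frame counts below are invented for illustration and do not reflect the paper's actual parameterization.

    ```python
    import numpy as np

    # Hypothetical parameter store: each KSL sign maps to keyframes of joint
    # angles (degrees) captured from a skilled signer.
    MOTION_DB = {
        "hello":  np.array([[0, 10, 20], [30, 40, 20], [0, 10, 20]], dtype=float),
        "friend": np.array([[5, 0, 0], [45, 30, 10]], dtype=float),
    }

    def synthesize_parameters(words, frames_per_transition=10):
        """Concatenate per-sign keyframes and linearly interpolate between them."""
        keyframes = np.concatenate([MOTION_DB[w] for w in words])
        sequence = []
        for a, b in zip(keyframes[:-1], keyframes[1:]):
            for t in np.linspace(0.0, 1.0, frames_per_transition, endpoint=False):
                sequence.append((1 - t) * a + t * b)
        sequence.append(keyframes[-1])
        return np.array(sequence)

    params = synthesize_parameters(["hello", "friend"])
    print(params.shape)  # (number of frames, number of joint parameters)
    ```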

  18. How Do Typically Developing Deaf Children and Deaf Children with Autism Spectrum Disorder Use the Face When Comprehending Emotional Facial Expressions in British Sign Language?

    Science.gov (United States)

    Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John

    2014-01-01

    Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their…

  19. Native American Language Education as Policy-in-Practice: An Interpretative Policy Analysis of the Native American Languages Act of 1990/1992

    Science.gov (United States)

    Warhol, Larisa

    2011-01-01

    This paper reports on findings from an interpretive policy analysis of the development and impacts of landmark federal legislation in support of Native American languages: the 1990/1992 Native American Languages Act (NALA). Overturning more than two centuries of federal Indian policy, NALA established the federal role in preserving and protecting…

  20. A common neural system is activated in hearing non-signers to process French sign language and spoken French.

    Science.gov (United States)

    Courtin, Cyril; Jobard, Gael; Vigneau, Mathieu; Beaucousin, Virginie; Razafimandimby, Annick; Hervé, Pierre-Yves; Mellet, Emmanuel; Zago, Laure; Petit, Laurent; Mazoyer, Bernard; Tzourio-Mazoyer, Nathalie

    2011-01-15

    We used functional magnetic resonance imaging to investigate the areas activated by signed narratives in non-signing subjects naïve to sign language (SL) and compared this activation to that obtained when they heard speech in their mother tongue. A subset of left hemisphere (LH) language areas activated when participants watched an audio-visual narrative in their mother tongue was activated when they observed a signed narrative. The inferior frontal (IFG) and precentral (Prec) gyri, the posterior parts of the planum temporale (pPT) and of the superior temporal sulcus (pSTS), and the occipito-temporal junction (OTJ) were activated by both languages. The activity of these regions was not related to the presence of communicative intent because no such changes were observed when the non-signers watched a muted video of a spoken narrative. Recruitment was also not triggered by the linguistic structure of SL, because the areas, except pPT, were not activated when subjects listened to an unknown spoken language. The comparison of brain reactivity for spoken and sign languages shows that SL has a special status in the brain compared to speech; in contrast to an unknown oral language, the neural correlates of SL overlap LH speech comprehension areas in non-signers. These results support the idea that strong relationships exist between areas involved in human action observation and language, suggesting that the observation of hand gestures has shaped the lexico-semantic language areas, as proposed by the motor theory of speech. As a whole, the present results support the theory of a gestural origin of language.