WorldWideScience

Sample records for technology sign language

  1. [Information technology in learning sign language].

    Science.gov (United States)

    Hernández, Cesar; Pulido, Jose L; Arias, Jorge E

    2015-01-01

To develop a technological tool that improves the initial learning of sign language in hearing impaired children. The development of this research was conducted in three phases: requirements gathering, design and development of the proposed device, and validation and evaluation of the device. Through the use of information technology and with the advice of special education professionals, we were able to develop an electronic device that facilitates the learning of sign language in deaf children. It consists mainly of a graphic touch screen, a voice synthesizer, and a voice recognition system. Validation was performed with deaf children at the Filadelfia School in the city of Bogotá. A learning methodology was established that improves learning times through a small, portable, lightweight, and educational technological prototype. Tests showed the effectiveness of this prototype, achieving a 32% reduction in the initial learning time for sign language in deaf children.

  2. The Effect of New Technologies on Sign Language Research

    Science.gov (United States)

    Lucas, Ceil; Mirus, Gene; Palmer, Jeffrey Levi; Roessler, Nicholas James; Frost, Adam

    2013-01-01

    This paper first reviews the fairly established ways of collecting sign language data. It then discusses the new technologies available and their impact on sign language research, both in terms of how data is collected and what new kinds of data are emerging as a result of technology. New data collection methods and new kinds of data are…

  3. Technology to Support Sign Language for Students with Disabilities

    Science.gov (United States)

    Donne, Vicki

    2013-01-01

    This systematic review of the literature provides a synthesis of research on the use of technology to support sign language. Background research on the use of sign language with students who are deaf/hard of hearing and students with low incidence disabilities, such as autism, intellectual disability, or communication disorders is provided. The…

  4. Signed languages and globalization

    NARCIS (Netherlands)

    Hiddinga, A.; Crasborn, O.

    2011-01-01

    Deaf people who form part of a Deaf community communicate using a shared sign language. When meeting people from another language community, they can fall back on a flexible and highly context-dependent form of communication called international sign, in which shared elements from their own sign

  5. Standardization of Sign Languages

    Science.gov (United States)

    Adam, Robert

    2015-01-01

    Over the years attempts have been made to standardize sign languages. This form of language planning has been tackled by a variety of agents, most notably teachers of Deaf students, social workers, government agencies, and occasionally groups of Deaf people themselves. Their efforts have most often involved the development of sign language books…

  6. Use of Information and Communication Technologies in Sign Language Test Development: Results of an International Survey

    Science.gov (United States)

    Haug, Tobias

    2015-01-01

    Sign language test development is a relatively new field within sign linguistics, motivated by the practical need for assessment instruments to evaluate language development in different groups of learners (L1, L2). Due to the lack of research on the structure and acquisition of many sign languages, developing an assessment instrument poses…

  7. Name signs in Danish Sign Language

    DEFF Research Database (Denmark)

    Bakken Jepsen, Julie

    2018-01-01

A name sign is a personal sign assigned to deaf, hearing impaired and hearing persons who enter the deaf community. The mouth action accompanying the sign reproduces all or part of the formal first name that the person has received by baptism or naming. Name signs can be compared to nicknames in spoken languages, where a person working as a blacksmith might be referred to by his friends as ‘The Blacksmith’ (‘Here comes the Blacksmith!’) instead of by his first name. Name signs are found not only in Danish Sign Language (DSL) but in most, if not all, sign languages studied to date. This article provides examples of the creativity of the users of Danish Sign Language, including some of the processes in the use of metaphors, visual motivation and influence from Danish when name signs are created.

  8. Sign language: an international handbook

    NARCIS (Netherlands)

    Pfau, R.; Steinbach, M.; Woll, B.

    2012-01-01

    Sign language linguists show here that all the questions relevant to the linguistic investigation of spoken languages can be asked about sign languages. Conversely, questions that sign language linguists consider - even if spoken language researchers have not asked them yet - should also be asked of

  9. Sign Language Web Pages

    Science.gov (United States)

    Fels, Deborah I.; Richards, Jan; Hardman, Jim; Lee, Daniel G.

    2006-01-01

The World Wide Web has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, accessing the Web can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The…

  10. Flemish Sign Language Standardisation

    Science.gov (United States)

    Van Herreweghe, Mieke; Vermeerbergen, Myriam

    2009-01-01

    In 1997, the Flemish Deaf community officially rejected standardisation of Flemish Sign Language. It was a bold choice, which at the time was not in line with some of the decisions taken in the neighbouring countries. In this article, we shall discuss the choices the Flemish Deaf community has made in this respect and explore why the Flemish Deaf…

  11. Malaysian sign language dataset for automatic sign language ...

    African Journals Online (AJOL)

Journal of Fundamental and Applied Sciences. ... SL recognition system based on the Malaysian Sign Language (MSL). Implementation results are described. Keywords: sign language; pattern classification; database.

  12. Sign language typology: The contribution of rural sign languages

    NARCIS (Netherlands)

    de Vos, C.; Pfau, R.

    2015-01-01

    Since the 1990s, the field of sign language typology has shown that sign languages exhibit typological variation at all relevant levels of linguistic description. These initial typological comparisons were heavily skewed toward the urban sign languages of developed countries, mostly in the Western

  13. Sign language comprehension: the case of Spanish sign language.

    Science.gov (United States)

    Rodríguez Ortiz, I R

    2008-01-01

This study aims to answer the question of how much deaf individuals really understand of Spanish Sign Language interpreting. The study sample included 36 deaf people (deafness ranging from severe to profound; variety depending on the age at which they learned sign language) and 36 hearing people with a good knowledge of sign language (most were interpreters). Sign language comprehension was assessed using passages at secondary-school level. After being exposed to the passages, the participants had to tell what they had understood about them, answer a set of related questions, and offer a title for the passage. Sign language comprehension by deaf participants was quite acceptable but not as good as that by hearing signers who, unlike deaf participants, were not only late learners of sign language as a second language but had also learned it through formal training.

  14. Sign language perception research for improving automatic sign language recognition

    NARCIS (Netherlands)

    Ten Holt, G.A.; Arendsen, J.; De Ridder, H.; Van Doorn, A.J.; Reinders, M.J.T.; Hendriks, E.A.

    2009-01-01

    Current automatic sign language recognition (ASLR) seldom uses perceptual knowledge about the recognition of sign language. Using such knowledge can improve ASLR because it can give an indication which elements or phases of a sign are important for its meaning. Also, the current generation of

  15. Inuit Sign Language: a contribution to sign language typology

    NARCIS (Netherlands)

    Schuit, J.; Baker, A.; Pfau, R.

    2011-01-01

    Sign language typology is a fairly new research field and typological classifications have yet to be established. For spoken languages, these classifications are generally based on typological parameters; it would thus be desirable to establish these for sign languages. In this paper, different

  16. Planning Sign Languages: Promoting Hearing Hegemony? Conceptualizing Sign Language Standardization

    Science.gov (United States)

    Eichmann, Hanna

    2009-01-01

    In light of the absence of a codified standard variety in British Sign Language and German Sign Language ("Deutsche Gebardensprache") there have been repeated calls for the standardization of both languages primarily from outside the Deaf community. The paper is based on a recent grounded theory study which explored perspectives on sign…

  17. Sociolinguistic Typology and Sign Languages

    OpenAIRE

    Adam Schembri; Jordan Fenlon; Kearsy Cormier; Trevor Johnston

    2018-01-01

    This paper examines the possible relationship between proposed social determinants of morphological ‘complexity’ and how this contributes to linguistic diversity, specifically via the typological nature of the sign languages of deaf communities. We sketch how the notion of morphological complexity, as defined by Trudgill (2011), applies to sign languages. Using these criteria, sign languages appear to be languages with low to moderate levels of morphological complexity. This may partly reflec...

  18. Kinship in Mongolian Sign Language

    Science.gov (United States)

    Geer, Leah

    2011-01-01

    Information and research on Mongolian Sign Language is scant. To date, only one dictionary is available in the United States (Badnaa and Boll 1995), and even that dictionary presents only a subset of the signs employed in Mongolia. The present study describes the kinship system used in Mongolian Sign Language (MSL) based on data elicited from…

  20. Sociolinguistic Typology and Sign Languages

    Science.gov (United States)

    Schembri, Adam; Fenlon, Jordan; Cormier, Kearsy; Johnston, Trevor

    2018-01-01

    This paper examines the possible relationship between proposed social determinants of morphological ‘complexity’ and how this contributes to linguistic diversity, specifically via the typological nature of the sign languages of deaf communities. We sketch how the notion of morphological complexity, as defined by Trudgill (2011), applies to sign languages. Using these criteria, sign languages appear to be languages with low to moderate levels of morphological complexity. This may partly reflect the influence of key social characteristics of communities on the typological nature of languages. Although many deaf communities are relatively small and may involve dense social networks (both social characteristics that Trudgill claimed may lend themselves to morphological ‘complexification’), the picture is complicated by the highly variable nature of the sign language acquisition for most deaf people, and the ongoing contact between native signers, hearing non-native signers, and those deaf individuals who only acquire sign languages in later childhood and early adulthood. These are all factors that may work against the emergence of morphological complexification. The relationship between linguistic typology and these key social factors may lead to a better understanding of the nature of sign language grammar. This perspective stands in contrast to other work where sign languages are sometimes presented as having complex morphology despite being young languages (e.g., Aronoff et al., 2005); in some descriptions, the social determinants of morphological complexity have not received much attention, nor has the notion of complexity itself been specifically explored. PMID:29515506

  1. Sociolinguistic Typology and Sign Languages

    Directory of Open Access Journals (Sweden)

    Adam Schembri

    2018-02-01

This paper examines the possible relationship between proposed social determinants of morphological ‘complexity’ and how this contributes to linguistic diversity, specifically via the typological nature of the sign languages of deaf communities. We sketch how the notion of morphological complexity, as defined by Trudgill (2011), applies to sign languages. Using these criteria, sign languages appear to be languages with low to moderate levels of morphological complexity. This may partly reflect the influence of key social characteristics of communities on the typological nature of languages. Although many deaf communities are relatively small and may involve dense social networks (both social characteristics that Trudgill claimed may lend themselves to morphological ‘complexification’), the picture is complicated by the highly variable nature of the sign language acquisition for most deaf people, and the ongoing contact between native signers, hearing non-native signers, and those deaf individuals who only acquire sign languages in later childhood and early adulthood. These are all factors that may work against the emergence of morphological complexification. The relationship between linguistic typology and these key social factors may lead to a better understanding of the nature of sign language grammar. This perspective stands in contrast to other work where sign languages are sometimes presented as having complex morphology despite being young languages (e.g., Aronoff et al., 2005); in some descriptions, the social determinants of morphological complexity have not received much attention, nor has the notion of complexity itself been specifically explored.

  2. Issues in Sign Language Lexicography

    DEFF Research Database (Denmark)

    Zwitserlood, Inge; Kristoffersen, Jette Hedegaard; Troelsgård, Thomas

    2013-01-01

Sign language lexicography has thus far been a relatively obscure area in the world of lexicography. Therefore, this article will contain background information on signed languages and the communities in which they are used, on the lexicography of sign languages, the situation in the Netherlands as well...

  3. The Danish Sign Language Dictionary

    DEFF Research Database (Denmark)

    Kristoffersen, Jette Hedegaard; Troelsgård, Thomas

    2010-01-01

The entries of The Danish Sign Language Dictionary have four sections. Entry header: in this section the sign headword is shown as a photo and a gloss; the first occurring location and handshape of the sign are shown as icons. Video window: by default the base form of the sign headword...... forms of the sign (only for classifier entries). In addition to this, frequent co-occurrences with the sign are shown in this section. The signs in The Danish Sign Language Dictionary can be looked up through: Handshape: particular handshapes for the active and the passive hand can be specified...... to find signs that are not themselves lemmas in the dictionary, but appear in example sentences. Topic: topics can be chosen as search criteria from a list of 70 topics....
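
The search criteria described for the dictionary (handshape of the active and passive hand, topic) suggest a simple filtered lookup. A minimal sketch of such a query follows; the entries, glosses, and field names below are invented for illustration and are not taken from the actual dictionary:

```python
# Hypothetical dictionary entries: gloss, handshapes, and topic tags.
ENTRIES = [
    {"gloss": "HOUSE", "active_hand": "flat",  "passive_hand": "flat", "topics": {"building"}},
    {"gloss": "EAT",   "active_hand": "pinch", "passive_hand": None,   "topics": {"food"}},
    {"gloss": "BREAD", "active_hand": "flat",  "passive_hand": "fist", "topics": {"food"}},
]

def search(active_hand=None, passive_hand=None, topic=None):
    """Return glosses of entries matching every criterion that was supplied."""
    hits = []
    for entry in ENTRIES:
        if active_hand is not None and entry["active_hand"] != active_hand:
            continue
        if passive_hand is not None and entry["passive_hand"] != passive_hand:
            continue
        if topic is not None and topic not in entry["topics"]:
            continue
        hits.append(entry["gloss"])
    return hits

print(search(active_hand="flat", topic="food"))  # ['BREAD']
```

A real sign dictionary would index phonological parameters (location, movement, orientation) as well, but the combinable-criteria pattern is the same.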

  4. Quantifiers in Russian Sign Language

    NARCIS (Netherlands)

    Kimmelman, V.; Paperno, D.; Keenan, E.L.

    2017-01-01

    After presenting some basic genetic, historical and typological information about Russian Sign Language, this chapter outlines the quantification patterns it expresses. It illustrates various semantic types of quantifiers, such as generalized existential, generalized universal, proportional,

  5. American Sign Language

    Science.gov (United States)

    ... combined with facial expressions and postures of the body. It is the primary language of many North Americans who are deaf and ... their eyebrows, widening their eyes, and tilting their bodies forward. Just as with other languages, specific ways of expressing ideas in ASL vary ...

  6. Sign Languages of the World

    DEFF Research Database (Denmark)

    This handbook provides information on some 38 sign languages, including basic facts about each of the languages, structural aspects, history and culture of the Deaf communities, and history of research. The papers are all original, and each has been specifically written for the volume by an expert...

  7. Gesture, sign, and language: The coming of age of sign language and gesture studies.

    Science.gov (United States)

    Goldin-Meadow, Susan; Brentari, Diane

    2017-01-01

    How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.

  8. Sign language for the information society: an ICT roadmap for South African Sign Language

    CSIR Research Space (South Africa)

    Olivrin, G

    2008-11-01

... of work made in SASL. There is currently no collection of the cultural and linguistic heritage of SASL. Public signage and localisation: provision for SASL-specific sign names of places, people, companies and brands, as well as the localisation... upgrading the aging data and voice infrastructures for visual grade technologies, new usages of technologies will emerge in public signage and communications, in advertising and for visual languages such as SASL. Research and development in Sign Language...

  9. ALPHABET SIGN LANGUAGE RECOGNITION USING LEAP MOTION TECHNOLOGY AND RULE BASED BACKPROPAGATION-GENETIC ALGORITHM NEURAL NETWORK (RBBPGANN)

    Directory of Open Access Journals (Sweden)

    Wijayanti Nurul Khotimah

    2017-01-01

Sign language recognition is used to help people with normal hearing communicate effectively with the deaf and hearing-impaired. Based on a survey conducted by the Multi-Center Study in Southeast Asia, Indonesia ranked fourth in the number of patients with hearing disability (4.6%). Therefore, sign language recognition is important. Some research has been conducted in this field, and many types of neural network have been used to recognize various sign languages, but their performance needs to be improved. This work focuses on the ASL (Alphabet Sign Language) in SIBI (Sign System of Indonesian Language), which uses one hand and 26 gestures. Thirty-four features were extracted using Leap Motion. A new method, Rule-Based Backpropagation Genetic Algorithm Neural Network (RB-BPGANN), a combination of rules with a backpropagation neural network, was then used to recognize these sign languages. In experiments, the proposed application recognized sign language with up to 93.8% accuracy. It performed very well on a large multiclass problem and can be a solution to the overfitting problem in neural network algorithms.
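
The record above reports 34 Leap Motion features classified into 26 alphabet-sign classes. As a rough illustration of the backpropagation component only (the rule-based and genetic-algorithm parts of RB-BPGANN are not reproduced here, and the data below is synthetic, not the SIBI dataset), a minimal two-layer softmax classifier might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 34   # feature count reported for the Leap Motion data
N_CLASSES = 26    # one class per alphabet sign
HIDDEN = 64       # hidden-layer width: an arbitrary choice, not from the paper

W1 = rng.normal(0.0, 0.1, (N_FEATURES, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_CLASSES))
b2 = np.zeros(N_CLASSES)

def forward(X):
    """Two-layer network: tanh hidden layer, softmax output."""
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)

def train_step(X, Y, lr=0.5):
    """One full-batch gradient step on the cross-entropy loss."""
    global W1, b1, W2, b2
    h, p = forward(X)
    g_logits = (p - Y) / len(X)               # softmax + cross-entropy gradient
    gW2, gb2 = h.T @ g_logits, g_logits.sum(axis=0)
    g_h = (g_logits @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
    gW1, gb1 = X.T @ g_h, g_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Synthetic stand-in data: one well-separated Gaussian cluster per class.
means = rng.normal(0.0, 2.0, (N_CLASSES, N_FEATURES))
y = np.repeat(np.arange(N_CLASSES), 20)
X = means[y] + rng.normal(0.0, 1.0, (len(y), N_FEATURES))
Y = np.eye(N_CLASSES)[y]

for _ in range(600):
    train_step(X, Y)

_, p = forward(X)
accuracy = (p.argmax(axis=1) == y).mean()
```

On separable synthetic clusters like these, plain backpropagation already learns the mapping; the paper's contribution lies in the rule layer and genetic-algorithm tuning omitted from this sketch.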

  10. A Sign Language Screen Reader for Deaf

    Science.gov (United States)

    El Ghoul, Oussama; Jemni, Mohamed

Screen reader technology first appeared as a way for blind people and people with reading difficulties to use computers and access digital information. Until now, this technology has been exploited mainly to help the blind community. During our work with deaf people, we noticed that a screen reader can facilitate the manipulation of computers and the reading of textual information. In this paper, we propose a novel screen reader dedicated to deaf users. The output of the reader is a visual translation of the text into sign language. The screen reader is composed of two essential modules: the first is designed to capture the activities of users (mouse and keyboard events); for this purpose, we adopted the Microsoft MSAA application programming interfaces. The second module, which in classical screen readers is a text-to-speech (TTS) engine, is replaced by a novel text-to-sign (TTSign) engine. This module converts text into sign language animation based on avatar technology.
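
The second module described above, a text-to-sign engine, can be sketched at its simplest as a lookup from words to avatar animation clips, with per-letter fingerspelling as a fallback for out-of-lexicon words. All names and clip IDs below are invented for illustration, and the MSAA event-capture side is omitted:

```python
# Hypothetical sign-clip lexicon: word -> avatar animation clip ID.
SIGN_LEXICON = {"hello": "clip_hello", "computer": "clip_computer"}

def text_to_sign(text):
    """Convert captured screen text into a sequence of animation clip IDs.
    Words missing from the lexicon fall back to per-letter fingerspelling."""
    clips = []
    for word in text.lower().split():
        word = word.strip(".,!?")
        if word in SIGN_LEXICON:
            clips.append(SIGN_LEXICON[word])
        else:
            clips.extend(f"fingerspell_{ch}" for ch in word if ch.isalpha())
    return clips

print(text_to_sign("Hello, computer!"))  # ['clip_hello', 'clip_computer']
```

A production engine would also handle sign-language grammar (word order, classifiers, non-manual markers) rather than signing word by word; this sketch only shows the lexicon-plus-fallback control flow.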

  11. Compiling a Sign Language Dictionary

    DEFF Research Database (Denmark)

    Kristoffersen, Jette Hedegaard; Troelsgård, Thomas

    2010-01-01

As we began working on the Danish Sign Language (DTS) Dictionary, we soon realised the truth in the statement that a lexicographer has to deal with problems within almost any linguistic discipline. Most of these problems come down to establishing simple rules, rules that can easily be applied every... "– or are they homonyms?" and so on. Very often such questions demand further research and can't be answered sufficiently through a simple standard formula. Therefore lexicographic work often seems like an endless series of compromises. Another source of compromise arises when you set out to decide which information... this dilemma, as we see DTS learners and teachers as well as native DTS signers as our target users. In the following we will focus on four problem areas with particular relevance for the sign language lexicographer: sign representation; spoken language equivalents and mouth movements; example sentences; partial...

  12. Numeral Incorporation in Japanese Sign Language

    Science.gov (United States)

    Ktejik, Mish

    2013-01-01

    This article explores the morphological process of numeral incorporation in Japanese Sign Language. Numeral incorporation is defined and the available research on numeral incorporation in signed language is discussed. The numeral signs in Japanese Sign Language are then introduced and followed by an explanation of the numeral morphemes which are…

  13. The Legal Recognition of Sign Languages

    Science.gov (United States)

    De Meulder, Maartje

    2015-01-01

    This article provides an analytical overview of the different types of explicit legal recognition of sign languages. Five categories are distinguished: constitutional recognition, recognition by means of general language legislation, recognition by means of a sign language law or act, recognition by means of a sign language law or act including…

  14. On the System of Person-Denoting Signs in Estonian Sign Language: Estonian Name Signs

    Science.gov (United States)

    Paales, Liina

    2010-01-01

    This article discusses Estonian personal name signs. According to study there are four personal name sign categories in Estonian Sign Language: (1) arbitrary name signs; (2) descriptive name signs; (3) initialized-descriptive name signs; (4) loan/borrowed name signs. Mostly there are represented descriptive and borrowed personal name signs among…

  15. Dictionaries of African Sign Languages: An Overview

    Science.gov (United States)

    Schmaling, Constanze H.

    2012-01-01

    This article gives an overview of dictionaries of African sign languages that have been published to date most of which have not been widely distributed. After an introduction into the field of sign language lexicography and a discussion of some of the obstacles that authors of sign language dictionaries face in general, I will show problems…

  16. Eye Gaze in Creative Sign Language

    Science.gov (United States)

    Kaneko, Michiko; Mesch, Johanna

    2013-01-01

    This article discusses the role of eye gaze in creative sign language. Because eye gaze conveys various types of linguistic and poetic information, it is an intrinsic part of sign language linguistics in general and of creative signing in particular. We discuss various functions of eye gaze in poetic signing and propose a classification of gaze…

  17. Language Policy and Planning: The Case of Italian Sign Language

    Science.gov (United States)

    Geraci, Carlo

    2012-01-01

Italian Sign Language (LIS) is the name of the language used by the Italian Deaf community. The acronym LIS derives from Lingua italiana dei segni ("Italian language of signs"), although nowadays Italians refer to LIS as Lingua dei segni italiana, reflecting the more appropriate phrasing "Italian sign language." Historically,…

  18. Awareness of Deaf Sign Language and Gang Signs.

    Science.gov (United States)

    Smith, Cynthia; Morgan, Robert L.

    There have been increasing incidents of innocent people who use American Sign Language (ASL) or another form of sign language being victimized by gang violence due to misinterpretation of ASL hand formations. ASL is familiar to learners with a variety of disabilities, particularly those in the deaf community. The problem is that gang members have…

  19. Automatic sign language recognition inspired by human sign perception

    NARCIS (Netherlands)

    Ten Holt, G.A.

    2010-01-01

    Automatic sign language recognition is a relatively new field of research (since ca. 1990). Its objectives are to automatically analyze sign language utterances. There are several issues within the research area that merit investigation: how to capture the utterances (cameras, magnetic sensors,

  20. Signs of the arctic: Typological aspects of Inuit Sign Language

    NARCIS (Netherlands)

    Schuit, J.M.

    2014-01-01

    In this thesis, the native sign language used by deaf Inuit people is described. Inuit Sign Language (IUR) is used by less than 40 people as their sole means of communication, and is therefore highly endangered. Apart from the description of IUR as such, an additional goal is to contribute to the

  1. Sign Lowering and Phonetic Reduction in American Sign Language.

    Science.gov (United States)

    Tyrone, Martha E; Mauk, Claude E

    2010-04-01

    This study examines sign lowering as a form of phonetic reduction in American Sign Language. Phonetic reduction occurs in the course of normal language production, when instead of producing a carefully articulated form of a word, the language user produces a less clearly articulated form. When signs are produced in context by native signers, they often differ from the citation forms of signs. In some cases, phonetic reduction is manifested as a sign being produced at a lower location than in the citation form. Sign lowering has been documented previously, but this is the first study to examine it in phonetic detail. The data presented here are tokens of the sign WONDER, as produced by six native signers, in two phonetic contexts and at three signing rates, which were captured by optoelectronic motion capture. The results indicate that sign lowering occurred for all signers, according to the factors we manipulated. Sign production was affected by several phonetic factors that also influence speech production, namely, production rate, phonetic context, and position within an utterance. In addition, we have discovered interesting variations in sign production, which could underlie distinctions in signing style, analogous to accent or voice quality in speech.
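
The measurement behind sign lowering can be sketched very simply: given the vertical trajectory of a wrist marker for one token of a sign, compare its mean height across production conditions. The numbers below are invented for illustration and are not the study's data:

```python
import numpy as np

def mean_sign_height(z_trajectory):
    """Mean vertical (z) position of a wrist marker over one sign token, in mm."""
    return float(np.mean(z_trajectory))

# Hypothetical motion-capture samples of one sign token at two signing rates.
slow_token = np.array([1210.0, 1225.0, 1230.0, 1228.0, 1215.0])  # careful citation-like form
fast_token = np.array([1150.0, 1160.0, 1165.0, 1158.0, 1149.0])  # phonetically reduced form

# Positive difference = the fast token was produced lower than the slow one.
lowering_mm = mean_sign_height(slow_token) - mean_sign_height(fast_token)
```

A real analysis would compare many tokens per signer and condition statistically (as the study does across six signers, two contexts, and three rates); this sketch only shows the per-token quantity being compared.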

  2. Information structure in Russian Sign Language and Sign Language of the Netherlands

    NARCIS (Netherlands)

    Kimmelman, V.

    2014-01-01

    This dissertation explores Information Structure in two sign languages: Sign Language of the Netherlands and Russian Sign Language. Based on corpus data and elicitation tasks we show how topic and focus are expressed in these languages. In particular, we show that topics can be marked syntactically

  3. An electronic dictionary of Danish Sign Language

    DEFF Research Database (Denmark)

    Kristoffersen, Jette Hedegaard; Troelsgård, Thomas

    2008-01-01

    Compiling sign language dictionaries has in the last 15 years changed from most often being simply collecting and presenting signs for a given gloss in the surrounding vocal language to being a complicated lexicographic task including all parts of linguistic analysis, i.e. phonology, phonetics......, morphology, syntax and semantics. In this presentation we will give a short overview of the Danish Sign Language dictionary project. We will further focus on lemma selection and some of the problems connected with lemmatisation....

  4. Visual cortex entrains to sign language.

    Science.gov (United States)

    Brookshire, Geoffrey; Lu, Jenny; Nusbaum, Howard C; Goldin-Meadow, Susan; Casasanto, Daniel

    2017-06-13

Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (<8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language; entrainment is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over the auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to informational peaks in time-varying signals.
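
The two analysis steps described here, a metric for visual change over time followed by a frequency analysis of that signal, can be sketched as follows. This is a simplified stand-in rather than the authors' metric, run on a synthetic "video" instead of ASL footage; note that rectifying the frame difference doubles the apparent oscillation frequency:

```python
import numpy as np

def visual_change(frames):
    """Summed absolute pixel change between consecutive frames:
    a simple stand-in for a 'visual change over time' metric."""
    return np.abs(np.diff(frames, axis=0)).sum(axis=(1, 2))

def dominant_frequency(signal, fps):
    """Frequency (Hz) with the largest spectral magnitude, DC removed."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# Synthetic video: a 16x16 patch whose brightness oscillates at 2 Hz, 30 fps.
fps, seconds = 30, 4
t = np.arange(fps * seconds) / fps
frames = np.sin(2 * np.pi * 2.0 * t)[:, None, None] * np.ones((1, 16, 16))

change = visual_change(frames)
# The absolute value rectifies the difference signal, so a 2 Hz brightness
# oscillation produces a change signal peaking near 4 Hz.
peak = dominant_frequency(change, fps)
```

Testing entrainment would then measure coherence between this change signal and EEG channels, which requires the neural recordings and is omitted here.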

  5. Signed Language Working Memory Capacity of Signed Language Interpreters and Deaf Signers

    Science.gov (United States)

    Wang, Jihong; Napier, Jemina

    2013-01-01

    This study investigated the effects of hearing status and age of signed language acquisition on signed language working memory capacity. Professional Auslan (Australian sign language)/English interpreters (hearing native signers and hearing nonnative signers) and deaf Auslan signers (deaf native signers and deaf nonnative signers) completed an…

  6. A tour in sign language

    CERN Document Server

    François Briard

    2016-01-01

    In early May, CERN welcomed a group of deaf children for a tour of Microcosm and a Fun with Physics demonstration.   On 4 May, around ten children from the Centre pour enfants sourds de Montbrillant (Montbrillant Centre for Deaf Children), a public school funded by the Office médico-pédagogique du canton de Genève, took a guided tour of the Microcosm exhibition and were treated to a Fun with Physics demonstration. The tour guides’ explanations were interpreted into sign language in real time by a professional interpreter who accompanied the children, and the pace and content were adapted to maximise the interaction with the children. This visit demonstrates CERN’s commitment to remaining as widely accessible as possible. To this end, most of CERN’s visit sites offer reduced-mobility access. In the past few months, CERN has also welcomed children suffering from xeroderma pigmentosum (a genetic disorder causing extreme sensitivity...

  7. Historical Development of Hong Kong Sign Language

    Science.gov (United States)

    Sze, Felix; Lo, Connie; Lo, Lisa; Chu, Kenny

    2013-01-01

    This article traces the origins of Hong Kong Sign Language (hereafter HKSL) and its subsequent development in relation to the establishment of Deaf education in Hong Kong after World War II. We begin with a detailed description of the history of Deaf education with a particular focus on the role of sign language in such development. We then…

  8. Research Ethics in Sign Language Communities

    Science.gov (United States)

    Harris, Raychelle; Holmes, Heidi M.; Mertens, Donna M.

    2009-01-01

    Codes of ethics exist for most professional associations whose members do research on, for, or with sign language communities. However, these ethical codes are silent regarding the need to frame research ethics from a cultural standpoint, an issue of particular salience for sign language communities. Scholars who write from the perspective of…

  9. Phonological Awareness for American Sign Language

    Science.gov (United States)

    Corina, David P.; Hafer, Sarah; Welch, Kearnan

    2014-01-01

    This paper examines the concept of phonological awareness (PA) as it relates to the processing of American Sign Language (ASL). We present data from a recently developed test of PA for ASL and examine whether sign language experience impacts the use of metalinguistic routines necessary for completion of our task. Our data show that deaf signers…

  10. Phonological Similarity in American Sign Language.

    Science.gov (United States)

    Hildebrandt, Ursula; Corina, David

    2002-01-01

    Investigates deaf and hearing subjects' ratings of American Sign Language (ASL) signs to assess whether linguistic experience shapes judgments of sign similarity. Findings are consistent with linguistic theories that posit movement and location as core structural elements of syllable structure in ASL. (Author/VWL)

  11. LSE-Sign: A lexical database for Spanish Sign Language.

    Science.gov (United States)

    Gutierrez-Sigut, Eva; Costello, Brendan; Baus, Cristina; Carreiras, Manuel

    2016-03-01

    The LSE-Sign database is a free online tool for selecting Spanish Sign Language stimulus materials to be used in experiments. It contains 2,400 individual signs taken from a recent standardized LSE dictionary, and a further 2,700 related nonsigns. Each entry is coded for a wide range of grammatical, phonological, and articulatory information, including handshape, location, movement, and non-manual elements. The database is accessible via a graphically based search facility which is highly flexible both in terms of the search options available and the way the results are displayed. LSE-Sign is available at the following website: http://www.bcbl.eu/databases/lse/.

  12. Spoken Language Activation Alters Subsequent Sign Language Activation in L2 Learners of American Sign Language

    Science.gov (United States)

    Williams, Joshua T.; Newman, Sharlene D.

    2017-01-01

    A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…

  13. Syntactic priming in American Sign Language.

    Science.gov (United States)

    Hall, Matthew L; Ferreira, Victor S; Mayberry, Rachel I

    2015-01-01

    Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.

  14. Syntactic priming in American Sign Language.

    Directory of Open Access Journals (Sweden)

    Matthew L Hall

    Full Text Available Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.

  15. Adaptation of a Vocabulary Test from British Sign Language to American Sign Language

    Science.gov (United States)

    Mann, Wolfgang; Roy, Penny; Morgan, Gary

    2016-01-01

    This study describes the adaptation process of a vocabulary knowledge test for British Sign Language (BSL) into American Sign Language (ASL) and presents results from the first round of pilot testing with 20 deaf native ASL signers. The web-based test assesses the strength of deaf children's vocabulary knowledge by means of different mappings of…

  16. Approaching Sign Language Test Construction: Adaptation of the German Sign Language Receptive Skills Test

    Science.gov (United States)

    Haug, Tobias

    2011-01-01

    There is a current need for reliable and valid test instruments in different countries in order to monitor deaf children's sign language acquisition. However, very few tests are commercially available that offer strong evidence for their psychometric properties. A German Sign Language (DGS) test focusing on linguistic structures that are acquired…

  17. Repetitions in French Belgian Sign Language (LSFB) and Flemish Sign Language (VGT) narratives and conversations

    OpenAIRE

    Notarrigo, Ingrid; Meurant, Laurence; Van Herreweghe, Mieke; Vermeerbergen, Myriam

    2016-01-01

    Repetition was described in the nineties by a limited number of sign linguists: Vermeerbergen & De Vriendt (1994) looked at a small corpus of VGT data, Fisher & Janis (1990) analysed “verb sandwiches” in ASL and Pinsonneault (1994) “verb echos” in Quebec Sign Language. More recently the same phenomenon has been the focus of research in a growing number of signed languages, including American (Nunes and de Quadros 2008), Hong Kong (Sze 2008), Russian (Shamaro 2008), Polish (Flilipczak and Most...

  18. What sign language creation teaches us about language.

    Science.gov (United States)

    Brentari, Diane; Coppola, Marie

    2013-03-01

    How do languages emerge? What are the necessary ingredients and circumstances that permit new languages to form? Various researchers within the disciplines of primatology, anthropology, psychology, and linguistics have offered different answers to this question depending on their perspective. Language acquisition, language evolution, primate communication, and the study of spoken varieties of pidgin and creoles address these issues, but in this article we describe a relatively new and important area that contributes to our understanding of language creation and emergence. Three types of communication systems that use the hands and body to communicate will be the focus of this article: gesture, homesign systems, and sign languages. The focus of this article is to explain why mapping the path from gesture to homesign to sign language has become an important research topic for understanding language emergence, not only for the field of sign languages, but also for language in general. WIREs Cogn Sci 2013, 4:201-211. doi: 10.1002/wcs.1212 For further resources related to this article, please visit the WIREs website. Copyright © 2012 John Wiley & Sons, Ltd.

  19. Language Policies in Uruguay and Uruguayan Sign Language (LSU)

    Science.gov (United States)

    Behares, Luis Ernesto; Brovetto, Claudia; Crespi, Leonardo Peluso

    2012-01-01

    In the first part of this article the authors consider the policies that apply to Uruguayan Sign Language (Lengua de Senas Uruguaya; hereafter LSU) and the Uruguayan Deaf community within the general framework of language policies in Uruguay. By analyzing them succinctly and as a whole, the authors then explain twenty-first-century innovations.…

  20. The Sign Language Situation in Mali

    Science.gov (United States)

    Nyst, Victoria

    2015-01-01

    This article gives a first overview of the sign language situation in Mali and its capital, Bamako, located in the West African Sahel. Mali is a highly multilingual country with a significant incidence of deafness, for which meningitis appears to be the main cause, coupled with limited access to adequate health care. In comparison to neighboring…

  1. Word Order in Russian Sign Language

    Science.gov (United States)

    Kimmelman, Vadim

    2012-01-01

    In this paper the results of an investigation of word order in Russian Sign Language (RSL) are presented. A small corpus of narratives based on comic strips by nine native signers was analyzed and a picture-description experiment (based on Volterra et al. 1984) was conducted with six native signers. The results are the following: the most frequent…

  2. Structural borrowing: The case of Kenyan Sign Language (KSL) and ...

    African Journals Online (AJOL)

    Kenyan Sign Language (KSL) is a visual gestural language used by members of the deaf community in Kenya. Kiswahili on the other hand is a Bantu language that is used as the national language of Kenya. The two are worlds apart, one being a spoken language and the other a signed language and thus their “… basic ...

  3. Segmentation of British Sign Language (BSL): Mind the gap!

    OpenAIRE

    Orfanidou, E.; McQueen, J.; Adam, R.; Morgan, G.

    2015-01-01

    This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were prec...

  4. Journal of Language, Technology & Entrepreneurship in Africa

    African Journals Online (AJOL)

    Vol 9, No 1 (2018): Journal of Language, Technology & Entrepreneurship in Africa ... TANZANIA: AN ACCOUNT OF THE LANGUAGE OF BILLBOARDS AND SHOP-SIGNS IN DISTRICT HEADQUARTERS ... AJOL African Journals Online.

  5. A Stronger Reason for the Right to Sign Languages

    Science.gov (United States)

    Trovato, Sara

    2013-01-01

    Is the right to sign language only the right to a minority language? Holding a capability (not a disability) approach, and building on the psycholinguistic literature on sign language acquisition, I make the point that this right is of a stronger nature, since only sign languages can guarantee that each deaf child will properly develop the…

  6. The sign language skills classroom observation: a process for describing sign language proficiency in classroom settings.

    Science.gov (United States)

    Reeves, J B; Newell, W; Holcomb, B R; Stinson, M

    2000-10-01

    In collaboration with teachers and students at the National Technical Institute for the Deaf (NTID), the Sign Language Skills Classroom Observation (SLSCO) was designed to provide feedback to teachers on their sign language communication skills in the classroom. In the present article, the impetus and rationale for development of the SLSCO is discussed. Previous studies related to classroom signing and observation methodology are reviewed. The procedure for developing the SLSCO is then described. This procedure included (a) interviews with faculty and students at NTID, (b) identification of linguistic features of sign language important for conveying content to deaf students, (c) development of forms for recording observations of classroom signing, (d) analysis of use of the forms, (e) development of a protocol for conducting the SLSCO, and (f) piloting of the SLSCO in classrooms. The results of use of the SLSCO with NTID faculty during a trial year are summarized.

  7. DIFFERENCES BETWEEN AMERICAN SIGN LANGUAGE (ASL) AND BRITISH SIGN LANGUAGE (BSL)

    Directory of Open Access Journals (Sweden)

    Zora JACHOVA

    2008-06-01

    Full Text Available In the communication of deaf people among themselves and with hearing people there are three basic aspects of interaction: gesture, finger signs and writing. The gesture is a conventionally agreed manner of communication with the help of the hands, accompanied by facial and body mimicry. Gesture and movement pre-existed speech; they first served to mark something and later to emphasize spoken expression. Stokoe was the first linguist to realise that signs are not unanalysable wholes. He analysed signs into smaller, meaningless parts that he called “cheremes”, which many linguists today call phonemes. He created three main phoneme categories: handshape, location and movement. Sign languages, like spoken languages, have roots in the distant past. They developed in parallel with spoken languages and have undergone many historical changes. Therefore, today they do not represent a replacement of spoken language, but are languages themselves in the real sense of the word. Although the structure of the English language used in the USA and in Great Britain is the same, their sign languages, ASL and BSL, are different.

  8. Sign Language Recognition using Neural Networks

    Directory of Open Access Journals (Sweden)

    Sabaheta Djogic

    2014-11-01

    Full Text Available Sign language plays a great role as a communication medium for people with hearing difficulties. In developed countries, systems have been built to overcome problems in communicating with deaf people, and this encouraged us to develop such a system for Bosnian Sign Language, for which there is a clear need. The work uses digital image processing methods to build a system that trains a multilayer neural network with a backpropagation algorithm. Images are processed by feature extraction methods, and the data set was created by a masking method. Training uses cross-validation for better performance; an accuracy of 84% is achieved.
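The pipeline this abstract describes (feature vectors extracted from images, then a multilayer network trained by backpropagation) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the network size, learning rate, and the synthetic stand-in "feature vectors" are all assumptions.

```python
import numpy as np

# Minimal sketch: a one-hidden-layer network trained by batch
# backpropagation (mean-squared error) to classify feature vectors.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, hidden=16, lr=0.5, epochs=2000):
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, y.shape[1]))
    n = len(X)
    for _ in range(epochs):
        h = sigmoid(X @ W1)                    # hidden activations
        out = sigmoid(h @ W2)                  # network output
        d_out = (out - y) * out * (1.0 - out)  # output-layer delta
        d_h = (d_out @ W2.T) * h * (1.0 - h)   # backpropagated delta
        W2 -= lr * (h.T @ d_out) / n           # gradient steps
        W1 -= lr * (X.T @ d_h) / n
    return W1, W2

def predict(X, W1, W2):
    return sigmoid(sigmoid(X @ W1) @ W2).argmax(axis=1)

# Synthetic two-class "feature vectors" standing in for processed images.
X = np.vstack([rng.normal(0.2, 0.05, (40, 8)),
               rng.normal(0.8, 0.05, (40, 8))])
y = np.zeros((80, 2))
y[:40, 0] = 1.0
y[40:, 1] = 1.0

W1, W2 = train(X, y)
acc = float((predict(X, W1, W2) == y.argmax(axis=1)).mean())
print(f"training accuracy: {acc:.2f}")
```

Cross-validation, as used in the paper, would repeat this training over held-out folds of the data rather than scoring on the training set as the toy example does.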

  9. South African sign language assistive translation

    CSIR Research Space (South Africa)

    Olivrin, GJ

    2008-04-01

    Full Text Available ... the fact that the target structure is SASL, the home language of the Deaf user, already facilitates the communication. Ultimately the message will be delivered more naturally by a signing avatar [14]. We shall present further scenarios for future work. Disambiguation can be improved on two levels: firstly, by eliciting more or better information from the user through the AAC interface and, secondly, by improving certain aspects of the MT system. We discuss both...

  10. The emergence of temporal language in Nicaraguan Sign Language.

    Science.gov (United States)

    Kocab, Annemarie; Senghas, Ann; Snedeker, Jesse

    2016-11-01

    Understanding what uniquely human properties account for the creation and transmission of language has been a central goal of cognitive science. Recently, the study of emerging sign languages, such as Nicaraguan Sign Language (NSL), has offered the opportunity to better understand how languages are created and the roles of the individual learner and the community of users. Here, we examined the emergence of two types of temporal language in NSL, comparing the linguistic devices for conveying temporal information among three sequential age cohorts of signers. Experiment 1 showed that while all three cohorts of signers could communicate about linearly ordered discrete events, only the second and third generations of signers successfully communicated information about events with more complex temporal structure. Experiment 2 showed that signers could discriminate between the types of temporal events in a nonverbal task. Finally, Experiment 3 investigated the ordinal use of numbers (e.g., first, second) in NSL signers, indicating that one strategy younger signers might have for accurately describing events in time might be to use ordinal numbers to mark each event. While the capacity for representing temporal concepts appears to be present in the human mind from the onset of language creation, the linguistic devices to convey temporality do not appear immediately. Evidently, temporal language emerges over generations of language transmission, as a product of individual minds interacting within a community of users. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Phonological reduplication in sign language: rules rule

    Directory of Open Access Journals (Sweden)

    Iris Berent

    2014-06-01

    Full Text Available Productivity—the hallmark of linguistic competence—is typically attributed to algebraic rules that support broad generalizations. Past research on spoken language has documented such generalizations in both adults and infants. But whether algebraic rules form part of the linguistic competence of signers remains unknown. To address this question, here we gauge the generalization afforded by American Sign Language (ASL). As a case study, we examine reduplication (X→XX), a rule that, inter alia, generates ASL nouns from verbs. If signers encode this rule, then they should freely extend it to novel syllables, including ones with features that are unattested in ASL. And since reduplicated disyllables are preferred in ASL, such a rule should favor novel reduplicated signs. Novel reduplicated signs should thus be preferred to nonreduplicative controls (in rating), and consequently, such stimuli should also be harder to classify as nonsigns (in the lexical decision task). The results of four experiments support this prediction. These findings suggest that the phonological knowledge of signers includes powerful algebraic rules. The convergence between these conclusions and previous evidence for phonological rules in spoken language suggests that the architecture of the phonological mind is partly amodal.
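The algebraic character of a rule like reduplication (X→XX) can be shown with a toy function: because the rule copies a variable rather than matching stored items, it extends to any novel form. The strings here are hypothetical placeholders, not real ASL transcriptions.

```python
# Toy illustration of an algebraic rule: reduplication (X -> XX)
# operates over a variable X, so it generalizes to unseen inputs.

def reduplicate(syllable):
    # X -> XX: copy the whole input, whatever it is
    return (syllable, syllable)

# The rule applies freely to forms never encountered before.
for novel in ["syll-A", "syll-B", "never-seen-handshape"]:
    print(reduplicate(novel))
```

An item-based (memorized) lexicon would have no output for these novel forms; the free applicability of the function is exactly the kind of generalization the experiments test.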

  12. Sign Language with Babies: What Difference Does It Make?

    Science.gov (United States)

    Barnes, Susan Kubic

    2010-01-01

    Teaching sign language--to deaf or other children with special needs or to hearing children with hard-of-hearing family members--is not new. Teaching sign language to typically developing children has become increasingly popular since the publication of "Baby Signs"[R] (Goodwyn & Acredolo, 1996), now in its third edition. Attention to signing with…

  13. Segmentation of British Sign Language (BSL): Mind the gap!

    NARCIS (Netherlands)

    Orfanidou, E.; McQueen, J.M.; Adam, R.; Morgan, G.

    2015-01-01

    This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous

  14. Adapting tests of sign language assessment for other sign languages--a review of linguistic, cultural, and psychometric problems.

    Science.gov (United States)

    Haug, Tobias; Mann, Wolfgang

    2008-01-01

    Given the current lack of appropriate assessment tools for measuring deaf children's sign language skills, many test developers have used existing tests of other sign languages as templates to measure the sign language used by deaf people in their country. This article discusses factors that may influence the adaptation of assessment tests from one natural sign language to another. Two tests which have been adapted for several other sign languages are focused upon: the Test for American Sign Language and the British Sign Language Receptive Skills Test. A brief description is given of each test as well as insights from ongoing adaptations of these tests for other sign languages. The problems reported in these adaptations were found to be grounded in linguistic and cultural differences, which need to be considered for future test adaptations. Other reported shortcomings of test adaptation are related to the question of how well psychometric measures transfer from one instrument to another.

  15. Sign Language Echolalia in Deaf Children with Autism Spectrum Disorder

    Science.gov (United States)

    Shield, Aaron; Cooley, Frances; Meier, Richard P.

    2017-01-01

    Purpose: We present the first study of echolalia in deaf, signing children with autism spectrum disorder (ASD). We investigate the nature and prevalence of sign echolalia in native-signing children with ASD, the relationship between sign echolalia and receptive language, and potential modality differences between sign and speech. Method: Seventeen…

  16. Opposite cerebral dominance for reading and sign language

    OpenAIRE

    Komakula, Sirisha. T.; Burr, Robert. B.; Lee, James N.; Anderson, Jeffrey

    2010-01-01

    We present a case of right-hemispheric dominance for sign language but left-hemispheric dominance for reading in a left-handed deaf patient with epilepsy and left mesial temporal sclerosis. Atypical language laterality for ASL was determined by preoperative fMRI and was congruent with ASL-modified Wada testing. We conclude that reading and sign language can have crossed dominance, and preoperative fMRI evaluation of deaf patients should include both reading and sign language tasks.

  17. Sociolinguistic Variation and Change in British Sign Language Number Signs: Evidence of Leveling?

    Science.gov (United States)

    Stamp, Rose; Schembri, Adam; Fenlon, Jordan; Rentelis, Ramas

    2015-01-01

    This article presents findings from the first major study to investigate lexical variation and change in British Sign Language (BSL) number signs. As part of the BSL Corpus Project, number sign variants were elicited from 249 deaf signers from eight sites throughout the UK. Age, school location, and language background were found to be significant…

  18. The Mechanics of Fingerspelling: Analyzing Ethiopian Sign Language

    Science.gov (United States)

    Duarte, Kyle

    2010-01-01

    Ethiopian Sign Language utilizes a fingerspelling system that represents Amharic orthography. Just as each character of the Amharic abugida encodes a consonant-vowel sound pair, each sign in the Ethiopian Sign Language fingerspelling system uses handshape to encode a base consonant, as well as a combination of timing, placement, and orientation to…

  19. New Perspectives on the History of American Sign Language

    Science.gov (United States)

    Shaw, Emily; Delaporte, Yves

    2011-01-01

    Examinations of the etymology of American Sign Language have typically involved superficial analyses of signs as they exist over a short period of time. While it is widely known that ASL is related to French Sign Language, there has yet to be a comprehensive study of this historic relationship between their lexicons. This article presents…

  20. Validity of the American Sign Language Discrimination Test

    Science.gov (United States)

    Bochner, Joseph H.; Samar, Vincent J.; Hauser, Peter C.; Garrison, Wayne M.; Searls, J. Matt; Sanders, Cynthia A.

    2016-01-01

    American Sign Language (ASL) is one of the most commonly taught languages in North America. Yet, few assessment instruments for ASL proficiency have been developed, none of which have adequately demonstrated validity. We propose that the American Sign Language Discrimination Test (ASL-DT), a recently developed measure of learners' ability to…

  1. Regional Sign Language Varieties in Contact: Investigating Patterns of Accommodation

    Science.gov (United States)

    Stamp, Rose; Schembri, Adam; Evans, Bronwen G.; Cormier, Kearsy

    2016-01-01

    Short-term linguistic accommodation has been observed in a number of spoken language studies. The first of its kind in sign language research, this study aims to investigate the effects of regional varieties in contact and lexical accommodation in British Sign Language (BSL). Twenty-five participants were recruited from Belfast, Glasgow,…

  2. Linguistic Policies, Linguistic Planning, and Brazilian Sign Language in Brazil

    Science.gov (United States)

    de Quadros, Ronice Muller

    2012-01-01

    This article explains the consolidation of Brazilian Sign Language in Brazil through a linguistic plan that arose from the Brazilian Sign Language Federal Law 10.436 of April 2002 and the subsequent Federal Decree 5695 of December 2005. Two concrete facts that emerged from this existing language plan are discussed: the implementation of bilingual…

  3. Equity in Education: Signed Language and the Courts

    Science.gov (United States)

    Snoddon, Kristin

    2009-01-01

    This article examines several legal cases in Canada, the USA, and Australia involving signed language in education for Deaf students. In all three contexts, signed language rights for Deaf students have been viewed from within a disability legislation framework that either does not extend to recognizing language rights in education or that…

  4. Topics and topic prominence in two sign languages

    NARCIS (Netherlands)

    Kimmelman, V.

    2015-01-01

    In this paper we describe topic marking in Russian Sign Language (RSL) and Sign Language of the Netherlands (NGT) and discuss whether these languages should be considered topic prominent. The formal markers of topics in RSL are sentence-initial position, a prosodic break following the topic, and

  5. Sentence Repetition in Deaf Children with Specific Language Impairment in British Sign Language

    Science.gov (United States)

    Marshall, Chloë; Mason, Kathryn; Rowley, Katherine; Herman, Rosalind; Atkinson, Joanna; Woll, Bencie; Morgan, Gary

    2015-01-01

    Children with specific language impairment (SLI) perform poorly on sentence repetition tasks across different spoken languages, but until now, this methodology has not been investigated in children who have SLI in a signed language. Users of a natural sign language encode different sentence meanings through their choice of signs and by altering…

  6. Linearization of weak hand holds in Russian Sign Language

    NARCIS (Netherlands)

    Kimmelman, V.

    2017-01-01

    Russian Sign Language (RSL) makes use of constructions involving manual simultaneity, in particular, weak hand holds, where one hand is being held in the location and configuration of a sign, while the other simultaneously produces one sign or a sequence of several signs. In this paper, I argue that

  7. The Influence of Deaf People's Dual Category Status on Sign Language Planning: The British Sign Language (Scotland) Act (2015)

    Science.gov (United States)

    De Meulder, Maartje

    2017-01-01

    Through the British Sign Language (Scotland) Act, British Sign Language (BSL) was given legal status in Scotland. The main motives for the Act were a desire to put BSL on a similar footing with Gaelic and the fact that in Scotland, BSL signers are the only group whose first language is not English who must rely on disability discrimination…

  8. Information and Signs: The Language of Images

    Directory of Open Access Journals (Sweden)

    Inna Semetsky

    2010-03-01

    Full Text Available Since time immemorial, philosophers and scientists have searched for a “machine code” of the so-called Mentalese language capable of processing information at the pre-verbal, pre-expressive level. In this paper I suggest that human languages are only secondary to a system of primitive extra-linguistic signs which are hardwired in humans and serve as tools for understanding selves and others and for creating meanings for the multiplicity of experiences. The combinatorial semantics of the Mentalese may find its unorthodox expression in the semiotic system of Tarot images, the latter serving as the “keys” to the encoded proto-mental information. The paper uses some works in systems theory by Erich Jantsch and Ervin Laszlo and relates Tarot images to the archetypes of the field of the collective unconscious posited by Carl Jung. Our subconscious beliefs, hopes, fears and desires, of which we may be unaware at the subjective level, do have an objective compositional structure that may be laid out before our eyes in the format of pictorial semiotics representing the universe of affects, thoughts, and actions. Constructing imaginative narratives based on the expressive “language” of Tarot images enables us to anticipate possible consequences and consider a range of future options. The thesis advanced in this paper is also supported by the concept of the informational universe of contemporary cosmology.

  9. Dictionary of the Slovenian Sign Language on the WWW

    OpenAIRE

    Cempre, Luka; Bešir, Aleksander; Solina, Franc

    2013-01-01

The article describes technical and user-interface issues of transferring the contents and functionality of the CD-ROM version of the Slovenian sign language dictionary to the web. The dictionary of Slovenian sign language consists of video clips showing the demonstration of signs that deaf people use for communication, text descriptions of the words corresponding to the signs and pictures illustrating the same word/sign. A new technical solution—a video sprite—for concatenating subsections o...
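The "video sprite" idea described above parallels CSS image sprites: many short sign clips are concatenated into a single video file, and a metadata table maps each sign to its time offsets so a player can seek within one file instead of downloading a separate clip per sign. The sketch below is illustrative only; all names and timings are invented, not taken from the dictionary's actual implementation.

```python
# Hypothetical sketch of a video-sprite index: clips are concatenated
# in order, and each sign maps to (start, duration) within the sprite.

def build_sprite_index(clips):
    """Given (sign_id, duration_s) pairs in concatenation order,
    return {sign_id: (start_s, duration_s)}."""
    index, offset = {}, 0.0
    for sign_id, duration in clips:
        index[sign_id] = (offset, duration)
        offset += duration
    return index

def clip_bounds(index, sign_id):
    """Return (start_s, end_s) for one sign within the sprite video."""
    start, duration = index[sign_id]
    return start, start + duration

# Invented example: three signs packed into one sprite video.
index = build_sprite_index([("HOUSE", 2.0), ("MOTHER", 1.5), ("SCHOOL", 2.5)])
print(clip_bounds(index, "MOTHER"))  # (2.0, 3.5)
```

A web player would then seek to `start_s` (e.g., by setting the video element's current time) and stop at `end_s`, which avoids one HTTP request per sign.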

  10. Discourses of prejudice in the professions: the case of sign languages.

    Science.gov (United States)

    Humphries, Tom; Kushalnagar, Poorna; Mathur, Gaurav; Napoli, Donna Jo; Padden, Carol; Rathmann, Christian; Smith, Scott

    2017-09-01

    There is no evidence that learning a natural human language is cognitively harmful to children. To the contrary, multilingualism has been argued to be beneficial to all. Nevertheless, many professionals advise the parents of deaf children that their children should not learn a sign language during their early years, despite strong evidence across many research disciplines that sign languages are natural human languages. Their recommendations are based on a combination of misperceptions about (1) the difficulty of learning a sign language, (2) the effects of bilingualism, and particularly bimodalism, (3) the bona fide status of languages that lack a written form, (4) the effects of a sign language on acquiring literacy, (5) the ability of technologies to address the needs of deaf children and (6) the effects that use of a sign language will have on family cohesion. We expose these misperceptions as based in prejudice and urge institutions involved in educating professionals concerned with the healthcare, raising and educating of deaf children to include appropriate information about first language acquisition and the importance of a sign language for deaf children. We further urge such professionals to advise the parents of deaf children properly, which means to strongly advise the introduction of a sign language as soon as hearing loss is detected. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  11. An Interpreter's Interpretation: Sign Language Interpreters' View of Musculoskeletal Disorders

    National Research Council Canada - National Science Library

    Johnson, William L

    2003-01-01

    Sign language interpreters are at increased risk for musculoskeletal disorders. This study used content analysis to obtain detailed information about these disorders from the interpreters' point of view...

  12. Sign Language and Spoken Language for Children With Hearing Loss: A Systematic Review.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung

    2016-01-01

    Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.

  13. Evaluating Effects of Language Recognition on Language Rights and the Vitality of New Zealand Sign Language

    Science.gov (United States)

    McKee, Rachel Locker; Manning, Victoria

    2015-01-01

    Status planning through legislation made New Zealand Sign Language (NZSL) an official language in 2006. But this strong symbolic action did not create resources or mechanisms to further the aims of the act. In this article we discuss the extent to which legal recognition and ensuing language-planning activities by state and community have affected…

  14. On the Conventionalization of Mouth Actions in Australian Sign Language.

    Science.gov (United States)

    Johnston, Trevor; van Roekel, Jane; Schembri, Adam

    2016-03-01

This study investigates the conventionalization of mouth actions in Australian Sign Language. Signed languages were once thought of as simply manual languages because the hands produce the signs which individually and in groups are the symbolic units most easily equated with the words, phrases and clauses of spoken languages. However, it has long been acknowledged that non-manual activity, such as movements of the body, head and face, plays a very important role. In this context, mouth actions that occur while communicating in signed languages have posed a number of questions for linguists: are the silent mouthings of spoken language words simply borrowings from the respective majority community spoken language(s)? Are those mouth actions that are not silent mouthings of spoken words conventionalized linguistic units proper to each signed language, culturally linked semi-conventional gestural units shared by signers with members of the majority speaking community, or even gestures and expressions common to all humans? We use a corpus-based approach to gather evidence of the extent of the use of mouth actions in naturalistic Australian Sign Language, making comparisons with other signed languages where data are available, and of the form/meaning pairings that these mouth actions instantiate.

  15. Input Processing at First Exposure to a Sign Language

    Science.gov (United States)

    Ortega, Gerardo; Morgan, Gary

    2015-01-01

    There is growing interest in learners' cognitive capacities to process a second language (L2) at first exposure to the target language. Evidence suggests that L2 learners are capable of processing novel words by exploiting phonological information from their first language (L1). Hearing adult learners of a sign language, however, cannot fall back…

  16. Constraints on Negative Prefixation in Polish Sign Language.

    Science.gov (United States)

    Tomaszewski, Piotr

    2015-01-01

    The aim of this article is to describe a negative prefix, NEG-, in Polish Sign Language (PJM) which appears to be indigenous to the language. This is of interest given the relative rarity of prefixes in sign languages. Prefixed PJM signs were analyzed on the basis of both a corpus of texts signed by 15 deaf PJM users who are either native or near-native signers, and material including a specified range of prefixed signs as demonstrated by native signers in dictionary form (i.e. signs produced in isolation, not as part of phrases or sentences). In order to define the morphological rules behind prefixation on both the phonological and morphological levels, native PJM users were consulted for their expertise. The research results can enrich models for describing processes of grammaticalization in the context of the visual-gestural modality that forms the basis for sign language structure.

  17. Early Sign Language Exposure and Cochlear Implantation Benefits.

    Science.gov (United States)

    Geers, Ann E; Mitchell, Christine M; Warner-Czyz, Andrea; Wang, Nae-Yuh; Eisenberg, Laurie S

    2017-07-01

    Most children with hearing loss who receive cochlear implants (CI) learn spoken language, and parents must choose early on whether to use sign language to accompany speech at home. We address whether parents' use of sign language before and after CI positively influences auditory-only speech recognition, speech intelligibility, spoken language, and reading outcomes. Three groups of children with CIs from a nationwide database who differed in the duration of early sign language exposure provided in their homes were compared in their progress through elementary grades. The groups did not differ in demographic, auditory, or linguistic characteristics before implantation. Children without early sign language exposure achieved better speech recognition skills over the first 3 years postimplant and exhibited a statistically significant advantage in spoken language and reading near the end of elementary grades over children exposed to sign language. Over 70% of children without sign language exposure achieved age-appropriate spoken language compared with only 39% of those exposed for 3 or more years. Early speech perception predicted speech intelligibility in middle elementary grades. Children without sign language exposure produced speech that was more intelligible (mean = 70%) than those exposed to sign language (mean = 51%). This study provides the most compelling support yet available in CI literature for the benefits of spoken language input for promoting verbal development in children implanted by 3 years of age. Contrary to earlier published assertions, there was no advantage to parents' use of sign language either before or after CI. Copyright © 2017 by the American Academy of Pediatrics.

  18. Neural Language Processing in Adolescent First-Language Learners: Longitudinal Case Studies in American Sign Language.

    Science.gov (United States)

    Ferjan Ramirez, Naja; Leonard, Matthew K; Davenport, Tristan S; Torres, Christina; Halgren, Eric; Mayberry, Rachel I

    2016-03-01

    One key question in neurolinguistics is the extent to which the neural processing system for language requires linguistic experience during early life to develop fully. We conducted a longitudinal anatomically constrained magnetoencephalography (aMEG) analysis of lexico-semantic processing in 2 deaf adolescents who had no sustained language input until 14 years of age, when they became fully immersed in American Sign Language. After 2 to 3 years of language, the adolescents' neural responses to signed words were highly atypical, localizing mainly to right dorsal frontoparietal regions and often responding more strongly to semantically primed words (Ferjan Ramirez N, Leonard MK, Torres C, Hatrak M, Halgren E, Mayberry RI. 2014. Neural language processing in adolescent first-language learners. Cereb Cortex. 24 (10): 2772-2783). Here, we show that after an additional 15 months of language experience, the adolescents' neural responses remained atypical in terms of polarity. While their responses to less familiar signed words still showed atypical localization patterns, the localization of responses to highly familiar signed words became more concentrated in the left perisylvian language network. Our findings suggest that the timing of language experience affects the organization of neural language processing; however, even in adolescence, language representation in the human brain continues to evolve with experience. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. The road to language learning is iconic: evidence from British Sign Language.

    Science.gov (United States)

    Thompson, Robin L; Vinson, David P; Woll, Bencie; Vigliocco, Gabriella

    2012-12-01

    An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.

  20. Brain correlates of constituent structure in sign language comprehension.

    Science.gov (United States)

    Moreno, Antonio; Limousin, Fanny; Dehaene, Stanislas; Pallier, Christophe

    2018-02-15

    During sentence processing, areas of the left superior temporal sulcus, inferior frontal gyrus and left basal ganglia exhibit a systematic increase in brain activity as a function of constituent size, suggesting their involvement in the computation of syntactic and semantic structures. Here, we asked whether these areas play a universal role in language and therefore contribute to the processing of non-spoken sign language. Congenitally deaf adults who acquired French sign language as a first language and written French as a second language were scanned while watching sequences of signs in which the size of syntactic constituents was manipulated. An effect of constituent size was found in the basal ganglia, including the head of the caudate and the putamen. A smaller effect was also detected in temporal and frontal regions previously shown to be sensitive to constituent size in written language in hearing French subjects (Pallier et al., 2011). When the deaf participants read sentences versus word lists, the same network of language areas was observed. While reading and sign language processing yielded identical effects of linguistic structure in the basal ganglia, the effect of structure was stronger in all cortical language areas for written language relative to sign language. Furthermore, cortical activity was partially modulated by age of acquisition and reading proficiency. Our results stress the important role of the basal ganglia, within the language network, in the representation of the constituent structure of language, regardless of the input modality. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Signs of Resistance: Peer Learning of Sign Languages within "Oral" Schools for the Deaf

    Science.gov (United States)

    Anglin-Jaffe, Hannah

    2013-01-01

    This article explores the role of the Deaf child as peer educator. In schools where sign languages were banned, Deaf children became the educators of their Deaf peers in a number of contexts worldwide. This paper analyses how this peer education of sign language worked in context by drawing on two examples from boarding schools for the deaf in…

  2. Poetry in South African Sign Language: What is different? | Baker ...

    African Journals Online (AJOL)

Poetry in a sign language can make use of literary devices just as poetry in a ... This poem illustrates well the multi-layered meaning that can be created in sign language poetry through ...

  3. Italian Sign Language (LIS) Poetry: Iconic Properties and Structural Regularities.

    Science.gov (United States)

    Russo, Tommaso; Giuranna, Rosaria; Pizzuto, Elena

    2001-01-01

    Explores and describes from a crosslinguistic perspective, some of the major structural irregularities that characterize poetry in Italian Sign Language and distinguish poetic from nonpoetic texts. Reviews findings of previous studies of signed language poetry, and points out issues that need to be clarified to provide a more accurate description…

  4. Recognition of sign language gestures using neural networks

    Directory of Open Access Journals (Sweden)

    Simon Vamplew

    2007-04-01

This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures.

  5. Recognition of sign language gestures using neural networks

    OpenAIRE

    Simon Vamplew

    2007-01-01

    This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures.
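The final stage of the SLARTI architecture described above, a nearest-neighbour classifier over the feature vectors produced by the recognition networks, can be sketched in a few lines. The feature values and sign labels below are invented for illustration; SLARTI's real features come from its neural-network recognisers.

```python
# Minimal 1-nearest-neighbour classifier: label a query feature vector
# with the class of the closest training example (Euclidean distance).
import math

def nearest_neighbour(query, examples):
    """examples: list of (feature_vector, label); returns the label of
    the training example closest to `query`."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(examples, key=lambda ex: dist(query, ex[0]))[1]

# Invented feature vectors standing in for network outputs per sign.
training = [
    ([0.9, 0.1, 0.0], "GOOD"),
    ([0.1, 0.8, 0.2], "THANK-YOU"),
    ([0.0, 0.2, 0.9], "PLEASE"),
]
print(nearest_neighbour([0.2, 0.7, 0.3], training))  # THANK-YOU
```

The appeal of this design is that the networks only need to learn robust feature detectors, while adding a new sign requires only adding training exemplars to the classifier, not retraining the networks.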

  6. The Birth and Rebirth of "Sign Language Studies"

    Science.gov (United States)

    Armstrong, David F.

    2012-01-01

    As most readers of this journal are aware, "Sign Language Studies" ("SLS") served for many years as effectively the only serious scholarly outlet for work in the nascent field of sign language linguistics. Now reaching its 40th anniversary, the journal was founded by William C. Stokoe and then edited by him for the first quarter century of its…

  7. Question-Answer Pairs in Sign Language of the Netherlands

    NARCIS (Netherlands)

    Kimmelman, V.; Vink, L.

    2017-01-01

    Several sign languages of the world utilize a construction that consists of a question followed by an answer, both of which are produced by the same signer. For American Sign Language, this construction has been analyzed as a discourse-level rhetorical question construction (Hoza et al. 1997), as a

  8. Introduction: Sign Language, Sustainable Development, and Equal Opportunities

    Science.gov (United States)

    De Clerck, Goedele A. M.

    2017-01-01

    This article has been excerpted from "Introduction: Sign Language, Sustainable Development, and Equal Opportunities" (De Clerck) in "Sign Language, Sustainable Development, and Equal Opportunities: Envisioning the Future for Deaf Students" (G. A. M. De Clerck & P. V. Paul (Eds.) 2016). The idea of exploring various…

  9. Sign Language and Language Acquisition in Man and Ape. New Dimensions in Comparative Pedolinguistics.

    Science.gov (United States)

    Peng, Fred C. C., Ed.

    A collection of research materials on sign language and primatology is presented here. The essays attempt to show that: sign language is a legitimate language that can be learned not only by humans but by nonhuman primates as well, and nonhuman primates have the capability to acquire a human language using a different mode. The following…

  10. "Hearing" the signs: influence of sign language in an inclusive classroom

    OpenAIRE

    Monney, M. (Mariette)

    2017-01-01

Finding new methods to achieve the goals of Education For All is a constant worry for primary school teachers. Multisensory methods have been proved to be efficient in the past decades. Sign Language, being a visual and kinesthetic language, could become a future educational tool to fulfill the needs of a growing diversity of learners. This ethnographic study describes how Sign Language exposure in inclusive classr...

  11. Examination of Sign Language Education According to the Opinions of Members from a Basic Sign Language Certification Program

    Science.gov (United States)

    Akmese, Pelin Pistav

    2016-01-01

    Being hearing impaired limits one's ability to communicate in that it affects all areas of development, particularly speech. One of the methods the hearing impaired use to communicate is sign language. This study, a descriptive study, intends to examine the opinions of individuals who had enrolled in a sign language certification program by using…

  12. The Phonetics of Head and Body Movement in the Realization of American Sign Language Signs.

    Science.gov (United States)

    Tyrone, Martha E; Mauk, Claude E

    2016-01-01

    Because the primary articulators for sign languages are the hands, sign phonology and phonetics have focused mainly on them and treated other articulators as passive targets. However, there is abundant research on the role of nonmanual articulators in sign language grammar and prosody. The current study examines how hand and head/body movements are coordinated to realize phonetic targets. Kinematic data were collected from 5 deaf American Sign Language (ASL) signers to allow the analysis of movements of the hands, head and body during signing. In particular, we examine how the chin, forehead and torso move during the production of ASL signs at those three phonological locations. Our findings suggest that for signs with a lexical movement toward the head, the forehead and chin move to facilitate convergence with the hand. By comparison, the torso does not move to facilitate convergence with the hand for signs located at the torso. These results imply that the nonmanual articulators serve a phonetic as well as a grammatical or prosodic role in sign languages. Future models of sign phonetics and phonology should take into consideration the movements of the nonmanual articulators in the realization of signs. © 2016 S. Karger AG, Basel.

  13. Impacts of Visual Sonority and Handshape Markedness on Second Language Learning of American Sign Language

    Science.gov (United States)

    Williams, Joshua T.; Newman, Sharlene D.

    2016-01-01

    The roles of visual sonority and handshape markedness in sign language acquisition and production were investigated. In Experiment 1, learners were taught sign-nonobject correspondences that varied in sign movement sonority and handshape markedness. Results from a sign-picture matching task revealed that high sonority signs were more accurately…

  14. Visual Sonority Modulates Infants' Attraction to Sign Language

    Science.gov (United States)

    Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain

    2018-01-01

    The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…

  15. Observations on Word Order in Saudi Arabian Sign Language

    Science.gov (United States)

    Sprenger, Kristen; Mathur, Gaurav

    2012-01-01

    This article focuses on the syntactic level of the grammar of Saudi Arabian Sign Language by exploring some word orders that occur in personal narratives in the language. Word order is one of the main ways in which languages indicate the main syntactic roles of subjects, verbs, and objects; others are verbal agreement and nominal case morphology.…

  16. Australian Aboriginal Deaf People and Aboriginal Sign Language

    Science.gov (United States)

    Power, Des

    2013-01-01

    Many Australian Aboriginal people use a sign language ("hand talk") that mirrors their local spoken language and is used both in culturally appropriate settings when speech is taboo or counterindicated and for community communication. The characteristics of these languages are described, and early European settlers' reports of deaf…

  17. Technologies for Language Assessment.

    Science.gov (United States)

    Burstein, Jill; And Others

    1996-01-01

    Reviews current and developing technology uses that are relevant to language assessment and discusses examples of recent linguistic applications from the laboratory at the Educational Testing Service. The processes of language test development are described and the functions they serve from the perspective of a large testing organization are…

  18. The role of syllables in sign language production.

    Science.gov (United States)

    Baus, Cristina; Gutiérrez, Eva; Carreiras, Manuel

    2014-01-01

The aim of the present study was to investigate the functional role of syllables in sign language and how the different phonological combinations influence sign production. Moreover, the influence of age of acquisition was evaluated. Deaf signers (native and non-native) of Catalan Sign Language (LSC) were asked in a picture-sign interference task to sign picture names while ignoring distractor-signs with which they shared two phonological parameters (out of three of the main sign parameters: Location, Movement, and Handshape). The results revealed a different impact of the three phonological combinations. While no effect was observed for the phonological combination Handshape-Location, the combination Handshape-Movement slowed down signing latencies, but only in the non-native group. A facilitatory effect was observed for both groups when pictures and distractors shared Location-Movement. Importantly, linguistic models have considered this phonological combination to be a privileged unit in the composition of signs, as syllables are in spoken languages. Thus, our results support the functional role of syllable units during phonological articulation in sign language production.

  19. About using serious games to teach (Portuguese) sign language

    OpenAIRE

    Gameiro, João Manuel Ferreira

    2014-01-01

Sign language is the form of communication used by Deaf people, in most cases learned since childhood. The problem arises when a non-Deaf person tries to communicate with a Deaf person, for example, when non-Deaf parents try to communicate with their Deaf child. In most cases, this situation arises because the parents have not had time to properly learn sign language. This dissertation proposes the teaching of sign language through the usage of serious games. Currently, similar soluti...

  20. Legal Pathways to the Recognition of Sign Languages: A Comparison of the Catalan and Spanish Sign Language Acts

    Science.gov (United States)

    Quer, Josep

    2012-01-01

    Despite being minority languages like many others, sign languages have traditionally remained absent from the agendas of policy makers and language planning and policies. In the past two decades, though, this situation has started to change at different paces and to different degrees in several countries. In this article, the author describes the…

  1. The translation of biblical texts into South African Sign Language ...

    African Journals Online (AJOL)

The translation of biblical texts into South African Sign Language. ... Native signers were used as translators with the assistance of hearing specialists in the fields of religion and translation studies. ...

  2. Making an Online Dictionary of New Zealand Sign Language ...

    African Journals Online (AJOL)

... is an example of a contemporary sign language dictionary that leverages the 21st ... informed development of this bilingual, bi-directional, multimedia dictionary. ... and dealing with sociolinguistic variation in the selection and performance of ...

  3. Basic Color Terms in Estonian Sign Language

    Science.gov (United States)

    Hollman, Liivi; Sutrop, Urmas

    2011-01-01

    The article is written in the tradition of Brent Berlin and Paul Kay's theory of basic color terms. According to this theory there is a universal inventory of eleven basic color categories from which the basic color terms of any given language are always drawn. The number of basic color terms varies from 2 to 11 and in a language having a fully…

  4. Segmentation of British Sign Language (BSL): mind the gap!

    Science.gov (United States)

    Orfanidou, Eleni; McQueen, James M; Adam, Robert; Morgan, Gary

    2015-01-01

    This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were preceded by nonsense signs which were produced in either the target location or another location (with a small or large transition). Half of the transitions were within the same major body area (e.g., head) and half were across body areas (e.g., chest to hand). Deaf adult BSL users (a group of natives and early learners, and a group of late learners) spotted target signs best when there was a minimal transition and worst when there was a large transition. When location changes were present, both groups performed better when transitions were to a different body area than when they were within the same area. These findings suggest that transitions do not provide explicit sign-boundary cues in a modality-specific fashion. Instead, we argue that smaller transitions help recognition in a modality-general way by limiting lexical search to signs within location neighbourhoods, and that transitions across body areas also aid segmentation in a modality-general way, by providing a phonotactic cue to a sign boundary. We propose that sign segmentation is based on modality-general procedures which are core language-processing mechanisms.
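The segmentation account above, that transitions toward a location restrict lexical search to signs within that location "neighbourhood", can be illustrated with a toy lexicon lookup. The signs and locations below are invented for illustration and are not drawn from the study's BSL materials.

```python
# Hedged sketch: restrict the candidate set for sign recognition to
# entries whose phonological location matches the observed transition
# target, before any finer (handshape/movement) matching is attempted.

LEXICON = {  # invented sign -> major body-area location
    "THINK": "head", "NAME": "head", "KNOW": "head",
    "LIKE": "chest", "FEEL": "chest",
    "WRITE": "hand",
}

def candidates_by_location(lexicon, location):
    """Return the signs made at `location`, sorted for stable output."""
    return sorted(sign for sign, loc in lexicon.items() if loc == location)

print(candidates_by_location(LEXICON, "chest"))  # ['FEEL', 'LIKE']
```

On this view, a transition across body areas is informative precisely because it switches the active neighbourhood, which is consistent with the paper's finding that cross-area transitions aided sign spotting.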

  5. Lexical access in sign language: a computational model.

    Science.gov (United States)

    Caselli, Naomi K; Cohen-Goldberg, Ariel M

    2014-01-01

Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.

  6. Lexical access in sign language: A computational model

    Directory of Open Access Journals (Sweden)

    Naomi Kenney Caselli

    2014-05-01

    Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.

  7. Indonesian Sign Language Number Recognition using SIFT Algorithm

    Science.gov (United States)

    Mahfudi, Isa; Sarosa, Moechammad; Andrie Asmara, Rosa; Azrino Gustalika, M.

    2018-04-01

    Indonesian Sign Language (ISL) is the primary language that deaf individuals in Indonesia use to communicate; it consists of two types of action: signs and fingerspelling. However, not everyone understands sign language, which makes it difficult for deaf people to communicate with hearing people and contributes to their feeling isolated from social life. A solution is needed that helps them interact with hearing people. Much research has offered a variety of image-processing methods for sign language recognition. The SIFT (Scale-Invariant Feature Transform) algorithm is one method that can be used to identify an object, and it is claimed to be highly robust to scaling, rotation, illumination, and noise. Using the SIFT algorithm for Indonesian Sign Language number recognition yielded a recognition rate of 82% on a dataset of 100 sample images (50 for training and 50 for testing). Changing the threshold value affects the recognition result; the best threshold value was 0.45, with a recognition rate of 94%.
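
    The matching stage that a SIFT-based recognizer depends on can be sketched with Lowe's ratio test, here in pure Python on toy two-dimensional descriptors (real SIFT descriptors are 128-dimensional and would be extracted with a library such as OpenCV; the sample data, and reading the 0.45 threshold as a ratio-test threshold, are assumptions for illustration):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_match(query, train, threshold=0.45):
    """Return the index of the matched training descriptor, or None.
    A match is kept only when the nearest neighbour is much closer
    than the second nearest (Lowe's ratio test)."""
    order = sorted(range(len(train)), key=lambda i: euclidean(query, train[i]))
    d1 = euclidean(query, train[order[0]])
    d2 = euclidean(query, train[order[1]])
    if d2 == 0 or d1 / d2 < threshold:
        return order[0]
    return None  # ambiguous match, discarded

train = [(0.0, 0.0), (10.0, 10.0), (10.0, 0.0)]  # toy "training" descriptors
match_clear = ratio_test_match((0.1, 0.0), train)      # distinct nearest neighbour
match_ambiguous = ratio_test_match((5.0, 5.0), train)  # equidistant, rejected
```

    Raising the threshold keeps more, but noisier, matches; that trade-off is why recognition accuracy changes with the threshold value.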

  8. The Use of Sign Language Pronouns by Native-Signing Children with Autism

    Science.gov (United States)

    Shield, Aaron; Meier, Richard P.; Tager-Flusberg, Helen

    2015-01-01

    We report the first study on pronoun use by an under-studied research population, children with autism spectrum disorder (ASD) exposed to American Sign Language from birth by their deaf parents. Personal pronouns cause difficulties for hearing children with ASD, who sometimes reverse or avoid them. Unlike speech pronouns, sign pronouns are…

  9. Selected Lexical Patterns in Saudi Arabian Sign Language

    Science.gov (United States)

    Young, Lesa; Palmer, Jeffrey Levi; Reynolds, Wanette

    2012-01-01

    This combined paper will focus on the description of two selected lexical patterns in Saudi Arabian Sign Language (SASL): metaphor and metonymy in emotion-related signs (Young) and lexicalization patterns of objects and their derivational roots (Palmer and Reynolds). The over-arching methodology used by both studies is detailed in Stephen and…

  10. An Intelligent Computer-Based System for Sign Language Tutoring

    Science.gov (United States)

    Ritchings, Tim; Khadragi, Ahmed; Saeb, Magdy

    2012-01-01

    A computer-based system for sign language tutoring has been developed using a low-cost data glove and a software application that processes the movement signals for signs in real-time and uses Pattern Matching techniques to decide if a trainee has closely replicated a teacher's recorded movements. The data glove provides 17 movement signals from…

  11. Poetry in South African Sign Language: What is different?

    African Journals Online (AJOL)

    Mary Theresa Biberauer

    The study of literary expression in sign languages has increased over the last twenty .... extensively to express emotion on the part of a character in the narrative. ... township in her non-manual facial expressions while signing manually what is ...

  12. The morphosyntax of verbs of motion in serial constructions: a crosslinguistic study in three signed languages

    NARCIS (Netherlands)

    Benedicto, E.; Cvejanov, S.; Quer, J.; Quer, J.F.

    2008-01-01

    This paper provides a comparative analysis of the structural properties of serial verb constructions (SVC) in three sign languages: LSA (Lengua de Señas Argentina, Argentinean Sign Language), LSC (Llengua de Signes Catalana, Catalan Sign Language) and ASL (American Sign Language). The paper presents

  13. A study of syllable codas in South African Sign Language

    African Journals Online (AJOL)

    Kate H

    A South African Sign Language Dictionary for Families with Young Deaf Children (SLED 2006) was used with permission ... Figure 1: Syllable structure of a CVC syllable in the word “bed”. In spoken languages .... often than not, there is a societal emphasis on 'fixing' a child's deafness and attempting to teach deaf children to ...

  14. Ideologies and Attitudes toward Sign Languages: An Approximation

    Science.gov (United States)

    Krausneker, Verena

    2015-01-01

    Attitudes are complex and little research in the field of linguistics has focused on language attitudes. This article deals with attitudes toward sign languages and those who use them--attitudes that are influenced by ideological constructions. The article reviews five categories of such constructions and discusses examples in each one.

  15. Sign Language Planning in the Netherlands between 1980 and 2010

    Science.gov (United States)

    Schermer, Trude

    2012-01-01

    This article discusses several aspects of language planning with respect to Sign Language of the Netherlands, or Nederlandse Gebarentaal (NGT). For nearly thirty years members of the Deaf community, the Dutch Deaf Council (Dovenschap) have been working together with researchers, several organizations in deaf education, and the organization of…

  16. ASL-LEX: A lexical database of American Sign Language.

    Science.gov (United States)

    Caselli, Naomi K; Sehyr, Zed Sevcikova; Cohen-Goldberg, Ariel M; Emmorey, Karen

    2017-04-01

    ASL-LEX is a lexical database that catalogues information about nearly 1,000 signs in American Sign Language (ASL). It includes the following information: subjective frequency ratings from 25-31 deaf signers, iconicity ratings from 21-37 hearing non-signers, videoclip duration, sign length (onset and offset), grammatical class, and whether the sign is initialized, a fingerspelled loan sign, or a compound. Information about English translations is available for a subset of signs (e.g., alternate translations, translation consistency). In addition, phonological properties (sign type, selected fingers, flexion, major and minor location, and movement) were coded and used to generate sub-lexical frequency and neighborhood density estimates. ASL-LEX is intended for use by researchers, educators, and students who are interested in the properties of the ASL lexicon. An interactive website where the database can be browsed and downloaded is available at http://asl-lex.org.
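
    As a rough illustration of how such estimates can be derived from phonological coding, the sketch below computes a sub-lexical frequency count and a simple neighbourhood-density count over a toy lexicon (the signs, feature names, and neighbourhood definition are invented and do not reflect ASL-LEX's actual coding scheme):

```python
from collections import Counter

# Hypothetical phonological codes for three signs.
lexicon = {
    "MOTHER": {"handshape": "5", "location": "head"},
    "FATHER": {"handshape": "5", "location": "head"},
    "CAT":    {"handshape": "F", "location": "head"},
}

# Sub-lexical frequency: how often each handshape occurs across the lexicon.
handshape_freq = Counter(code["handshape"] for code in lexicon.values())

def density(sign):
    """Neighbourhood density: count of other signs sharing every coded
    feature (looser definitions count partial overlap instead)."""
    return sum(1 for other, code in lexicon.items()
               if other != sign and code == lexicon[sign])
```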

  17. Recognition of Indian Sign Language in Live Video

    Science.gov (United States)

    Singha, Joyeeta; Das, Karen

    2013-05-01

    Sign language recognition has emerged as one of the important areas of research in computer vision. The difficulty faced by researchers is that instances of signs vary in both motion and appearance. Thus, in this paper a novel approach for recognizing various alphabets of Indian Sign Language is proposed, using continuous video sequences of the signs. The proposed system comprises three stages: preprocessing, feature extraction, and classification. The preprocessing stage includes skin filtering and histogram matching. Eigenvalues and eigenvectors were used for feature extraction, and finally eigenvalue-weighted Euclidean distance is used to recognize the sign. The system deals with bare hands, allowing the user to interact with it in a natural way. We considered 24 different alphabets in the video sequences and attained a success rate of 96.25%.
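
    The final classification stage described above can be sketched as nearest-template matching under an eigenvalue-weighted Euclidean distance (a simplified, hypothetical version; in the actual system the weights and feature templates would come from the eigen-decomposition of training images):

```python
import math

def weighted_distance(features, template, weights):
    # The weights stand in for the eigenvalues attached to each eigen-feature.
    return math.sqrt(sum(w * (f - t) ** 2
                         for w, f, t in zip(weights, features, template)))

templates = {"A": [1.0, 0.0], "B": [0.0, 1.0]}  # per-letter feature templates
eigenvalues = [2.0, 0.5]                        # hypothetical PCA weights

def classify(features):
    """Assign the letter whose template is closest under the weighted metric."""
    return min(templates,
               key=lambda s: weighted_distance(features, templates[s], eigenvalues))
```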

  18. Word order in Russian Sign Language

    NARCIS (Netherlands)

    Kimmelman, V.

    2012-01-01

    The article discusses word order, the syntactic arrangement of words in a sentence, clause, or phrase, as one of the most crucial aspects of the grammar of any spoken language. It aims to investigate the order of the primary constituents (subject, object, and verb) of a simple

  19. The effects of sign language on spoken language acquisition in children with hearing loss: a systematic review protocol.

    Science.gov (United States)

    Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David

    2013-12-06

    Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity

  20. On the System of Place Name Signs in Estonian Sign Language

    Directory of Open Access Journals (Sweden)

    Liina Paales

    2011-05-01

    A place name sign is a linguistic-cultural marker that includes both memory and landscape. The author regards toponymic signs in Estonian Sign Language as representations of images held by the Estonian Deaf community: they reflect the geographical place, the period, the relationships of the Deaf community with the hearing community, and the common and distinguishing features of the two cultures as perceived by the community's members. Name signs represent an element of signlore, which includes various types of creative linguistic play. There are stories hidden behind the place name signs that reveal their etymological origin and reflect the community's memory. The purpose of this article is twofold. Firstly, it aims to introduce Estonian place name signs as forms of Deaf signlore, analyse their structure and specify the main formation methods. Secondly, it interprets place-denoting signs in the light of the foundations of Estonian Sign Language, Estonian Deaf education and education history, the traditions of local Deaf communities, and the cultural and local traditions of the dominant hearing communities. Both perspectives, linguistic and folkloristic, are represented in the current article.

  1. Training Literacy Skills through Sign Language

    Science.gov (United States)

    Rudner, Mary; Andin, Josefine; Rönnberg, Jerker; Heimann, Mikael; Hermansson, Anders; Nelson, Keith; Tjus, Tomas

    2015-01-01

    The literacy skills of deaf children generally lag behind those of their hearing peers. The mechanisms of reading in deaf individuals are only just beginning to be unraveled but it seems that native language skills play an important role. In this study 12 deaf pupils (six in grades 1-2 and six in grades 4-6) at a Swedish state primary school for…

  2. FORMS OF HAND IN SIGN LANGUAGE IN BOSNIA AND HERZEGOVINA

    Directory of Open Access Journals (Sweden)

    Husnija Hasanbegović

    2013-05-01

    A sign in sign language, equivalent to a word, phrase, or sentence in an oral language, can be divided into linguistic units of lower levels: the shape of the hand, the place of articulation, the type of movement, and the orientation of the palm. The first description of these units that is still current and applicable in Bosnia and Herzegovina (B&H) was given by Zimmerman in 1986, who identified 27 hand shapes, while the other unit types were not systematically developed or described. The aim of this study was to determine whether other hand shapes are present in the sign language used in B&H. Through content analysis of 425 signs of the sign language used in B&H, we confirmed the existence of Zimmerman's hand shapes and also discovered and present 14 new hand shapes. This confirms the need for detailed research, standardization, and publication of the sign language used in B&H, which would provide adequate conditions for its study and application, both for the deaf and for all others who come into direct contact with them.

  3. Technology in Language Use, Language Teaching, and Language Learning

    Science.gov (United States)

    Chun, Dorothy; Smith, Bryan; Kern, Richard

    2016-01-01

    This article offers a capacious view of technology to suggest broad principles relating technology and language use, language teaching, and language learning. The first part of the article considers some of the ways that technological media influence contexts and forms of expression and communication. In the second part, a set of heuristic…

  4. Neural Basis of Action Understanding: Evidence from Sign Language Aphasia.

    Science.gov (United States)

    Rogalsky, Corianne; Raphel, Kristin; Tomkovicz, Vivian; O'Grady, Lucinda; Damasio, Hanna; Bellugi, Ursula; Hickok, Gregory

    2013-01-01

    The neural basis of action understanding is a hotly debated issue. The mirror neuron account holds that motor simulation in fronto-parietal circuits is critical to action understanding, including speech comprehension, while others emphasize the ventral stream in the temporal lobe. Evidence from speech strongly supports the ventral stream account, but evidence from manual gesture comprehension (e.g., in limb apraxia) has led to contradictory findings. Here we present a lesion analysis of sign language comprehension. Sign language is an excellent model for studying mirror system function in that it bridges the gap between the visual-manual system, in which mirror neurons are best characterized, and language systems, which have represented a theoretical target of mirror neuron research. Twenty-one lifelong deaf signers with focal cortical lesions performed two tasks: one involving the comprehension of individual signs and the other involving comprehension of signed sentences (commands). Participants' lesions, as indicated on MRI or CT scans, were mapped onto a template brain to explore the relationship between lesion location and sign comprehension measures. Single sign comprehension was not significantly affected by left hemisphere damage. Sentence sign comprehension impairments were associated with left temporal-parietal damage. We found that damage to mirror-system-related regions in the left frontal lobe was not associated with deficits on either of these comprehension tasks. We conclude that the mirror system is not critically involved in action understanding.

  5. Facilitating Exposure to Sign Languages of the World: The Case for Mobile Assisted Language Learning

    Science.gov (United States)

    Parton, Becky Sue

    2014-01-01

    Foreign sign language instruction is an important, but overlooked area of study. Thus the purpose of this paper was two-fold. First, the researcher sought to determine the level of knowledge and interest in foreign sign language among Deaf teenagers along with their learning preferences. Results from a survey indicated that over a third of the…

  6. Pointing and Reference in Sign Language and Spoken Language: Anchoring vs. Identifying

    Science.gov (United States)

    Barberà, Gemma; Zwets, Martine

    2013-01-01

    In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…

  7. Language policies and sign language translation and interpreting: connections between Brazil and Mozambique

    Directory of Open Access Journals (Sweden)

    Silvana Aguiar dos Santos

    2015-12-01

    Full text available at http://dx.doi.org/10.5007/1984-8420.2015v16n2p101. This paper is the result of an initial attempt to establish a connection between Brazil and Mozambique regarding sign language translation and interpreting. It reviews some important landmarks in language policies aimed at sign languages in these countries and discusses how certain actions directly impact political decisions related to sign language translation and interpreting. In this context, two lines of argument are developed. The first one addresses the role of sign language translation and interpreting in the Portuguese-speaking context, since Portuguese is the official language in both countries; the other offers some reflections about the Deaf movements and the movements of sign language translators and interpreters, the legal recognition of sign languages, the development of undergraduate courses and the contemporary challenges in the work of translation professionals. Finally, it is suggested that sign language translators and interpreters in both Brazil and Mozambique undertake efforts to press government bodies to invest in: (i) area-specific training for translators and interpreters, (ii) qualification of the services provided by such professionals, and (iii) development of human resources at master's and doctoral levels in order to strengthen research on sign language translation and interpreting in the Community of Portuguese-Speaking Countries.

  8. Imitation, Sign Language Skill and the Developmental Ease of Language Understanding (D-ELU) Model.

    Science.gov (United States)

    Holmer, Emil; Heimann, Mikael; Rudner, Mary

    2016-01-01

    Imitation and language processing are closely connected. According to the Ease of Language Understanding (ELU) model (Rönnberg et al., 2013) pre-existing mental representation of lexical items facilitates language understanding. Thus, imitation of manual gestures is likely to be enhanced by experience of sign language. We tested this by eliciting imitation of manual gestures from deaf and hard-of-hearing (DHH) signing and hearing non-signing children at a similar level of language and cognitive development. We predicted that the DHH signing children would be better at imitating gestures lexicalized in their own sign language (Swedish Sign Language, SSL) than unfamiliar British Sign Language (BSL) signs, and that both groups would be better at imitating lexical signs (SSL and BSL) than non-signs. We also predicted that the hearing non-signing children would perform worse than DHH signing children with all types of gestures the first time (T1) we elicited imitation, but that the performance gap between groups would be reduced when imitation was elicited a second time (T2). Finally, we predicted that imitation performance on both occasions would be associated with linguistic skills, especially in the manual modality. A split-plot repeated measures ANOVA demonstrated that DHH signers imitated manual gestures with greater precision than non-signing children when imitation was elicited the second but not the first time. Manual gestures were easier to imitate for both groups when they were lexicalized than when they were not; but there was no difference in performance between familiar and unfamiliar gestures. For both groups, language skills at T1 predicted imitation at T2. Specifically, for DHH children, word reading skills, comprehension and phonological awareness of sign language predicted imitation at T2. For the hearing participants, language comprehension predicted imitation at T2, even after the effects of working memory capacity and motor skills were taken into

  9. Imitation, sign language skill and the Developmental Ease of Language Understanding (D-ELU model

    Directory of Open Access Journals (Sweden)

    Emil eHolmer

    2016-02-01

    Imitation and language processing are closely connected. According to the Ease of Language Understanding (ELU) model (Rönnberg et al., 2013), pre-existing mental representation of lexical items facilitates language understanding. Thus, imitation of manual gestures is likely to be enhanced by experience of sign language. We tested this by eliciting imitation of manual gestures from deaf and hard-of-hearing (DHH) signing and hearing non-signing children at a similar level of language and cognitive development. We predicted that the DHH signing children would be better at imitating gestures lexicalized in their own sign language (Swedish Sign Language, SSL) than unfamiliar British Sign Language (BSL) signs, and that both groups would be better at imitating lexical signs (SSL and BSL) than non-signs. We also predicted that the hearing non-signing children would perform worse than DHH signing children with all types of gestures the first time (T1) we elicited imitation, but that the performance gap between groups would be reduced when imitation was elicited a second time (T2). Finally, we predicted that imitation performance on both occasions would be associated with linguistic skills, especially in the manual modality. A split-plot repeated measures ANOVA demonstrated that DHH signers imitated manual gestures with greater precision than non-signing children when imitation was elicited the second but not the first time. Manual gestures were easier to imitate for both groups when they were lexicalized than when they were not; but there was no difference in performance between familiar and unfamiliar gestures. For both groups, language skills at T1 predicted imitation at T2. Specifically, for DHH children, word reading skills, comprehension and phonological awareness of sign language predicted imitation at T2. For the hearing participants, language comprehension predicted imitation at T2, even after the effects of working memory capacity and motor skills

  10. Information Transfer Capacity of Articulators in American Sign Language.

    Science.gov (United States)

    Malaia, Evie; Borneman, Joshua D; Wilbur, Ronnie B

    2018-03-01

    The ability to convey information is a fundamental property of communicative signals. For sign languages, which are overtly produced with multiple, completely visible articulators, the question arises as to how the various channels co-ordinate and interact with each other. We analyze motion capture data of American Sign Language (ASL) narratives, and show that the capacity of information throughput, mathematically defined, is highest on the dominant hand (DH). We further demonstrate that information transfer capacity is also significant for the non-dominant hand (NDH), and the head channel too, as compared to control channels (ankles). We discuss both redundancy and independence in articulator motion in sign language, and argue that the NDH and the head articulators contribute to the overall information transfer capacity, indicating that they are neither completely redundant to, nor completely independent of, the DH.
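
    One way to make a channel's "information throughput" concrete is as the Shannon entropy of its binned motion signal. The sketch below is a deliberate simplification of the paper's measure, with invented sample data contrasting a varied dominant-hand trace with a near-still ankle control:

```python
import math
from collections import Counter

def entropy_bits(samples, bins=4, lo=0.0, hi=1.0):
    """Shannon entropy (bits) of a signal after uniform binning."""
    width = (hi - lo) / bins
    counts = Counter(min(int((s - lo) / width), bins - 1) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

dominant_hand = [0.05, 0.30, 0.55, 0.80, 0.10, 0.60, 0.35, 0.90]  # varied motion
ankle         = [0.02, 0.03, 0.02, 0.01, 0.02, 0.03, 0.02, 0.01]  # near-still
```

    A channel whose motion ranges over many states carries more potential information per sample than one that barely moves, mirroring the dominant-hand versus ankle contrast reported in the study.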

  11. Sign language processing and the mirror neuron system.

    Science.gov (United States)

    Corina, David P; Knapp, Heather

    2006-05-01

    In this paper we review evidence for frontal and parietal lobe involvement in sign language comprehension and production, and evaluate the extent to which these data can be interpreted within the context of a mirror neuron system for human action observation and execution. We present data from three literatures--aphasia, cortical stimulation, and functional neuroimaging. Generally, we find support for the idea that sign language comprehension and production can be viewed in the context of a broadly-construed frontal-parietal human action observation/execution system. However, sign language data cannot be fully accounted for under a strict interpretation of the mirror neuron system. Additionally, we raise a number of issues concerning the lack of specificity in current accounts of the human action observation/execution system.

  12. On the temporal dynamics of sign production: An ERP study in Catalan Sign Language (LSC).

    Science.gov (United States)

    Baus, Cristina; Costa, Albert

    2015-06-03

    This study investigates the temporal dynamics of sign production and how particular aspects of the signed modality influence the early stages of lexical access. To that end, we explored the electrophysiological correlates associated with sign frequency and iconicity in a picture signing task in a group of bimodal bilinguals. Moreover, a subset of the same participants was tested in the same task but naming the pictures instead. Our results revealed that both frequency and iconicity influenced lexical access in sign production. At the ERP level, iconicity effects originated very early in the course of signing (while absent in the spoken modality), suggesting a stronger activation of the semantic properties of iconic signs. Moreover, frequency effects were modulated by iconicity, suggesting that lexical access in signed language is determined by the iconic properties of the signs. These results support the idea that lexical access is sensitive to the same phenomena in word and sign production, but that its time-course is modulated by particular aspects of the modality in which a lexical item will finally be articulated. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Identifying Overlapping Language Communities: The Case of Chiriquí and Panamanian Signed Languages

    Science.gov (United States)

    Parks, Elizabeth S.

    2016-01-01

    In this paper, I use a holographic metaphor to explain the identification of overlapping sign language communities in Panama. By visualizing Panama's complex signing communities as emitting community "hotspots" through social drama on multiple stages, I employ ethnographic methods to explore overlapping contours of Panama's sign language…

  14. Why Doesn't Everyone Here Speak Sign Language? Questions of Language Policy, Ideology and Economics

    Science.gov (United States)

    Rayman, Jennifer

    2009-01-01

    This paper is a thought experiment exploring the possibility of establishing universal bilingualism in Sign Languages. Focusing in the first part on historical examples of inclusive signing societies such as Martha's Vineyard, the author suggests that it is not possible to create such naturally occurring practices of Sign Bilingualism in societies…

  15. From gesture to sign language: conventionalization of classifier constructions by adult hearing learners of British Sign Language.

    Science.gov (United States)

    Marshall, Chloë R; Morgan, Gary

    2015-01-01

    There has long been interest in why languages are shaped the way they are, and in the relationship between sign language and gesture. In sign languages, entity classifiers are handshapes that encode how objects move, how they are located relative to one another, and how multiple objects of the same type are distributed in space. Previous studies have shown that hearing adults who are asked to use only manual gestures to describe how objects move in space will use gestures that bear some similarities to classifiers. We investigated how accurately hearing adults, who had been learning British Sign Language (BSL) for 1-3 years, produce and comprehend classifiers in (static) locative and distributive constructions. In a production task, learners of BSL knew that they could use their hands to represent objects, but they had difficulty choosing the same, conventionalized, handshapes as native signers. They were, however, highly accurate at encoding location and orientation information. Learners therefore show the same pattern found in sign-naïve gesturers. In contrast, handshape, orientation, and location were comprehended with equal (high) accuracy, and testing a group of sign-naïve adults showed that they too were able to understand classifiers with higher than chance accuracy. We conclude that adult learners of BSL bring their visuo-spatial knowledge and gestural abilities to the tasks of understanding and producing constructions that contain entity classifiers. We speculate that investigating the time course of adult sign language acquisition might shed light on how gesture became (and, indeed, becomes) conventionalized during the genesis of sign languages. Copyright © 2014 Cognitive Science Society, Inc.

  16. Bi-channel Sensor Fusion for Automatic Sign Language Recognition

    DEFF Research Database (Denmark)

    Kim, Jonghwa; Wagner, Johannes; Rehm, Matthias

    2008-01-01

    In this paper, we investigate the mutually complementary functionality of accelerometer (ACC) and electromyogram (EMG) sensors for recognizing seven word-level sign vocabularies in German Sign Language (GSL). Results are discussed for the single channels and for feature-level fusion of the bi-channel sensor data, including the subject-independent condition, where subjective differences do not allow for high recognition rates. Finally, we discuss a problem of feature-level fusion caused by the high disparity between the accuracies of the single-channel classifications.
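
    Feature-level fusion, as opposed to fusing classifier decisions, concatenates the per-channel feature vectors before a single classifier sees them. A schematic sketch (the window data and the two toy features are invented for illustration):

```python
def extract_features(window):
    """Toy per-channel features: mean and peak of a signal window."""
    return [sum(window) / len(window), max(window)]

acc_window = [0.1, 0.4, 0.2]  # hypothetical accelerometer samples
emg_window = [0.7, 0.9, 0.8]  # hypothetical electromyogram samples

# Feature-level fusion: one concatenated vector feeds a single classifier.
fused = extract_features(acc_window) + extract_features(emg_window)
```

    If one channel's features are far less discriminative than the other's, the concatenated vector dilutes the strong channel, which is the fusion problem the abstract points to.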

  17. Sign language in dental education-A new nexus.

    Science.gov (United States)

    Jones, T; Cumberbatch, K

    2017-08-14

    The introduction of the landmark mandatory teaching of sign language to undergraduate dental students at the University of the West Indies (UWI), Mona Campus in Kingston, Jamaica, to bridge the communication gap between dentists and their patients is reviewed. A review of over 90 Doctor of Dental Surgery and Doctor of Dental Medicine curricula in North America, the United Kingdom, parts of Europe and Australia showed no inclusion of sign language in those curricula as a mandatory component. In Jamaica, the government's training school for dental auxiliaries served as the forerunner to the UWI's introduction of formal sign language training in 2012. Outside of the UWI, a couple of dental schools offer sign language courses, but none has a mandatory programme like the one at the UWI. Dentists the world over have had to rely on interpreters to sign with their deaf patients. Deaf people in Jamaica have resented the fact that dentists cannot sign; they have felt insulted and go to the dentist only in emergency situations. The mandatory inclusion of sign language in the Undergraduate Dental Programme curriculum at The University of the West Indies, Mona Campus, sought to establish a direct communication channel to formally bridge this gap. The programme, comprising two sign language courses and a direct clinical competency requirement, was developed during the second year of the first cohort of the newly introduced undergraduate dental programme through a collaborative partnership between two faculties on the Mona Campus. The programme was introduced in 2012 in the third year of the 5-year undergraduate dental programme. To date, two cohorts have completed the programme, and preliminary findings from an ongoing clinical study have shown a positive impact on dental care access and dental treatment for deaf patients at the UWI Mona Dental Polyclinic. The development of a direct communication channel between dental students and the deaf that has led to increased dental

  18. Deficits in narrative abilities in child British Sign Language users with specific language impairment.

    Science.gov (United States)

    Herman, Ros; Rowley, Katherine; Mason, Kathryn; Morgan, Gary

    2014-01-01

    This study details the first ever investigation of narrative skills in a group of 17 deaf signing children who have been diagnosed with disorders in their British Sign Language development, compared with a control group of 17 deaf child signers matched for age, gender, education, quantity and quality of language exposure, and non-verbal intelligence. Children were asked to generate a narrative based on events in a language-free video. Narratives were analysed for global structure, information content and local level grammatical devices, especially verb morphology. The language-impaired group produced shorter, less structured and grammatically simpler narratives than controls, with verb morphology particularly impaired. Despite major differences in how sign and spoken languages are articulated, narrative is shown to be a reliable marker of language impairment across the modality boundaries. © 2014 Royal College of Speech and Language Therapists.

  19. Languages Are More than Words: Spanish and American Sign Language in Early Childhood Settings

    Science.gov (United States)

    Sherman, Judy; Torres-Crespo, Marisel N.

    2015-01-01

    Capitalizing on preschoolers' inherent enthusiasm and capacity for learning, the authors developed and implemented a dual-language program to enable young children to experience diversity and multiculturalism by learning two new languages: Spanish and American Sign Language. Details of the curriculum, findings, and strategies are shared.

  20. Teaching and Learning Sign Language as a “Foreign” Language ...

    African Journals Online (AJOL)

    In recent years, there has been a growing debate in the United States, Europe, and Australia about the nature of the Deaf community as a cultural community,1 and the recognition of signed languages as “real” or “legitimate” languages comparable in all meaningful ways to spoken languages. An important element of this ...

  1. A human mirror neuron system for language: Perspectives from signed languages of the deaf.

    Science.gov (United States)

    Knapp, Heather Patterson; Corina, David P

    2010-01-01

    Language is proposed to have developed atop the human analog of the macaque mirror neuron system for action perception and production [Arbib M.A. 2005. From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics (with commentaries and author's response). Behavioral and Brain Sciences, 28, 105-167; Arbib M.A. (2008). From grasp to language: Embodied concepts and the challenge of abstraction. Journal de Physiologie Paris 102, 4-20]. Signed languages of the deaf are fully-expressive, natural human languages that are perceived visually and produced manually. We suggest that if a unitary mirror neuron system mediates the observation and production of both language and non-linguistic action, three predictions can be made: (1) damage to the human mirror neuron system should non-selectively disrupt both sign language and non-linguistic action processing; (2) within the domain of sign language, a given mirror neuron locus should mediate both perception and production; and (3) the action-based tuning curves of individual mirror neurons should support the highly circumscribed set of motions that form the "vocabulary of action" for signed languages. In this review we evaluate data from the sign language and mirror neuron literatures and find that these predictions are only partially upheld. 2009 Elsevier Inc. All rights reserved.

  2. The benefits of sign language for deaf learners with language challenges

    Directory of Open Access Journals (Sweden)

    Van Staden, Annalene

    2009-12-01

    This article argues for the importance of allowing deaf children to acquire sign language from an early age. It demonstrates firstly that the critical/sensitive period hypothesis for language acquisition can be applied to specific language aspects of spoken languages as well as sign languages (i.e. phonology, grammatical processing and syntax). This makes early diagnosis and early intervention of crucial importance. Moreover, research findings presented in this article demonstrate the advantage that sign language offers in the early years of a deaf child’s life by comparing the language development milestones of deaf learners exposed to sign language from birth to those of late-signers, orally trained deaf learners and hearing learners exposed to spoken language. The controversy over the best medium of instruction for deaf learners is briefly discussed, with emphasis placed on the possible value of bilingual-bicultural programmes to facilitate the development of deaf learners’ literacy skills. Finally, this paper concludes with a discussion of the implications/recommendations of sign language teaching and Deaf education in South Africa.

  3. Impacts of Visual Sonority and Handshape Markedness on Second Language Learning of American Sign Language.

    Science.gov (United States)

    Williams, Joshua T; Newman, Sharlene D

    2016-04-01

    The roles of visual sonority and handshape markedness in sign language acquisition and production were investigated. In Experiment 1, learners were taught sign-nonobject correspondences that varied in sign movement sonority and handshape markedness. Results from a sign-picture matching task revealed that high sonority signs were more accurately matched, especially when the sign contained a marked handshape. In Experiment 2, learners produced these familiar signs in addition to novel signs, which differed based on sonority and markedness. Results from a key-release reaction time reproduction task showed that learners tended to produce high sonority signs much more quickly than low sonority signs, especially when the sign contained an unmarked handshape. This effect was only present in familiar signs. Sign production accuracy rates revealed that high sonority signs were more accurate than low sonority signs. Similarly, signs with unmarked handshapes were produced more accurately than those with marked handshapes. Together, results from Experiments 1 and 2 suggested that signs that contain high sonority movements are more easily processed, both perceptually and productively, and handshape markedness plays a differential role in perception and production. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. THE BENEFIT OF EARLY EXPOSURE TO SIGN LANGUAGE

    Directory of Open Access Journals (Sweden)

    Ljubica PRIBANIKJ

    2009-11-01

    Early diagnosis and intervention are now recognized as undeniable rights of deaf and hard-of-hearing children and their families. The deaf child’s family must have the opportunity to socialize with deaf children and deaf adults. The deaf child’s family must also have access to all the information on the general development of their child, and to special information on hearing impairment, communication options and the linguistic development of the deaf child. The critical period hypothesis for language acquisition proposes that the outcome of language acquisition is not uniform over the lifespan but rather is best during early childhood. Individuals who learned sign language from birth performed better on linguistic and memory tasks than individuals who did not start learning sign language until after puberty. The old prejudice that the deaf child must learn the spoken language at a very young age, and that sign language can wait because it can be easily learned by any person at any age, can no longer be maintained. The cultural approach to deafness emphasizes three necessary components in the development of a deaf child: 1. stimulating early communication using natural sign language within the family and interacting with the Deaf community; 2. bilingual/bicultural education; and 3. ensuring deaf persons’ rights to enjoy the services of high-quality interpreters throughout their education, from kindergarten to university. This new view of the phenomenology of deafness means that the environment needs to be changed in order to meet the deaf person’s needs, not the contrary.

  5. Sign Language Legislation as a Tool for Sustainability

    Science.gov (United States)

    Pabsch, Annika

    2017-01-01

    This article explores three models of sustainability (environmental, economic, and social) and identifies characteristics of a sustainable community necessary to sustain the Deaf community as a whole. It is argued that sign language legislation is a valuable tool for achieving sustainability for the generations to come.

  6. On Selected Phonological Patterns in Saudi Arabian Sign Language

    Science.gov (United States)

    Tomita, Nozomi; Kozak, Viola

    2012-01-01

    This paper focuses on two selected phonological patterns that appear unique to Saudi Arabian Sign Language (SASL). For both sections of this paper, the overall methodology is the same as that discussed in Stephen and Mathur (this volume), with some additional modifications tailored to the specific studies discussed here, which will be expanded…

  7. Achieving mutual understanding in Argentine Sign Language (LSA)

    NARCIS (Netherlands)

    Manrique Cordeje, M.E.

    2017-01-01

    How does (mis)understanding work in conversation? Problems of understanding occur all the time in our everyday social life. How does miscommunication happen and how do we deal with it? This thesis reports on how sign language users manage to understand each other based on a large Conversational

  8. Gesture and Signing in Support of Expressive Language Development

    Science.gov (United States)

    Baker-Ramos, Leslie K.

    2017-01-01

    The purpose of this teacher inquiry is to explore the effects of signing and gesturing on the expressive language development of non-verbal children. The first phase of my inquiry begins with the observations of several non-verbal students with various etiologies in three different educational settings. The focus of these observations is to…

  9. Face Recognition Is Shaped by the Use of Sign Language

    Science.gov (United States)

    Stoll, Chloé; Palluel-Germain, Richard; Caldara, Roberto; Lao, Junpeng; Dye, Matthew W. G.; Aptel, Florent; Pascalis, Olivier

    2018-01-01

    Previous research has suggested that early deaf signers differ in face processing. Which aspects of face processing are changed and the role that sign language may have played in that change are however unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing…

  10. Space and iconicity in German Sign Language (DGS)

    NARCIS (Netherlands)

    Perniss, P.M.

    2007-01-01

    This dissertation investigates the expression of spatial relationships in German Sign Language (Deutsche Gebärdensprache, DGS). The analysis focuses on linguistic expression in the spatial domain in two types of discourse: static scene description (location) and event narratives (location and

  11. Sign language interpreting education : Reflections on interpersonal skills

    NARCIS (Netherlands)

    Hammer, A.; van den Bogaerde, B.; Cirillo, L.; Niemants, N.

    2017-01-01

    We present a description of our didactic approach to train undergraduate sign language interpreters on their interpersonal and reflective skills. Based predominantly on the theory of role-space by Llewellyn-Jones and Lee (2014), we argue that dialogue settings require a dynamic role of the

  13. Sign language indexation within the MPEG-7 framework

    Science.gov (United States)

    Zaharia, Titus; Preda, Marius; Preteux, Francoise J.

    1999-06-01

    In this paper, we address the issue of sign language indexation/recognition. Existing tools, such as on-line Web dictionaries and other educational applications, make exclusive use of textual annotations. However, keyword indexing schemes have strong limitations due to the ambiguity of natural language and to the huge effort needed to manually annotate a large amount of data. In order to overcome these drawbacks, we tackle the sign language indexation issue within the MPEG-7 framework and propose an approach based on linguistic properties and characteristics of sign language. The method developed introduces the concept of hand configurations that are stable over time, instantiated on natural or synthetic prototypes. The prototypes are indexed by means of a shape descriptor defined as a translation-, rotation- and scale-invariant Hough transform. A very compact representation is obtained by considering the Fourier transform of the Hough coefficients. This approach has been applied to two data sets consisting of 'Letters' and 'Words', respectively. The accuracy and robustness of the results are discussed and a complete sign language description schema is proposed.
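    The descriptor idea sketched in this abstract (an invariant Hough transform compacted via a Fourier transform of its coefficients) can be illustrated with a toy sketch. The following NumPy code is an assumption-laden illustration, not the authors' implementation: contour points are centered (removing translation) and radius-normalized (removing scale), then accumulated into a rho-theta Hough array; rotation invariance follows from taking FFT magnitudes along the theta axis, since rotating the shape circularly shifts that axis.

    ```python
    import numpy as np

    def hough_descriptor(points, n_rho=32, n_theta=64):
        """Toy translation/rotation/scale-invariant shape descriptor.

        points: (N, 2) array of (x, y) contour coordinates of a hand shape.
        """
        pts = np.asarray(points, dtype=float)
        pts = pts - pts.mean(axis=0)                     # remove translation
        radius = np.linalg.norm(pts, axis=1).max()
        if radius > 0:
            pts = pts / radius                           # remove scale
        thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
        # Hough line parameterization: rho = x*cos(theta) + y*sin(theta), rho in [-1, 1]
        rho = pts[:, 0, None] * np.cos(thetas) + pts[:, 1, None] * np.sin(thetas)
        bins = np.clip(((rho + 1.0) / 2.0 * n_rho).astype(int), 0, n_rho - 1)
        acc = np.zeros((n_rho, n_theta))
        for j in range(n_theta):                         # vote into the accumulator
            np.add.at(acc[:, j], bins[:, j], 1.0)
        # Rotating the shape circularly shifts the theta axis, so FFT magnitudes
        # along theta give a compact, rotation-invariant representation.
        return np.abs(np.fft.fft(acc, axis=1))
    ```

    Under this sketch, the descriptor of a rotated, scaled, and translated copy of a point set matches the original's, which is the property a prototype-indexing scheme like the one described would rely on.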

  14. Social construction of American sign language--English interpreters.

    Science.gov (United States)

    McDermid, Campbell

    2009-01-01

    Instructors in 5 American Sign Language--English Interpreter Programs and 4 Deaf Studies Programs in Canada were interviewed and asked to discuss their experiences as educators. Within a qualitative research paradigm, their comments were grouped into a number of categories tied to the social construction of American Sign Language--English interpreters, such as learners' age and education and the characteristics of good citizens within the Deaf community. According to the participants, younger students were adept at language acquisition, whereas older learners more readily understood the purpose of lessons. Children of deaf adults were seen as more culturally aware. The participants' beliefs echoed P. Freire's (1970/1970) theories that educators should consider each student's reality and praxis and are responsible for facilitating student self-awareness. Important characteristics in the social construction of students included independence, an appropriate attitude, an understanding of Deaf culture, ethical behavior, community involvement, and a willingness to pursue lifelong learning.

  15. Phonological Development in Hearing Learners of a Sign Language: The Influence of Phonological Parameters, Sign Complexity, and Iconicity

    Science.gov (United States)

    Ortega, Gerardo; Morgan, Gary

    2015-01-01

    The present study implemented a sign-repetition task at two points in time to hearing adult learners of British Sign Language and explored how each phonological parameter, sign complexity, and iconicity affected sign production over an 11-week (22-hour) instructional period. The results show that training improves articulation accuracy and that…

  16. Computerized Sign Language-Based Literacy Training for Deaf and Hard-of-Hearing Children

    Science.gov (United States)

    Holmer, Emil; Heimann, Mikael; Rudner, Mary

    2017-01-01

    Strengthening the connections between sign language and written language may improve reading skills in deaf and hard-of-hearing (DHH) signing children. The main aim of the present study was to investigate whether computerized sign language-based literacy training improves reading skills in DHH signing children who are learning to read. Further,…

  17. Development of Geography and Geology Terminology in British Sign Language

    Science.gov (United States)

    Meara, Rhian; Cameron, Audrey; Quinn, Gary; O'Neill, Rachel

    2016-04-01

    The BSL Glossary Project, run by the Scottish Sensory Centre at the University of Edinburgh focuses on developing scientific terminology in British Sign Language for use in the primary, secondary and tertiary education of deaf and hard of hearing students within the UK. Thus far, the project has developed 850 new signs and definitions covering Chemistry, Physics, Biology, Astronomy and Mathematics. The project has also translated examinations into BSL for students across Scotland. The current phase of the project has focused on developing terminology for Geography and Geology subjects. More than 189 new signs have been developed in these subjects including weather, rivers, maps, natural hazards and Geographical Information Systems. The signs were developed by a focus group with expertise in Geography and Geology, Chemistry, Ecology, BSL Linguistics and Deaf Education all of whom are deaf fluent BSL users.

  18. Production and Comprehension of Prosodic Markers in Sign Language Imperatives

    Directory of Open Access Journals (Sweden)

    Diane Brentari

    2018-05-01

    In signed and spoken language sentences, imperative mood and the corresponding speech acts, such as command, permission, or advice, can be distinguished by morphosyntactic structures, but also solely by prosodic cues, which are the focus of this paper. These cues can express paralinguistic mental states or grammatical meaning, and we show that in American Sign Language (ASL) they also exhibit the function, scope, and alignment of prosodic, linguistic elements of sign languages. The production and comprehension of prosodic facial expressions and temporal patterns can therefore shed light on how cues are grammaticalized in sign languages. They can also be informative about the formal semantic and pragmatic properties of imperative types, not only in ASL but also more broadly. This paper includes three studies: one of production (Study 1) and two of comprehension (Studies 2 and 3). In Study 1, six prosodic cues are analyzed in production: temporal cues of sign and hold duration, and non-manual cues including tilts of the head, head nods, widening of the eyes, and presence of mouthings. Results of Study 1 show that neutral sentences and commands are well distinguished from each other and from other imperative speech acts via these prosodic cues alone; there is more limited differentiation among explanation, permission, and advice. The comprehension of these five speech acts is investigated in Deaf ASL signers in Study 2, and in three additional groups in Study 3: Deaf signers of German Sign Language (DGS), hearing non-signers from the United States, and hearing non-signers from Germany. Results of Studies 2 and 3 show that the ASL group performs significantly better than the other three groups and that all groups perform above chance for all meaning types in comprehension. Language-specific knowledge, therefore, has a significant effect on identifying imperatives based on targeted cues. Command has the most cues associated with it and is the

  19. Effects of Iconicity and Semantic Relatedness on Lexical Access in American Sign Language

    Science.gov (United States)

    Bosworth, Rain G.; Emmorey, Karen

    2010-01-01

    Iconicity is a property that pervades the lexicon of many sign languages, including American Sign Language (ASL). Iconic signs exhibit a motivated, nonarbitrary mapping between the form of the sign and its meaning. We investigated whether iconicity enhances semantic priming effects for ASL and whether iconic signs are recognized more quickly than…

  20. Children creating language: how Nicaraguan sign language acquired a spatial grammar.

    Science.gov (United States)

    Senghas, A; Coppola, M

    2001-07-01

    It has long been postulated that language is not purely learned, but arises from an interaction between environmental exposure and innate abilities. The innate component becomes more evident in rare situations in which the environment is markedly impoverished. The present study investigated the language production of a generation of deaf Nicaraguans who had not been exposed to a developed language. We examined the changing use of early linguistic structures (specifically, spatial modulations) in a sign language that has emerged since the Nicaraguan group first came together: in under two decades, sequential cohorts of learners systematized the grammar of this new sign language. We examined whether the systematicity being added to the language stems from children or adults; our results indicate that such changes originate in children aged 10 and younger. Thus, sequential cohorts of interacting young children collectively possess the capacity not only to learn language, but also to create it.

  1. Language and Literacy Acquisition through Parental Mediation in American Sign Language

    Science.gov (United States)

    Bailes, Cynthia Neese; Erting, Lynne C.; Thumann-Prezioso, Carlene; Erting, Carol J.

    2009-01-01

    This longitudinal case study examined the language and literacy acquisition of a Deaf child as mediated by her signing Deaf parents during her first three years of life. Results indicate that the parents' interactions with their child were guided by linguistic and cultural knowledge that produced an intuitive use of child-directed signing (CDSi)…

  2. Proactive Interference & Language Change in Hearing Adult Students of American Sign Language.

    Science.gov (United States)

    Hoemann, Harry W.; Kreske, Catherine M.

    1995-01-01

    Describes a study that found, contrary to previous reports, that a strong, symmetrical release from proactive interference (PI) is the normal outcome for switches between American Sign Language (ASL) signs and English words and with switches between Manual and English alphabet characters. Subjects were college students enrolled in their first ASL…

  3. Neural correlates of British sign language comprehension: spatial processing demands of topographic language.

    Science.gov (United States)

    MacSweeney, Mairéad; Woll, Bencie; Campbell, Ruth; Calvert, Gemma A; McGuire, Philip K; David, Anthony S; Simmons, Andrew; Brammer, Michael J

    2002-10-01

    In all signed languages used by deaf people, signs are executed in "sign space" in front of the body. Some signed sentences use this space to map detailed "real-world" spatial relationships directly. Such sentences can be considered to exploit sign space "topographically." Using functional magnetic resonance imaging, we explored the extent to which increasing the topographic processing demands of signed sentences was reflected in the differential recruitment of brain regions in deaf and hearing native signers of British Sign Language. When BSL signers performed a sentence anomaly judgement task, the occipito-temporal junction was activated bilaterally to a greater extent for topographic than nontopographic processing. The differential role of movement in the processing of the two sentence types may account for this finding. In addition, enhanced activation was observed in the left inferior and superior parietal lobules during processing of topographic BSL sentences. We argue that the left parietal lobe is specifically involved in processing the precise configuration and location of hands in space to represent objects, agents, and actions. Importantly, no differences in these regions were observed when hearing people heard and saw English translations of these sentences. Despite the high degree of similarity in the neural systems underlying signed and spoken languages, exploring the linguistic features which are unique to each of them broadens our understanding of the systems involved in language comprehension.

  4. Iconicity as a general property of language: evidence from spoken and signed languages

    Directory of Open Access Journals (Sweden)

    Pamela Perniss

    2010-12-01

    Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation to allow the language system to hook up to motor and perceptual experience.

  5. Language Justice for Sign Language Peoples: The UN Convention on the Rights of Persons with Disabilities

    Science.gov (United States)

    Batterbury, Sarah C. E.

    2012-01-01

    Sign Language Peoples (SLPs) across the world have developed their own languages and visuo-gestural-tactile cultures embodying their collective sense of Deafhood (Ladd 2003). Despite this, most nation-states treat their respective SLPs as disabled individuals, favoring disability benefits, cochlear implants, and mainstream education over language…

  6. South African Sign Language and language-in-education policy in ...

    African Journals Online (AJOL)

    KATEVG

    As this passage suggests, there is extensive and growing literature, both in .... For instance, sign language mediates experience in a unique way, as of ..... entail Deaf students studying together, in a setting not unlike that provided by residential .... of ASL as a foreign language option in secondary schools and universities.

  7. Sign languages and the Common European Framework of Reference for Languages : Descriptors and approaches to assessment

    NARCIS (Netherlands)

    L. Leeson; Dr. Beppie van den Bogaerde; Tobias Haug; C. Rathmann

    2015-01-01

    This resource establishes European standards for sign languages for professional purposes in line with the Common European Framework of Reference for Languages (CEFR) and provides an overview of assessment descriptors and approaches. Drawing on preliminary work undertaken in adapting the CEFR to

  8. Continuous Chinese sign language recognition with CNN-LSTM

    Science.gov (United States)

    Yang, Su; Zhu, Qing

    2017-07-01

    The goal of sign language recognition (SLR) is to translate sign language into text, providing a convenient communication tool between deaf and hearing people. In this paper, we formulate an appropriate model based on a convolutional neural network (CNN) combined with a Long Short-Term Memory (LSTM) network, in order to accomplish continuous recognition. With the strong ability of the CNN, the information in pictures captured from Chinese sign language (CSL) videos can be learned and transformed into vectors. Since a video can be regarded as an ordered sequence of frames, an LSTM model is employed to connect with the fully-connected layer of the CNN. As a recurrent neural network (RNN), it is suitable for sequence learning tasks, with the capability of recognizing patterns defined by temporal distance. Compared with a traditional RNN, an LSTM performs better at storing and accessing information. We evaluate this method on our self-built dataset of 40 daily vocabulary items. The experimental results show that the CNN-LSTM recognition method can achieve a high recognition rate with small training sets, which will meet the needs of a real-time SLR system.
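    As a rough illustration of the architecture this abstract describes (a CNN encoding each frame, an LSTM consuming the frame features in order, and a softmax over the sign vocabulary), here is a minimal NumPy forward-pass sketch. Everything in it is a toy assumption: the real system uses trained deep networks, and the layer sizes, feature dimensions, and weights below are invented purely to show the data flow.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def conv2d(x, w):
        """Valid 2D convolution (single channel) -- stand-in for the CNN layers."""
        kh, kw = w.shape
        out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
        return out

    def encode_frame(frame, kernel):
        """CNN stage: convolution + ReLU + global pooling -> a small feature vector."""
        fmap = np.maximum(conv2d(frame, kernel), 0.0)
        return np.array([fmap.mean(), fmap.max()])

    def lstm_step(x, h, c, Wx, Wh, b):
        """One LSTM cell update; gates stacked as [input, forget, cell, output]."""
        z = Wx @ x + Wh @ h + b
        n = h.size
        i, f = sigmoid(z[:n]), sigmoid(z[n:2 * n])
        g, o = np.tanh(z[2 * n:3 * n]), sigmoid(z[3 * n:])
        c = f * c + i * g          # cell state carries longer-range information
        h = o * np.tanh(c)
        return h, c

    def recognize(frames, kernel, Wx, Wh, b, Wout):
        """CNN-LSTM forward pass over a frame sequence -> vocabulary probabilities."""
        h = np.zeros(Wh.shape[1])
        c = np.zeros_like(h)
        for frame in frames:                  # LSTM consumes CNN features in order
            h, c = lstm_step(encode_frame(frame, kernel), h, c, Wx, Wh, b)
        logits = Wout @ h
        e = np.exp(logits - logits.max())     # softmax over the sign vocabulary
        return e / e.sum()
    ```

    In the paper's setting the CNN and LSTM weights would be learned jointly on CSL video; here untrained weights merely demonstrate how a sequence of frames is reduced to a distribution over signs.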

  9. Directionality effects in simultaneous language interpreting: the case of sign language interpreters in The Netherlands.

    Science.gov (United States)

    Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan

    2011-01-01

    The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.

  10. Child Modifiability as a Predictor of Language Abilities in Deaf Children Who Use American Sign Language.

    Science.gov (United States)

    Mann, Wolfgang; Peña, Elizabeth D; Morgan, Gary

    2015-08-01

    This research explored the use of dynamic assessment (DA) for language-learning abilities in signing deaf children from deaf and hearing families. Thirty-seven deaf children, aged 6 to 11 years, were identified as either stronger (n = 26) or weaker (n = 11) language learners according to teacher or speech-language pathologist report. All children received 2 scripted, mediated learning experience sessions targeting vocabulary knowledge—specifically, the use of semantic categories that were carried out in American Sign Language. Participant responses to learning were measured in terms of an index of child modifiability. This index was determined separately at the end of the 2 individual sessions. It combined ratings reflecting each child's learning abilities and responses to mediation, including social-emotional behavior, cognitive arousal, and cognitive elaboration. Group results showed that modifiability ratings were significantly better for stronger language learners than for weaker language learners. The strongest predictors of language ability were cognitive arousal and cognitive elaboration. Mediator ratings of child modifiability (i.e., combined score of social-emotional factors and cognitive factors) are highly sensitive to language-learning abilities in deaf children who use sign language as their primary mode of communication. This method can be used to design targeted interventions.

  11. Generation of Signs within Semantic and Phonological Categories: Data from Deaf Adults and Children Who Use American Sign Language

    Science.gov (United States)

    Beal-Alvarez, Jennifer S.; Figueroa, Daileen M.

    2017-01-01

    Two key areas of language development include semantic and phonological knowledge. Semantic knowledge relates to word and concept knowledge. Phonological knowledge relates to how language parameters combine to create meaning. We investigated signing deaf adults' and children's semantic and phonological sign generation via one-minute tasks,…

  12. Monitoring Different Phonological Parameters of Sign Language Engages the Same Cortical Language Network but Distinctive Perceptual Ones.

    Science.gov (United States)

    Cardin, Velia; Orfanidou, Eleni; Kästner, Lena; Rönnberg, Jerker; Woll, Bencie; Capek, Cheryl M; Rudner, Mary

    2016-01-01

    The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine if brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer RTs and stronger activations in an action observation network in all participants and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.

  13. How to describe mouth patterns in the Danish Sign Language Dictionary

    DEFF Research Database (Denmark)

    Kristoffersen, Jette Hedegaard; Boye Niemela, Janne

    2008-01-01

    The Danish Sign Language dictionary project aims at creating an electronic dictionary of the basic vocabulary of Danish Sign Language. One of many issues in compiling the dictionary has been to analyse the status of mouth patterns in Danish Sign Language and, consequently, to decide at which level...

  14. Semantic Fluency in Deaf Children Who Use Spoken and Signed Language in Comparison with Hearing Peers

    Science.gov (United States)

    Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.

    2018-01-01

    Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…

  15. Medical Signbank as a Model for Sign Language Planning? A Review of Community Engagement

    Science.gov (United States)

    Napier, Jemina; Major, George; Ferrara, Lindsay; Johnston, Trevor

    2015-01-01

    This paper reviews a sign language planning project conducted in Australia with deaf Auslan users. The Medical Signbank project utilised a cooperative language planning process to engage with the Deaf community and sign language interpreters to develop an online interactive resource of health-related signs, in order to address a gap in the health…

  16. Numeral-Incorporating Roots in Numeral Systems: A Comparative Analysis of Two Sign Languages

    Science.gov (United States)

    Fuentes, Mariana; Massone, Maria Ignacia; Fernandez-Viader, Maria del Pilar; Makotrinsky, Alejandro; Pulgarin, Francisca

    2010-01-01

    Numeral-incorporating roots in the numeral systems of Argentine Sign Language (LSA) and Catalan Sign Language (LSC), as well as the main features of the number systems of both languages, are described and compared. Informants discussed the use of numerals and roots in both languages (in most cases in natural contexts). Ten informants took part in…

  17. South African sign language human-computer interface in the context of the national accessibility portal

    CSIR Research Space (South Africa)

    Olivrin, GJ

    2006-02-01

    Full Text Available (for example, between a deaf person who can sign and an able person or a person with a different disability who cannot sign). METHODOLOGY: A signing avatar is set up to work together with a chatterbot. The chatterbot is a natural language dialogue interface... Replies are then offered in sign language, interpreted by the signing avatar, an animated character that can reproduce human-like gestures and expressions. To make South African Sign Language (SASL) available digitally, computational models of the language...

  18. Lexical Variation and Change in British Sign Language

    Science.gov (United States)

    Stamp, Rose; Schembri, Adam; Fenlon, Jordan; Rentelis, Ramas; Woll, Bencie; Cormier, Kearsy

    2014-01-01

    This paper presents results from a corpus-based study investigating lexical variation in BSL. An earlier study investigating variation in BSL numeral signs found that younger signers were using a decreasing variety of regionally distinct variants, suggesting that levelling may be taking place. Here, we report findings from a larger investigation looking at regional lexical variants for colours, countries, numbers and UK placenames elicited as part of the BSL Corpus Project. Age, school location and language background were significant predictors of lexical variation, with younger signers using a more levelled variety. This change appears to be happening faster in particular sub-groups of the deaf community (e.g., signers from hearing families). Also, we find that for the names of some UK cities, signers from outside the region use a different sign than those who live in the region. PMID:24759673

  19. A Proposed Pedagogical Mobile Application for Learning Sign Language

    Directory of Open Access Journals (Sweden)

    Samir Abou El-Seoud

    2013-01-01

    Full Text Available A handheld device, such as a cellular phone or a PDA, can be used in acquiring Sign Language (SL). The developed system uses graphic applications: the user uses the graphical system to view and to acquire knowledge about sign grammar and syntax based on the local vernacular particular to the country. This paper explores and exploits the possibility of developing a mobile system to help deaf and other people communicate and learn using handheld devices. The pedagogical assessment of the prototype application, which uses a recognition-based interface (e.g., images and videos), gave evidence that the mobile application is memorable and learnable. Additionally, considering primacy and recency effects in the interface design will improve memorability and learnability.

  20. Multimodal semantic quantity representations: further evidence from Korean Sign Language

    Directory of Open Access Journals (Sweden)

    Frank eDomahs

    2012-01-01

    Full Text Available Korean deaf signers performed a number comparison task on pairs of Arabic digits. In their RT profiles, the expected magnitude effect was systematically modified by properties of number signs in Korean Sign Language in a culture-specific way (not observed in hearing and deaf Germans or hearing Chinese). We conclude that finger-based quantity representations are automatically activated even in simple tasks with symbolic input, although this may be irrelevant and even detrimental for task performance. These finger-based numerical representations are accessed in addition to another, more basic quantity system, which is evidenced by the magnitude effect. In sum, these results are inconsistent with models assuming only one single amodal representation of numerical quantity.

  1. Legal and Ethical Imperatives for Using Certified Sign Language Interpreters in Health Care Settings: How to "Do No Harm" When "It's (All) Greek" (Sign Language) to You.

    Science.gov (United States)

    Nonaka, Angela M

    2016-09-01

    Communication obstacles in health care settings adversely impact patient-practitioner interactions by impeding service efficiency, reducing mutual trust and satisfaction, or even endangering health outcomes. When interlocutors are separated by language, interpreters are required. The efficacy of interpreting, however, is constrained not just by interpreters' competence but also by health care providers' facility working with interpreters. Deaf individuals whose preferred form of communication is a signed language often encounter communicative barriers in health care settings. In those environments, signing Deaf people are entitled to equal communicative access via sign language interpreting services according to the Americans with Disabilities Act and Executive Order 13166, the Limited English Proficiency Initiative. Yet litigation in states across the United States suggests that individual and institutional providers remain uncertain about their legal obligations to provide equal communicative access. This article discusses the legal and ethical imperatives for using professionally certified (vs. ad hoc) sign language interpreters in health care settings. After first outlining the legal terrain governing provision of sign language interpreting services, the article describes different types of "sign language" (e.g., American Sign Language vs. manually coded English) and different forms of "sign language interpreting" (e.g., interpretation vs. transliteration vs. translation; simultaneous vs. consecutive interpreting; individual vs. team interpreting). This is followed by reviews of the formal credentialing process and of specialized forms of sign language interpreting, that is, certified deaf interpreting, trilingual interpreting, and court interpreting. After discussing practical steps for contracting professional sign language interpreters and addressing ethical issues of confidentiality, the article concludes by offering suggestions for working more effectively…

  2. Asset Management of Roadway Signs Through Advanced Technology

    Science.gov (United States)

    2003-06-01

    This research project aims to ease the process of Roadway Sign asset management. The project utilized handheld computer and global positioning system (GPS) technology to capture sign location data along with a timestamp. This data collection effort w...

  3. DAISY, the best way to author sign language publications

    CSIR Research Space (South Africa)

    Olivrin, G

    2009-09-01

    Full Text Available ... are further discussed that will influence the design of future DAISY standards. 2.1 Creation of Sign Language Content: To create a full-text/full-audio and full-text/full-video DAISY test book, the original content of "Laws of the Game 2008/2009" (FIFA...

  4. The "SignOn"-Model for Teaching Written Language to Deaf People

    Directory of Open Access Journals (Sweden)

    Marlene Hilzensauer

    2012-08-01

    Full Text Available This paper shows a method of teaching written language to deaf people using sign language as the language of instruction. Written texts in the target language are combined with sign language videos which provide the users with various modes of translation (words/phrases/sentences). As examples, two EU projects for English for the Deaf are presented which feature English texts and translations into the national sign languages of all the partner countries, plus signed grammar explanations and interactive exercises. Both courses are web-based; the programs may be accessed free of charge via the respective homepages (without any download or log-in).

  5. The Effects of Electronic Communication on American Sign Language

    Science.gov (United States)

    Schneider, Erin; Kozak, L. Viola; Santiago, Roberto; Stephen, Anika

    2012-01-01

    Technological and language innovation often flow in concert with one another. Casual observation by researchers has shown that electronic communication memes, in the form of abbreviations, have found their way into spoken English. This study focuses on the current use of electronic modes of communication, such as cell phones, smartphones, and e-mail, and…

  6. Methodological and Theoretical Issues in the Adaptation of Sign Language Tests: An Example from the Adaptation of a Test to German Sign Language

    Science.gov (United States)

    Haug, Tobias

    2012-01-01

    Despite the current need for reliable and valid test instruments in different countries in order to monitor the sign language acquisition of deaf children, very few tests are commercially available that offer strong evidence for their psychometric properties. This mirrors the current state of affairs for many sign languages, where very little…

  7. Emergency Department utilization among Deaf American Sign Language users.

    Science.gov (United States)

    McKee, Michael M; Winters, Paul C; Sen, Ananda; Zazove, Philip; Fiscella, Kevin

    2015-10-01

    Deaf American Sign Language (ASL) users comprise a linguistic minority population with poor health care access due to communication barriers and low health literacy. Potentially, these health care barriers could increase Emergency Department (ED) use. The objective was to compare ED use between deaf and non-deaf patients, using a retrospective cohort from medical records. The sample was derived from 400 randomly selected charts (200 deaf ASL users and 200 hearing English speakers) from an outpatient primary care health center with a high volume of deaf patients. Abstracted data included patient demographics, insurance, health behavior, and ED use in the past 36 months. Deaf patients were more likely to be never smokers and to be insured through Medicaid. In an adjusted analysis, deaf individuals were significantly more likely to use the ED (odds ratio [OR], 1.97; 95% confidence interval [CI], 1.11-3.51) over the prior 36 months. Deaf American Sign Language users appear to be at greater odds for elevated ED utilization when compared to the general hearing population. Efforts to further understand the drivers for increased ED utilization among deaf ASL users are much needed. Copyright © 2015 Elsevier Inc. All rights reserved.
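    The OR of 1.97 above is adjusted for covariates via a regression model; as a rough illustration of where such a confidence interval comes from, here is a minimal sketch of an unadjusted odds ratio with a Woolf 95% CI from a 2x2 table (the counts below are hypothetical, not taken from the study):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Woolf 95% CI for a 2x2 table:
         exposed:   a events, b non-events
         unexposed: c events, d non-events
    """
    or_ = (a / b) / (c / d)
    # Standard error of log(OR) under the Woolf (log) method
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Hypothetical counts (NOT the study's data): 80 of 200 deaf patients
# used the ED vs. 50 of 200 hearing patients.
or_, (lo, hi) = odds_ratio_ci(80, 120, 50, 150)
```

    Note that an adjusted OR like the paper's cannot be read off a single table; it requires fitting a model (e.g., logistic regression) with the covariates included.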

  8. Assessing Health Literacy in Deaf American Sign Language Users

    Science.gov (United States)

    McKee, Michael M.; Paasche-Orlow, Michael; Winters, Paul C.; Fiscella, Kevin; Zazove, Philip; Sen, Ananda; Pearson, Thomas

    2015-01-01

    Communication and language barriers isolate Deaf American Sign Language (ASL) users from mass media, healthcare messages, and health care communication, which when coupled with social marginalization, places them at a high risk for inadequate health literacy. Our objectives were to translate, adapt, and develop an accessible health literacy instrument in ASL and to assess the prevalence and correlates of inadequate health literacy among Deaf ASL users and hearing English speakers using a cross-sectional design. A total of 405 participants (166 Deaf and 239 hearing) were enrolled in the study. The Newest Vital Sign was adapted, translated, and developed into an ASL version of the NVS (ASL-NVS). Forty-eight percent of Deaf participants had inadequate health literacy, and Deaf individuals were 6.9 times more likely than hearing participants to have inadequate health literacy. The new ASL-NVS, available on a self-administered computer platform, demonstrated good correlation with reading literacy. The prevalence of Deaf ASL users with inadequate health literacy is substantial, warranting further interventions and research. PMID:26513036

  9. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    Directory of Open Access Journals (Sweden)

    M. Al-Rousan

    2005-08-01

    Full Text Available Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of the Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers: they do not require iterative training, and they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
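    The abstract's two selling points, no iterative training and scalability in the number of classes, follow from fitting polynomial-expanded features by least squares: one linear solve per class. A minimal dependency-free sketch of that idea on toy 2-D data (the degree-2 feature map, toy clusters, and ridge term are illustrative assumptions, not the paper's actual ArSL features):

```python
import random

def poly_expand(x):
    # Degree-2 polynomial feature map for a 2-D sample:
    # [1, x1, x2, x1^2, x1*x2, x2^2]
    x1, x2 = x
    return [1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2]

def solve(A, b):
    # Gaussian elimination with partial pivoting for A w = b.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

def train(samples, labels, n_classes, ridge=1e-6):
    # One-shot least-squares fit: solves the normal equations directly
    # for each class (no iterative training), so adding classes just
    # adds linear solves against the same Gram matrix.
    Phi = [poly_expand(x) for x in samples]
    d = len(Phi[0])
    G = [[sum(Phi[k][i] * Phi[k][j] for k in range(len(Phi)))
          + (ridge if i == j else 0.0) for j in range(d)] for i in range(d)]
    W = []
    for cls in range(n_classes):
        y = [1.0 if lab == cls else 0.0 for lab in labels]
        b = [sum(Phi[k][i] * y[k] for k in range(len(Phi))) for i in range(d)]
        W.append(solve(G, b))
    return W

def predict(W, x):
    phi = poly_expand(x)
    scores = [sum(wi * fi for wi, fi in zip(w, phi)) for w in W]
    return scores.index(max(scores))

# Toy data: two well-separated 2-D clusters standing in for two signs.
random.seed(1)
data = ([(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(20)] +
        [(random.gauss(3, 0.3), random.gauss(3, 0.3)) for _ in range(20)])
labels = [0] * 20 + [1] * 20
W = train(data, labels, 2)
acc = sum(predict(W, x) == y for x, y in zip(data, labels)) / len(data)
```

    A real ArSL system would replace the toy points with glove- or vision-derived feature vectors and use one output per alphabet letter, but the training step stays a single linear solve per class.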

  10. First languages and the technologies for education

    Directory of Open Access Journals (Sweden)

    Julio VERA VILA

    2013-12-01

    Full Text Available This article is a reflection on how each human being's learning process and the cultural development of our species are connected to the possibility of translating reality (what we think, what we feel, our interactions) into a system of signs that, having shared meanings, enriches our intrapersonal and interpersonal communication. Spoken language was the first such technology; being genetically well prepared for it, we learn it through immersion. The rest, from written language to hypermedia, have to be well taught and even better learned. We conclude by highlighting the need to take advantage of the benefits provided by the new technologies available today in order to overcome the digital divide, without forgetting other divides, such as literacy acquisition, which are the basis of the new technologies. We therefore need a theory and practice of education that embraces this complexity and avoids simplistic reductionism.

  11. Language Technologies for Lifelong Learning

    NARCIS (Netherlands)

    Greller, Wolfgang

    2011-01-01

    Greller, W. (2010). Language Technologies for Lifelong Learning. In S. Trausan-Matu & P. Dessus (Eds.), Proceedings of the Natural Language Processing in Support of Learning: Metrics, Feedback and Connectivity. Second Internationl Workshop - NLPSL 2010 (pp. 6-8). September, 14, 2010, Bucharest,

  12. JOURNAL OF LANGUAGE, TECHNOLOGY ...

    African Journals Online (AJOL)

    Frederick Iraki

    The students learning English as a foreign language sometimes enjoy computer .... Motivation done by Ahangari & Ghalami Nobar (2012), it was found that the modern world of the ..... Journal of Academic and Applied Studies, 2(1), 39-61.

  13. Prediction in a visual language: real-time sentence processing in American Sign Language across development.

    Science.gov (United States)

    Lieberman, Amy M; Borovsky, Arielle; Mayberry, Rachel I

    2018-01-01

    Prediction during sign language comprehension may enable signers to integrate linguistic and non-linguistic information within the visual modality. In two eyetracking experiments, we investigated American Sign Language (ASL) semantic prediction in deaf adults and children (aged 4-8 years). Participants viewed ASL sentences in a visual world paradigm in which the sentence-initial verb was either neutral or constrained relative to the sentence-final target noun. Adults and children made anticipatory looks to the target picture before the onset of the target noun in the constrained condition only, showing evidence for semantic prediction. Crucially, signers alternated gaze between the stimulus sign and the target picture only when the sentential object could be predicted from the verb. Signers therefore engage in prediction by optimizing visual attention between divided linguistic and referential signals. These patterns suggest that prediction is a modality-independent process, and theoretical implications are discussed.

  14. Atypical Speech and Language Development: A Consensus Study on Clinical Signs in the Netherlands

    Science.gov (United States)

    Visser-Bochane, Margot I.; Gerrits, Ellen; van der Schans, Cees P.; Reijneveld, Sijmen A.; Luinge, Margreet R.

    2017-01-01

    Background: Atypical speech and language development is one of the most common developmental difficulties in young children. However, which clinical signs characterize atypical speech-language development at what age is not clear. Aim: To achieve a national and valid consensus on clinical signs and red flags (i.e. most urgent clinical signs) for…

  15. Designing an American Sign Language Avatar for Learning Computer Science Concepts for Deaf or Hard-of-Hearing Students and Deaf Interpreters

    Science.gov (United States)

    Andrei, Stefan; Osborne, Lawrence; Smith, Zanthia

    2013-01-01

    The current learning process of Deaf or Hard of Hearing (D/HH) students taking Science, Technology, Engineering, and Mathematics (STEM) courses needs, in general, a sign interpreter for the translation of English text into American Sign Language (ASL) signs. This method is at best impractical due to the lack of availability of a specialized sign…

  16. The Link between Form and Meaning in American Sign Language: Lexical Processing Effects

    Science.gov (United States)

    Thompson, Robin L.; Vinson, David P.; Vigliocco, Gabriella

    2009-01-01

    Signed languages exploit iconicity (the transparent relationship between meaning and form) to a greater extent than spoken languages, where it is largely limited to onomatopoeia. In a picture-sign matching experiment measuring reaction times, the authors examined the potential advantage of iconicity both for 1st- and 2nd-language learners of…

  17. Comic Books: A Learning Tool for Meaningful Acquisition of Written Sign Language

    Science.gov (United States)

    Guimarães, Cayley; Oliveira Machado, Milton César; Fernandes, Sueli F.

    2018-01-01

    Deaf people use Sign Language (SL) for intellectual development, communication and other human activities that are mediated by language, such as the expression of complex and abstract thoughts and feelings, and for literature, culture and knowledge. The Brazilian Sign Language (Libras) is a complete linguistic system in the visual-spatial modality,…

  18. Standardizing Chinese Sign Language for Use in Post-Secondary Education

    Science.gov (United States)

    Lin, Christina Mien-Chun; Gerner de Garcia, Barbara; Chen-Pichler, Deborah

    2009-01-01

    There are over 100 languages in China, including Chinese Sign Language. Given the large population and geographical dispersion of the country's deaf community, sign variation is to be expected. Language barriers due to lexical variation may exist for deaf college students in China, who often live outside their home regions. In presenting an…

  19. Sign Language Translator Application Using OpenCV

    Science.gov (United States)

    Triyono, L.; Pratisto, E. H.; Bawono, S. A. T.; Purnomo, F. A.; Yudhanto, Y.; Raharjo, B.

    2018-03-01

    This research focuses on the development of an Android-based sign language translator application using OpenCV; the application relies on color differences. The authors also utilize support vector machine learning to predict the label. Results showed that a fingertip-coordinate search method can be used to recognize gestures made with an open hand, while gestures made with a clenched hand are recognized using Hu Moments values. The fingertip method is more resilient in gesture recognition, with a higher success rate of 95% at distance variations of 35 cm and 55 cm, light intensities of approximately 90 lux and 100 lux, and a plain green background, compared with the Hu Moments method at 40% under the same parameters. Against outdoor backgrounds, the application still cannot be used reliably, with only 6 trials succeeding and the rest failing.
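    In OpenCV these invariants come from cv2.HuMoments(cv2.moments(mask)); as a dependency-free sketch of what the first four of the seven invariants look like under the hood, and why they suit clenched-hand recognition (they ignore where the blob sits in the frame), here is a toy implementation on a binary grid. The small blob used below is an arbitrary test shape, not data from the paper:

```python
def hu_moments(img):
    """First four Hu invariants of a binary image (2-D list of 0/1).

    OpenCV's cv2.HuMoments returns seven; the remaining three follow
    the same pattern from third-order normalized central moments.
    """
    def raw(p, q):  # raw moment M_pq = sum of x^p * y^q * I(y, x)
        return sum((x ** p) * (y ** q) * v
                   for y, row in enumerate(img) for x, v in enumerate(row))
    m00 = raw(0, 0)
    xc, yc = raw(1, 0) / m00, raw(0, 1) / m00
    def mu(p, q):  # central moment: translation-invariant
        return sum(((x - xc) ** p) * ((y - yc) ** q) * v
                   for y, row in enumerate(img) for x, v in enumerate(row))
    def eta(p, q):  # normalized central moment: also scale-invariant
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    e20, e02, e11 = eta(2, 0), eta(0, 2), eta(1, 1)
    e30, e03, e21, e12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return [
        e20 + e02,
        (e20 - e02) ** 2 + 4 * e11 ** 2,
        (e30 - 3 * e12) ** 2 + (3 * e21 - e03) ** 2,
        (e30 + e12) ** 2 + (e21 + e03) ** 2,
    ]

# Translation invariance: the same toy blob, shifted, gives equal values,
# which is why a clenched hand can be matched anywhere in the frame.
blank = lambda: [[0] * 12 for _ in range(12)]
a, b = blank(), blank()
for y, x in [(1, 1), (2, 1), (3, 1), (3, 2), (3, 3)]:
    a[y][x] = 1
    b[y + 4][x + 5] = 1
ha, hb = hu_moments(a), hu_moments(b)
```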

  20. Tools for language: patterned iconicity in sign language nouns and verbs.

    Science.gov (United States)

    Padden, Carol; Hwang, So-One; Lepic, Ryan; Seegers, Sharon

    2015-01-01

    When naming certain hand-held, man-made tools, American Sign Language (ASL) signers exhibit either of two iconic strategies: a handling strategy, where the hands show holding or grasping an imagined object in action, or an instrument strategy, where the hands represent the shape or a dimension of the object in a typical action. The same strategies are also observed in the gestures of hearing nonsigners identifying pictures of the same set of tools. In this paper, we compare spontaneously created gestures from hearing nonsigning participants to commonly used lexical signs in ASL. Signers and gesturers were asked to respond to pictures of tools and to video vignettes of actions involving the same tools. Nonsigning gesturers overwhelmingly prefer the handling strategy for both the Picture and Video conditions. Nevertheless, they use more instrument forms when identifying tools in pictures, and more handling forms when identifying actions with tools. We found that ASL signers generally favor the instrument strategy when naming tools, but when describing tools being used by an actor, they are significantly more likely to use more handling forms. The finding that both gesturers and signers are more likely to alternate strategies when the stimuli are pictures or video suggests a common cognitive basis for differentiating objects from actions. Furthermore, the presence of a systematic handling/instrument iconic pattern in a sign language demonstrates that a conventionalized sign language exploits the distinction for grammatical purpose, to distinguish nouns and verbs related to tool use. Copyright © 2014 Cognitive Science Society, Inc.

  1. The Road to Language Learning Is Not Entirely Iconic: Iconicity, Neighborhood Density, and Frequency Facilitate Acquisition of Sign Language.

    Science.gov (United States)

    Caselli, Naomi K; Pyers, Jennie E

    2017-07-01

    Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children's acquisition of new words, spoken or signed. We asked whether iconicity's prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children's productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure between signed and spoken language, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.
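    The analysis above used mixed-effects logistic regressions over parental-report data. As a simplified illustration of the fixed-effects part only (the by-child and by-item random effects are omitted, and the data and effect sizes below are simulated, not the Anderson & Reilly norms), one could fit a plain logistic model by gradient ascent:

```python
import math
import random

def fit_logistic(X, y, lr=0.5, epochs=500):
    # Plain logistic regression via batch gradient ascent on the
    # log-likelihood; w[0] is the intercept. Real mixed-effects models
    # add random intercepts per child and per sign on top of this.
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = yi - 1 / (1 + math.exp(-z))
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / len(X) for wj, g in zip(w, grad)]
    return w

# Simulated signs: standardized iconicity, neighborhood density, and
# frequency, each given an invented positive effect on the odds that a
# child produces the sign.
random.seed(0)
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(400)]
true_betas = [0.8, 0.5, 0.6]
y = [1 if random.random() < 1 / (1 + math.exp(-sum(b * x for b, x in zip(true_betas, xi))))
     else 0 for xi in X]
w = fit_logistic(X, y)  # w[1:] should recover three positive slopes
```

    The paper's claim that the three factors "independently facilitated" acquisition corresponds to all three fitted slopes being positive while entered in the same model.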

  2. A Comparison of Comprehension Processes in Sign Language Interpreter Videos with or without Captions.

    Science.gov (United States)

    Debevc, Matjaž; Milošević, Danijela; Kožuh, Ines

    2015-01-01

    One important theme in captioning is whether the implementation of captions in individual sign language interpreter videos can positively affect viewers' comprehension when compared with sign language interpreter videos without captions. In our study, an experiment was conducted using four video clips with information about everyday events. Fifty-one deaf and hard of hearing sign language users alternately watched the sign language interpreter videos with, and without, captions. Afterwards, they answered ten questions. The results showed that the presence of captions positively affected their rates of comprehension, which increased by 24% among deaf viewers and 42% among hard of hearing viewers. The most obvious differences in comprehension between watching sign language interpreter videos with and without captions were found for the subjects of hiking and culture, where comprehension was higher when captions were used. The results led to suggestions for the consistent use of captions in sign language interpreter videos in various media.

  3. Bimodal bilingualism as multisensory training?: Evidence for improved audiovisual speech perception after sign language exposure.

    Science.gov (United States)

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-15

    The aim of the present study was to characterize effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 ASL learners performed this task in the fMRI scanner. Results indicated that L2 American Sign Language (ASL) learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learner's rating on co-sign speech use and lipreading ability was correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Autonomous Language Learning with Technology

    Science.gov (United States)

    Forsythe, Edward

    2013-01-01

    Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT) wants English language education to be more communicative. Japanese teachers of English (JTEs) need to adapt their instructional practices to meet this goal; however, they may not feel confident enough to teach speaking themselves. Using technology, JTEs have the ability…

  5. Psychometric properties of a sign language version of the Mini International Neuropsychiatric Interview (MINI)

    OpenAIRE

    Øhre, Beate; Saltnes, Hege; von Tetzchner, Stephen; Falkum, Erik

    2014-01-01

    Background There is a need for psychiatric assessment instruments that enable reliable diagnoses in persons with hearing loss who have sign language as their primary language. The objective of this study was to assess the validity of the Norwegian Sign Language (NSL) version of the Mini International Neuropsychiatric Interview (MINI). Methods The MINI was translated into NSL. Forty-one signing patients consecutively referred to two specialised psychiatric units were assessed with a diagnos...

  6. Deaf New Zealand Sign Language users' access to healthcare.

    Science.gov (United States)

    Witko, Joanne; Boyles, Pauline; Smiler, Kirsten; McKee, Rachel

    2017-12-01

    The research described was undertaken as part of a Sub-Regional Disability Strategy 2017-2022 across the Wairarapa, Hutt Valley and Capital and Coast District Health Boards (DHBs). The aim was to investigate deaf New Zealand Sign Language (NZSL) users' quality of access to health services. Findings have formed the basis for developing a 'NZSL plan' for DHBs in the Wellington sub-region. Qualitative data was collected from 56 deaf participants and family members about their experiences of healthcare services via focus group, individual interviews and online survey, which were thematically analysed. Contextual perspective was gained from 57 healthcare professionals at five meetings. Two professionals were interviewed, and 65 staff responded to an online survey. A deaf steering group co-designed the framework and methods, and validated findings. Key issues reported across the health system include: inconsistent interpreter provision; lack of informed consent for treatment via communication in NZSL; limited access to general health information in NZSL and the reduced ability of deaf patients to understand and comply with treatment options. This problematic communication with NZSL users echoes international evidence and other documented local evidence for patients with limited English proficiency. Deaf NZSL users face multiple barriers to equitable healthcare, stemming from linguistic and educational factors and inaccessible service delivery. These need to be addressed through policy and training for healthcare personnel that enable effective systemic responses to NZSL users. Deaf participants emphasise that recognition of their identity as members of a language community is central to improving their healthcare experiences.

  7. Social Interaction Affects Neural Outcomes of Sign Language Learning As a Foreign Language in Adults.

    Science.gov (United States)

    Yusa, Noriaki; Kim, Jungho; Koizumi, Masatoshi; Sugiura, Motoaki; Kawashima, Ryuta

    2017-01-01

Children naturally acquire a language in social contexts where they interact with their caregivers. Indeed, research shows that social interaction facilitates lexical and phonological development at the early stages of child language acquisition. It is not clear, however, whether the relationship between social interaction and learning applies to adult second language acquisition of syntactic rules. Does learning second-language syntactic rules through social interaction with a native speaker, versus without such interaction, affect behavior and the brain? The current study aims to answer this question. Adult Japanese participants learned a new foreign language, Japanese Sign Language (JSL), either from a native deaf signer or via DVDs. Neural correlates of acquiring new linguistic knowledge were investigated using functional magnetic resonance imaging (fMRI). The participants in the two groups were indistinguishable in terms of their behavioral data after the instruction. The fMRI data, however, revealed significant differences in neural activity between the two groups. Significant activations in the left inferior frontal gyrus (IFG) were found for the participants who learned JSL through interaction with the native signer. In contrast, no cortical activation change in the left IFG was found for the group who experienced the same visual input for the same duration via the DVD presentation. Given that the left IFG is involved in the syntactic processing of language, spoken or signed, learning through social interaction resulted in an fMRI signature typical of native speakers: activation of the left IFG. Thus, broadly speaking, the availability of communicative interaction is necessary for second language acquisition, and this results in observed changes in the brain.

  8. Signing Earth Science: Accommodations for Students Who Are Deaf or Hard of Hearing and Whose First Language Is Sign

    Science.gov (United States)

    Vesel, J.; Hurdich, J.

    2014-12-01

TERC and Vcom3D used the SigningAvatar® accessibility software to research and develop a Signing Earth Science Dictionary (SESD) of approximately 750 standards-based Earth science terms for high school students who are deaf or hard of hearing and whose first language is sign. The partners also evaluated the extent to which use of the SESD furthers understanding of Earth science content, command of the language of Earth science, and the ability to study Earth science independently. Disseminated as a Web-based version and an App, the SESD is intended to serve the ~36,000 grade 9-12 students who are deaf or hard of hearing and whose first language is sign, the majority of whom leave high school reading at the fifth-grade level or below. It is also intended for teachers and interpreters who interact with members of this population and for professionals working with Earth science education programs during field trips, internships, etc. The signed SESD terms have been incorporated into a Mobile Communication App (MCA). This App for Android devices is intended to facilitate communication between English speakers and persons who communicate in American Sign Language (ASL) or Signed English. It can translate words, phrases, or whole sentences from written or spoken English into animated signing. It can also fingerspell proper names and other words for which there are no signs. For our presentation, we will demonstrate the interactive features of the SigningAvatar® accessibility software that support the three principles of Universal Design for Learning (UDL) and have been incorporated into the SESD and MCA. Results from national field tests will provide insight into the SESD's and MCA's potential applicability beyond grade 12 as accommodations for accessing the vocabulary deaf and hard of hearing students need for study of the geosciences and for facilitating communication about content. This work was funded in part by grants from NSF and the U.S. Department of Education.

  9. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language

    Science.gov (United States)

    Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.

    2014-01-01

    To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H215O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. 
We conclude that surface-level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.

  10. Towards a Sign Language Synthesizer: a Bridge to Communication Gap of the Hearing/Speech Impaired Community

    Science.gov (United States)

    Maarif, H. A.; Akmeliawati, R.; Gunawan, T. S.; Shafie, A. A.

    2013-12-01

A sign language synthesizer is a method to visualize sign language movement from the spoken language. Sign language (SL) is one of the means used by hearing/speech-impaired (HSI) people to communicate with hearing people, but unfortunately the number of people, including HSI people, who are familiar with sign language is very limited. This causes difficulties in communication between hearing and HSI people. Sign language consists not only of hand movement but also of facial expression; the two elements complement each other: the hand movement conveys the meaning of each sign, and the facial expression conveys the emotion of the signer. Generally, a sign language synthesizer recognizes the spoken language using speech recognition, performs grammatical processing with a context-free grammar, and renders the result with a 3D synthesizer driven by a recorded avatar. This paper analyzes and compares existing techniques for developing a sign language synthesizer, leading to the IIUM Sign Language Synthesizer.
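The three-stage pipeline described (speech recognition, grammatical processing, avatar rendering) can be sketched in miniature. The fragment below covers only the middle stage, with an invented gloss dictionary and a single toy SVO→SOV reordering rule standing in for the context-free grammar; it is illustrative only, not the IIUM synthesizer.

```python
# Toy sketch of the grammar-transfer and gloss-lookup stages of a
# text-to-sign pipeline. Speech recognition and avatar rendering are
# out of scope here; the dictionary and reordering rule are invented.

GLOSSES = {"i": "ME", "you": "YOU", "book": "BOOK", "read": "READ"}
DROPPED = {"a", "an", "the", "am", "is", "are"}  # words with no separate sign

def to_glosses(sentence: str) -> list[str]:
    """Map an English sentence to a sequence of sign glosses."""
    words = [w for w in sentence.lower().split() if w not in DROPPED]
    return [GLOSSES.get(w, w.upper()) for w in words]  # fingerspell unknowns

def reorder(glosses: list[str]) -> list[str]:
    """Toy SVO -> SOV transfer rule; a real system would apply a full
    context-free grammar rather than this single pattern."""
    if len(glosses) == 3:
        s, v, o = glosses
        return [s, o, v]
    return glosses

print(reorder(to_glosses("I read the book")))  # ['ME', 'BOOK', 'READ']
```

The gloss sequence would then drive the recorded-avatar playback stage.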

  11. Extricating Manual and Non-Manual Features for Subunit Level Medical Sign Modelling in Automatic Sign Language Classification and Recognition.

    Science.gov (United States)

    R, Elakkiya; K, Selvamani

    2017-09-22

Subunit segmentation and modelling in medical sign language is one of the important problems in linguistically oriented and vision-based Sign Language Recognition (SLR). Many past efforts derived functional subunits from linguistic syllables, but such syllable-based subunit extraction is not feasible with real-world computer vision techniques. In addition, present recognition systems are designed to detect signer-dependent actions under restricted laboratory conditions. This paper aims at solving these two important issues: (1) subunit extraction and (2) signer-independent action in visual sign language recognition. Subunit extraction involves the sequential and parallel breakdown of sign gestures without any prior knowledge of syllables or the number of subunits. A novel Bayesian Parallel Hidden Markov Model (BPaHMM) is introduced for subunit extraction that combines the features of manual and non-manual parameters to yield better results in the classification and recognition of signs. Signer-independent operation is achieved by using a single web camera for different signer behaviour patterns and for cross-signer validation. Experimental results show that the proposed signer-independent, subunit-level modelling for sign language classification and recognition improves on other existing works.
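The parallel-channel idea behind a model like BPaHMM (scoring manual and non-manual feature streams jointly) can be illustrated with plain discrete HMMs. The sketch below is not the paper's Bayesian model: it runs the standard forward algorithm per channel and sums log-likelihoods under a channel-independence assumption; all parameters, observation alphabets, and the two sign names are invented.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (standard forward algorithm with per-step scaling)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

# Two invented sign models; each has one emission matrix per channel
# (channel 0 = hand-shape symbols, channel 1 = facial-expression symbols).
pi = np.array([1.0, 0.0])
A = np.array([[0.7, 0.3], [0.0, 1.0]])
sign_models = {
    "HELLO": [np.array([[0.9, 0.1], [0.1, 0.9]]),   # channel-0 emissions
              np.array([[0.8, 0.2], [0.2, 0.8]])],  # channel-1 emissions
    "THANKS": [np.array([[0.1, 0.9], [0.9, 0.1]]),
               np.array([[0.2, 0.8], [0.8, 0.2]])],
}

def classify(channels):
    """Sum per-channel log-likelihoods (channel independence assumption)."""
    scores = {name: sum(forward_loglik(obs, pi, A, B)
                        for obs, B in zip(channels, Bs))
              for name, Bs in sign_models.items()}
    return max(scores, key=scores.get)

print(classify([[0, 0, 1, 1], [0, 0, 1, 1]]))  # HELLO
```

A real system would learn the parameters from data and couple the channels more carefully than a plain sum.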

  12. Reproducing American Sign Language Sentences: Cognitive Scaffolding in Working Memory

    Directory of Open Access Journals (Sweden)

    Ted eSupalla

    2014-08-01

Full Text Available The American Sign Language Sentence Reproduction Test (ASL-SRT) requires the precise reproduction of a series of ASL sentences increasing in complexity and length. Error analysis of such tasks provides insight into working memory and scaffolding processes. Data were collected from three groups expected to differ in fluency: deaf children, deaf adults and hearing adults, all users of ASL. Quantitative (correct/incorrect recall) and qualitative error analyses were performed. Percent correct on the reproduction task supports its sensitivity to fluency, as test performance clearly differed across the three groups studied. A linguistic analysis of errors further documented differing strategies and biases across groups. Subjects' recall projected the affordances and constraints of deep linguistic representations to differing degrees, with subjects resorting to alternate processing strategies in the absence of linguistic knowledge. A qualitative error analysis allows us to capture generalizations about the relationship between error patterns and the cognitive scaffolding that governs the sentence reproduction process. Highly fluent signers and less-fluent signers share common chokepoints on particular words in sentences. However, they diverge in heuristic strategy. Fluent signers, when they make an error, tend to preserve semantic details while altering morpho-syntactic domains. They produce syntactically correct sentences with meaning equivalent to the to-be-reproduced one, but these are not verbatim reproductions of the original sentence. In contrast, less-fluent signers tend to use a more linear strategy, preserving lexical status and word ordering while omitting local inflections, and occasionally resorting to visuo-motoric imitation. Thus, whereas fluent signers readily use top-down scaffolding in their working memory, less fluent signers fail to do so. Implications for current models of working memory across spoken and signed modalities are

  13. Sign Language Recognition with the Kinect Sensor Based on Conditional Random Fields

    Directory of Open Access Journals (Sweden)

    Hee-Deok Yang

    2014-12-01

Full Text Available Sign language is a visual language used by deaf people. One difficulty of sign language recognition is that sign instances vary in both motion and shape in three-dimensional (3D) space. In this research, we use 3D depth information from hand motions, generated by Microsoft’s Kinect sensor, and apply a hierarchical conditional random field (CRF) to recognize hand signs from the hand motions. The proposed method uses a hierarchical CRF to detect candidate segments of signs from hand motions, and then a BoostMap embedding method to verify the hand shapes of the segmented signs. Experiments demonstrated that the proposed method could recognize signs from signed sentence data at a rate of 90.4%.
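The first stage of such a system (detecting candidate sign segments from hand motion) can be approximated far more crudely than with a hierarchical CRF, for instance by thresholding hand speed. The sketch below shows only that simplification on an invented synthetic trajectory; it is not the paper's method.

```python
import numpy as np

def candidate_segments(traj, speed_thresh=0.5, min_len=3):
    """Return (start, end) frame-index pairs where the 3D hand moves
    faster than speed_thresh for at least min_len frames.
    A crude stand-in for CRF-based candidate-segment detection."""
    speed = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    moving = speed > speed_thresh
    segments, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i
        elif not m and start is not None:
            if i - start >= min_len:
                segments.append((start, i))
            start = None
    if start is not None and len(moving) - start >= min_len:
        segments.append((start, len(moving)))
    return segments

# Synthetic trajectory: a still hand, a fast stroke, then stillness again.
traj = np.vstack([np.zeros((5, 3)),
                  np.cumsum(np.ones((6, 3)), axis=0),  # moving frames
                  np.full((5, 3), 6.0)])
print(candidate_segments(traj))  # [(4, 10)]
```

Each detected segment would then be passed to a shape-verification stage (BoostMap embedding in the paper).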

  14. The effect of sign language structure on complex word reading in Chinese deaf adolescents.

    Science.gov (United States)

    Lu, Aitao; Yu, Yanping; Niu, Jiaxin; Zhang, John X

    2015-01-01

    The present study was carried out to investigate whether sign language structure plays a role in the processing of complex words (i.e., derivational and compound words), in particular, the delay of complex word reading in deaf adolescents. Chinese deaf adolescents were found to respond faster to derivational words than to compound words for one-sign-structure words, but showed comparable performance for two-sign-structure words. For both derivational and compound words, response latencies to one-sign-structure words were shorter than to two-sign-structure words. These results provide strong evidence that the structure of sign language affects written word processing in Chinese. Additionally, differences between derivational and compound words in the one-sign-structure condition indicate that Chinese deaf adolescents acquire print morphological awareness. The results also showed that delayed word reading was found in derivational words with two signs (DW-2), compound words with one sign (CW-1), and compound words with two signs (CW-2), but not in derivational words with one sign (DW-1), with the delay being maximum in DW-2, medium in CW-2, and minimum in CW-1, suggesting that the structure of sign language has an impact on the delayed processing of Chinese written words in deaf adolescents. These results provide insight into the mechanisms about how sign language structure affects written word processing and its delayed processing relative to their hearing peers of the same age.

  15. The effect of sign language structure on complex word reading in Chinese deaf adolescents.

    Directory of Open Access Journals (Sweden)

    Aitao Lu

Full Text Available The present study was carried out to investigate whether sign language structure plays a role in the processing of complex words (i.e., derivational and compound words), in particular, the delay of complex word reading in deaf adolescents. Chinese deaf adolescents were found to respond faster to derivational words than to compound words for one-sign-structure words, but showed comparable performance for two-sign-structure words. For both derivational and compound words, response latencies to one-sign-structure words were shorter than to two-sign-structure words. These results provide strong evidence that the structure of sign language affects written word processing in Chinese. Additionally, differences between derivational and compound words in the one-sign-structure condition indicate that Chinese deaf adolescents acquire print morphological awareness. The results also showed that delayed word reading was found in derivational words with two signs (DW-2), compound words with one sign (CW-1), and compound words with two signs (CW-2), but not in derivational words with one sign (DW-1), with the delay being maximum in DW-2, medium in CW-2, and minimum in CW-1, suggesting that the structure of sign language has an impact on the delayed processing of Chinese written words in deaf adolescents. These results provide insight into the mechanisms about how sign language structure affects written word processing and its delayed processing relative to their hearing peers of the same age.

  16. COMPARATIVE ANALYSIS OF THE STRUCTURE OF THE AMERICAN AND MACEDONIAN SIGN LANGUAGE

    Directory of Open Access Journals (Sweden)

    Aleksandra KAROVSKA RISTOVSKA

    2014-09-01

Full Text Available Aleksandra Karovska Ristovska, M.A. in special education and rehabilitation sciences, defended her doctoral thesis on 9 March 2014 at the Institute of Special Education and Rehabilitation, Faculty of Philosophy, University “Ss. Cyril and Methodius” in Skopje, before a commission composed of Prof. Zora Jachova, PhD; Prof. Jasmina Kovachevikj, PhD; Prof. Ljudmil Spasov, PhD; Prof. Goran Ajdinski, PhD; and Prof. Daniela Dimitrova Radojicikj, PhD. Macedonian Sign Language is a natural language used by the Deaf community in the Republic of Macedonia. The doctoral thesis analyzed the characteristics of the Macedonian Sign Language (MSL): its phonology, morphology and syntax, and compared the Macedonian and the American Sign Language. William Stokoe was the first to research American Sign Language, beginning in the 1960s, and laid the foundation of linguistic research in sign languages. The analysis of signs in the Macedonian Sign Language was made according to Stokoe’s parameters: location, hand shape and movement. Lexicostatistics showed that MSL and ASL belong to different language families. Despite this, they share some iconic signs, whose presence can be attributed to lexical borrowing. Phonologically, in both ASL and MSL, changing one of Stokoe’s categories changes the meaning of the word. Non-manual signs, which serve as grammatical markers in sign languages, are identical in ASL and MSL, as are the production of compounds, the production of plural forms, and the inflection of verbs. The research showed that the most common word order in ASL and MSL is SVO (subject-verb-object), while the SOV and OVS orders are seldom met. Questions and negative sentences are produced identically in ASL and MSL.

  17. Health Websites: Accessibility and Usability for American Sign Language Users

    Science.gov (United States)

    Kushalnagar, Poorna; Naturale, Joan; Paludneviciene, Raylene; Smith, Scott R.; Werfel, Emily; Doolittle, Richard; Jacobs, Stephen; DeCaro, James

    2015-01-01

    To date, there have been efforts towards creating better health information access for Deaf American Sign Language (ASL) users. However, the usability of websites with access to health information in ASL has not been evaluated. Our paper focuses on the usability of four health websites that include ASL videos. We seek to obtain ASL users’ perspectives on the navigation of these ASL-accessible websites, finding the health information that they needed, and perceived ease of understanding ASL video content. ASL users (N=32) were instructed to find specific information on four ASL-accessible websites, and answered questions related to: 1) navigation to find the task, 2) website usability, and 3) ease of understanding ASL video content for each of the four websites. Participants also gave feedback on what they would like to see in an ASL health library website, including the benefit of added captioning and/or signer model to medical illustration of health videos. Participants who had lower health literacy had greater difficulty in finding information on ASL-accessible health websites. This paper also describes the participants’ preferences for an ideal ASL-accessible health website, and concludes with a discussion on the role of accessible websites in promoting health literacy in ASL users. PMID:24901350

  18. Arabic sign language recognition based on HOG descriptor

    Science.gov (United States)

    Ben Jmaa, Ahmed; Mahdi, Walid; Ben Jemaa, Yousra; Ben Hamadou, Abdelmajid

    2017-02-01

We present in this paper a new approach for Arabic sign language (ArSL) alphabet recognition using hand gesture analysis. The analysis extracts histogram of oriented gradients (HOG) features from a hand image and uses them to train an SVM model, which is then used to recognize the ArSL alphabet in real time from hand gestures captured with a Microsoft Kinect camera. Our approach involves three steps: (i) hand detection and localization using a Microsoft Kinect camera, (ii) hand segmentation and (iii) feature extraction for Arabic alphabet recognition. On each input image obtained from the depth sensor, we apply a method based on hand anatomy to segment the hand and eliminate erroneous pixels. This approach is invariant to scale, rotation and translation of the hand. Experimental results show the effectiveness of the new approach: the proposed ArSL system recognizes the ArSL alphabet with an accuracy of 90.12%.
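The HOG-plus-SVM pipeline described above can be sketched end to end. The following is a minimal stand-in, not the authors' system: it computes a simplified HOG descriptor (per-cell orientation histograms, no block normalization) in NumPy and trains a linear SVM on synthetic images whose two classes differ only in dominant edge orientation; real segmented Kinect hand images would replace the invented `fake_hand` helper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def hog_descriptor(img, cell=8, bins=9):
    """Minimal HOG: per-cell histograms of gradient orientation,
    weighted by gradient magnitude (block normalization omitted)."""
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation
    feats = []
    for i in range(0, img.shape[0], cell):
        for j in range(0, img.shape[1], cell):
            hist, _ = np.histogram(ang[i:i + cell, j:j + cell],
                                   bins=bins, range=(0, 180),
                                   weights=mag[i:i + cell, j:j + cell])
            feats.append(hist)
    f = np.concatenate(feats)
    return f / (np.linalg.norm(f) + 1e-9)

rng = np.random.default_rng(0)

def fake_hand(label):
    """Stand-in for a segmented hand image: the two 'letters' differ
    in dominant edge orientation (vertical vs horizontal stripes)."""
    img = np.zeros((64, 64))
    if label == 0:
        img[:, ::8] = 1.0
    else:
        img[::8, :] = 1.0
    return img + 0.05 * rng.standard_normal((64, 64))

X = np.array([hog_descriptor(fake_hand(c)) for c in [0, 1] * 40])
y = np.array([0, 1] * 40)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # close to 1.0 on this easy toy data
```

Because the classes are separable by orientation alone, the toy accuracy is near perfect; the 90.12% figure reported above reflects the much harder real ArSL alphabet task.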

  19. Accessibility perspectives on enabling South African sign language in the South African National Accessibility Portal

    CSIR Research Space (South Africa)

    Coetzee, L

    2009-04-01

    Full Text Available and services. One such mechanism is by embedding animated Sign Language in Web pages. This paper analyses the effectiveness and appropriateness of using this approach by embedding South African Sign Language in the South African National Accessibility Portal...

  20. The non- (existent) native signer: sign language research in a small deaf population

    NARCIS (Netherlands)

    Costello, B.; Fernández, J.; Landa, A.; Quadros, R.; Möller de Quadros,

    2008-01-01

    This paper examines the concept of a native language user and looks at the different definitions of native signer within the field of sign language research. A description of the deaf signing population in the Basque Country shows that the figure of 5-10% typically cited for deaf individuals born

  1. Lexical Properties of Slovene Sign Language: A Corpus-Based Study

    Science.gov (United States)

    Vintar, Špela

    2015-01-01

    Slovene Sign Language (SZJ) has as yet received little attention from linguists. This article presents some basic facts about SZJ, its history, current status, and a description of the Slovene Sign Language Corpus and Pilot Grammar (SIGNOR) project, which compiled and annotated a representative corpus of SZJ. Finally, selected quantitative data…

  2. Comprehending Sentences with the Body: Action Compatibility in British Sign Language?

    Science.gov (United States)

    Vinson, David; Perniss, Pamela; Fox, Neil; Vigliocco, Gabriella

    2017-01-01

    Previous studies show that reading sentences about actions leads to specific motor activity associated with actually performing those actions. We investigate how sign language input may modulate motor activation, using British Sign Language (BSL) sentences, some of which explicitly encode direction of motion, versus written English, where motion…

  3. Constructing an Online Test Framework, Using the Example of a Sign Language Receptive Skills Test

    Science.gov (United States)

    Haug, Tobias; Herman, Rosalind; Woll, Bencie

    2015-01-01

    This paper presents the features of an online test framework for a receptive skills test that has been adapted, based on a British template, into different sign languages. The online test includes features that meet the needs of the different sign language versions. Features such as usability of the test, automatic saving of scores, and score…

  4. Deaf Students' Receptive and Expressive American Sign Language Skills: Comparisons and Relations

    Science.gov (United States)

    Beal-Alvarez, Jennifer S.

    2014-01-01

    This article presents receptive and expressive American Sign Language skills of 85 students, 6 through 22 years of age at a residential school for the deaf using the American Sign Language Receptive Skills Test and the Ozcaliskan Motion Stimuli. Results are presented by ages and indicate that students' receptive skills increased with age and…

  5. The Link between Form and Meaning in British Sign Language: Effects of Iconicity for Phonological Decisions

    Science.gov (United States)

    Thompson, Robin L.; Vinson, David P.; Vigliocco, Gabriella

    2010-01-01

    Signed languages exploit the visual/gestural modality to create iconic expression across a wide range of basic conceptual structures in which the phonetic resources of the language are built up into an analogue of a mental image (Taub, 2001). Previously, we demonstrated a processing advantage when iconic properties of signs were made salient in a…

  6. Event representations constrain the structure of language: Sign language as a window into universally accessible linguistic biases.

    Science.gov (United States)

    Strickland, Brent; Geraci, Carlo; Chemla, Emmanuel; Schlenker, Philippe; Kelepir, Meltem; Pfau, Roland

    2015-05-12

    According to a theoretical tradition dating back to Aristotle, verbs can be classified into two broad categories. Telic verbs (e.g., "decide," "sell," "die") encode a logical endpoint, whereas atelic verbs (e.g., "think," "negotiate," "run") do not, and the denoted event could therefore logically continue indefinitely. Here we show that sign languages encode telicity in a seemingly universal way and moreover that even nonsigners lacking any prior experience with sign language understand these encodings. In experiments 1-5, nonsigning English speakers accurately distinguished between telic (e.g., "decide") and atelic (e.g., "think") signs from (the historically unrelated) Italian Sign Language, Sign Language of the Netherlands, and Turkish Sign Language. These results were not due to participants' inferring that the sign merely imitated the action in question. In experiment 6, we used pseudosigns to show that the presence of a salient visual boundary at the end of a gesture was sufficient to elicit telic interpretations, whereas repeated movement without salient boundaries elicited atelic interpretations. Experiments 7-10 confirmed that these visual cues were used by all of the sign languages studied here. Together, these results suggest that signers and nonsigners share universally accessible notions of telicity as well as universally accessible "mapping biases" between telicity and visual form.

  7. A Case of Specific Language Impairment in a Deaf Signer of American Sign Language.

    Science.gov (United States)

    Quinto-Pozos, David; Singleton, Jenny L; Hauser, Peter C

    2017-04-01

    This article describes the case of a deaf native signer of American Sign Language (ASL) with a specific language impairment (SLI). School records documented normal cognitive development but atypical language development. Data include school records; interviews with the child, his mother, and school professionals; ASL and English evaluations; and a comprehensive neuropsychological and psychoeducational evaluation, and they span an approximate period of 7.5 years (11;10-19;6) including scores from school records (11;10-16;5) and a 3.5-year period (15;10-19;6) during which we collected linguistic and neuropsychological data. Results revealed that this student has average intelligence, intact visual perceptual skills, visuospatial skills, and motor skills but demonstrates challenges with some memory and sequential processing tasks. Scores from ASL testing signaled language impairment and marked difficulty with fingerspelling. The student also had significant deficits in English vocabulary, spelling, reading comprehension, reading fluency, and writing. Accepted SLI diagnostic criteria exclude deaf individuals from an SLI diagnosis, but the authors propose modified criteria in this work. The results of this study have practical implications for professionals including school psychologists, speech language pathologists, and ASL specialists. The results also support the theoretical argument that SLI can be evident regardless of the modality in which it is communicated. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Towards a Transcription System of Sign Language for 3D Virtual Agents

    Science.gov (United States)

    Do Amaral, Wanessa Machado; de Martino, José Mario

Accessibility is a growing concern in computer science. Since virtual information is mostly presented visually, it may seem that access for deaf people is not an issue. However, for prelingually deaf individuals, those who have been deaf since before acquiring and formally learning a language, written information is often less accessible than information presented in signing. Further, for this community, signing is their language of choice, and reading text in a spoken language is akin to using a foreign language. Sign language uses gestures and facial expressions and is widely used by deaf communities. To enable efficient production of signed content in virtual environments, it is necessary to make written records of signs. Transcription systems have been developed to describe sign languages in written form, but these systems have limitations: since they were not originally designed with computer animation in mind, recognizing and reproducing signs in these systems is generally easy only for those who know the system deeply. The aim of this work is to develop a transcription system for providing signed content in virtual environments. To animate a virtual avatar, a transcription system requires sufficiently explicit information, such as movement speed, sign concatenation, the sequence of each hold and movement, and facial expressions, so that articulation comes close to reality. Although many important studies of sign languages have been published, the transcription problem remains a challenge. Thus, a notation to describe, store and play signed content in virtual environments offers a multidisciplinary study and research tool, which may help linguistic studies to understand the structure and grammar of sign languages.

  9. A Kinect-Based Sign Language Hand Gesture Recognition System for Hearing- and Speech-Impaired: A Pilot Study of Pakistani Sign Language.

    Science.gov (United States)

    Halim, Zahid; Abbas, Ghulam

    2015-01-01

Sign language provides hearing- and speech-impaired individuals with an interface to communicate with other members of society. Unfortunately, sign language is not understood by most people. A gadget based on image processing and pattern recognition can therefore provide a vital aid for detecting and translating sign language into a vocal language. This work presents a system for detecting and understanding sign language gestures using a custom-built software tool and then translating the gestures into a vocal language. To recognize a particular gesture, the system employs a Dynamic Time Warping (DTW) algorithm, and an off-the-shelf software tool is employed for vocal language generation. Microsoft(®) Kinect is the primary tool used to capture the user's video stream. The proposed method is capable of successfully detecting gestures stored in the dictionary with an accuracy of 91%. The proposed system also has the ability to define and add custom-made gestures. Based on an experiment in which 10 individuals with impairments used the system to communicate with 5 people with no disability, 87% agreed that the system was useful.
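The DTW matching step can be illustrated with the textbook dynamic-programming recurrence. This is a generic sketch, not the authors' implementation; real inputs would be per-frame Kinect skeleton features rather than the toy 1-D sequences invented here.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping between two 1-D feature sequences
    (e.g., a per-frame joint angle). D[i, j] holds the minimal cost of
    aligning a[:i] with b[:j]."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A gesture template, the same gesture performed twice as slowly,
# and an unrelated gesture:
template = [0, 1, 2, 3, 2, 1, 0]
slow     = [0, 0, 1, 1, 2, 2, 3, 3, 2, 2, 1, 1, 0, 0]
other    = [3, 3, 3, 3, 3, 3, 3]

print(dtw_distance(template, slow))   # 0.0 - time-warped copy matches
print(dtw_distance(template, other))  # 12.0 - different gesture
```

The dictionary lookup then amounts to returning the stored gesture whose template has the smallest DTW distance to the captured sequence.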

  10. Sign language recognition and translation: a multidisciplined approach from the field of artificial intelligence.

    Science.gov (United States)

    Parton, Becky Sue

    2006-01-01

    In recent years, research has progressed steadily in regard to the use of computers to recognize and render sign language. This paper reviews significant projects in the field beginning with finger-spelling hands such as "Ralph" (robotics), CyberGloves (virtual reality sensors to capture isolated and continuous signs), camera-based projects such as the CopyCat interactive American Sign Language game (computer vision), and sign recognition software (Hidden Markov Modeling and neural network systems). Avatars such as "Tessa" (Text and Sign Support Assistant; three-dimensional imaging) and spoken language to sign language translation systems such as Poland's project entitled "THETOS" (Text into Sign Language Automatic Translator, which operates in Polish; natural language processing) are addressed. The application of this research to education is also explored. The "ICICLE" (Interactive Computer Identification and Correction of Language Errors) project, for example, uses intelligent computer-aided instruction to build a tutorial system for deaf or hard-of-hearing children that analyzes their English writing and makes tailored lessons and recommendations. Finally, the article considers synthesized sign, which is being added to educational material and has the potential to be developed by students themselves.

  11. ERP correlates of German Sign Language processing in deaf native signers.

    Science.gov (United States)

    Hänel-Faulhaber, Barbara; Skotara, Nils; Kügow, Monique; Salden, Uta; Bottari, Davide; Röder, Brigitte

    2014-05-10

    The present study investigated the neural correlates of sign language processing of Deaf people who had learned German Sign Language (Deutsche Gebärdensprache, DGS) from their Deaf parents as their first language. Correct and incorrect signed sentences were presented sign by sign on a computer screen. At the end of each sentence the participants had to judge whether or not the sentence was an appropriate DGS sentence. Two types of violations were introduced: (1) semantically incorrect sentences containing a selectional restriction violation (implausible object); (2) morphosyntactically incorrect sentences containing a verb that was incorrectly inflected (i.e., incorrect direction of movement). Event-related brain potentials (ERPs) were recorded from 74 scalp electrodes. Semantic violations (implausible signs) elicited an N400 effect followed by a positivity. Sentences with a morphosyntactic violation (verb agreement violation) elicited a negativity followed by a broad centro-parietal positivity. ERP correlates of semantic and morphosyntactic aspects of DGS clearly differed from each other and showed a number of similarities with those observed in other signed and oral languages. These data suggest a similar functional organization of signed and oral languages despite the visual-spatial modality of sign language.

  12. Sexual health behaviors of Deaf American Sign Language (ASL) users.

    Science.gov (United States)

    Heiman, Erica; Haynes, Sharon; McKee, Michael

    2015-10-01

    Little is known about the sexual health behaviors of Deaf American Sign Language (ASL) users. We sought to characterize the self-reported sexual behaviors of Deaf individuals. Responses from 282 Deaf participants aged 18-64 from the greater Rochester, NY area who participated in the 2008 Deaf Health Survey were analyzed. These data were compared with weighted data from a general population comparison group (N = 1890). We looked at four sexual health-related outcomes: abstinence within the past year; number of sexual partners within the last year; condom use at last intercourse; and ever tested for HIV. We performed descriptive analyses, including stratification by gender, age, income, marital status, and educational level. Deaf respondents were more likely than the general population respondents to self-report two or more sexual partners in the past year (30.9% vs 10.1%) but self-reported higher condom use at last intercourse (28.0% vs 19.8%). HIV testing rates were similar between groups (47.5% vs 49.4%) but lower for certain Deaf groups: Deaf women (46.0% vs 58.1%), lower-income Deaf (44.4% vs 69.7%), and less educated Deaf (31.3% vs 57.7%) than among respondents from corresponding general population groups. Deaf respondents self-reported higher numbers of sexual partners over the past year compared to the general population. Condom use was higher among Deaf participants. HIV testing rates were similar between groups, though testing was significantly lower among lower-income, less well-educated, and female Deaf respondents. Deaf individuals have a sexual health risk profile that is distinct from that of the general population. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Sexual Health Behaviors of Deaf American Sign Language (ASL) Users

    Science.gov (United States)

    Heiman, Erica; Haynes, Sharon; McKee, Michael

    2015-01-01

    Background Little is known about the sexual health behaviors of Deaf American Sign Language (ASL) users. Objective We sought to characterize the self-reported sexual behaviors of Deaf individuals. Methods Responses from 282 Deaf participants aged 18–64 from the greater Rochester, NY area who participated in the 2008 Deaf Health Survey were analyzed. These data were compared with weighted data from a general population comparison group (N=1890). We looked at four sexual health-related outcomes: abstinence within the past year; number of sexual partners within the last year; condom use at last intercourse; and ever tested for HIV. We performed descriptive analyses, including stratification by gender, age, income, marital status, and educational level. Results Deaf respondents were more likely than the general population respondents to self-report two or more sexual partners in the past year (30.9% vs 10.1%) but self-reported higher condom use at last intercourse (28.0% vs 19.8%). HIV testing rates were similar between groups (47.5% vs 49.4%) but lower for certain Deaf groups: Deaf women (46.0% vs. 58.1%), lower-income Deaf (44.4% vs. 69.7%), and less educated Deaf (31.3% vs. 57.7%) than among respondents from corresponding general population groups. Conclusion Deaf respondents self-reported higher numbers of sexual partners over the past year compared to the general population. Condom use was higher among Deaf participants. HIV testing rates were similar between groups, though testing was significantly lower among lower-income, less well-educated, and female Deaf respondents. Deaf individuals have a sexual health risk profile that is distinct from that of the general population. PMID:26242551

  14. The Subsystem of Numerals in Catalan Sign Language: Description and Examples from a Psycholinguistic Study

    Science.gov (United States)

    Fuentes, Mariana; Tolchinsky, Liliana

    2004-01-01

    Linguistic descriptions of sign languages are important to the recognition of their linguistic status. These languages are an essential part of the cultural heritage of the communities that create and use them and vital in the education of deaf children. They are also the reference point in language acquisition studies. Ours is exploratory…

  15. How Deaf American Sign Language/English Bilingual Children Become Proficient Readers: An Emic Perspective

    Science.gov (United States)

    Mounty, Judith L.; Pucci, Concetta T.; Harmon, Kristen C.

    2014-01-01

    A primary tenet underlying American Sign Language/English bilingual education for deaf students is that early access to a visual language, developed in conjunction with language planning principles, provides a foundation for literacy in English. The goal of this study is to obtain an emic perspective on bilingual deaf readers transitioning from…

  16. Cross-Modal Recruitment of Auditory and Orofacial Areas During Sign Language in a Deaf Subject.

    Science.gov (United States)

    Martino, Juan; Velasquez, Carlos; Vázquez-Bourgon, Javier; de Lucas, Enrique Marco; Gomez, Elsa

    2017-09-01

    Modern sign languages used by deaf people are fully expressive, natural human languages that are perceived visually and produced manually. The literature contains little data concerning human brain organization in conditions of deficient sensory information such as deafness. A deaf-mute patient underwent surgery of a left temporoinsular low-grade glioma. The patient underwent awake surgery with intraoperative electrical stimulation mapping, allowing direct study of the cortical and subcortical organization of sign language. We found a similar distribution of language sites to what has been reported in mapping studies of patients with oral language, including 1) speech perception areas inducing anomias and alexias close to the auditory cortex (at the posterior portion of the superior temporal gyrus and supramarginal gyrus); 2) speech production areas inducing speech arrest (anarthria) at the ventral premotor cortex, close to the lip motor area and away from the hand motor area; and 3) subcortical stimulation-induced semantic paraphasias at the inferior fronto-occipital fasciculus at the temporal isthmus. The intraoperative setup for sign language mapping with intraoperative electrical stimulation in deaf-mute patients is similar to the setup described in patients with oral language. To elucidate the type of language errors, a sign language interpreter in close interaction with the neuropsychologist is necessary. Sign language is perceived visually and produced manually; however, this case revealed a cross-modal recruitment of auditory and orofacial motor areas. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. A Kinect based sign language recognition system using spatio-temporal features

    Science.gov (United States)

    Memiş, Abbas; Albayrak, Songül

    2013-12-01

    This paper presents a sign language recognition system that uses spatio-temporal features on RGB video images and depth maps for dynamic gestures of Turkish Sign Language. The proposed system uses motion differences and an accumulation approach for temporal gesture analysis. The motion accumulation method, an effective method for temporal-domain analysis of gestures, produces an accumulated motion image by combining the differences of successive video frames. A 2D Discrete Cosine Transform (DCT) is then applied to the accumulated motion images, transforming the temporal-domain features into the spatial domain. These processes are performed on RGB images and depth maps separately. The DCT coefficients that represent sign gestures are picked up via zigzag scanning, and feature vectors are generated. To recognize sign gestures, a K-Nearest Neighbor classifier with Manhattan distance is used. The performance of the proposed system is evaluated on a sign database containing 1002 isolated dynamic signs belonging to 111 words of Turkish Sign Language (TSL) in three different categories, on which it achieves promising success rates.
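    As a minimal, pure-Python sketch of the pipeline just described (illustrative only; the paper's image sizes, DCT implementation, and feature counts are not reproduced here), the accumulation, DCT, and zigzag steps might look like:

```python
import math

def accumulated_motion_image(frames):
    """Sum of absolute pixel differences over successive frames (2D lists)."""
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * w for _ in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for i in range(h):
            for j in range(w):
                acc[i][j] += abs(cur[i][j] - prev[i][j])
    return acc

def dct2(img):
    """Naive orthonormal 2D DCT-II; adequate for small accumulated images."""
    n, m = len(img), len(img[0])
    def c(k, size):
        return math.sqrt((1.0 if k == 0 else 2.0) / size)
    out = [[0.0] * m for _ in range(n)]
    for u in range(n):
        for v in range(m):
            s = 0.0
            for x in range(n):
                for y in range(m):
                    s += (img[x][y]
                          * math.cos(math.pi * (x + 0.5) * u / n)
                          * math.cos(math.pi * (y + 0.5) * v / m))
            out[u][v] = c(u, n) * c(v, m) * s
    return out

def zigzag(coeffs, count):
    """First `count` coefficients in JPEG-style zigzag order (low frequencies first)."""
    n, m = len(coeffs), len(coeffs[0])
    cells = [(i, j) for i in range(n) for j in range(m)]
    cells.sort(key=lambda ij: (ij[0] + ij[1],
                               ij[0] if (ij[0] + ij[1]) % 2 else ij[1]))
    return [coeffs[i][j] for i, j in cells[:count]]
```

    The zigzag scan keeps the low-frequency coefficients, which carry most of the motion energy, so a short feature vector per gesture suffices for the nearest-neighbor classifier.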

  18. Input processing at first exposure to a sign language

    NARCIS (Netherlands)

    Ortega, G.; Morgan, G.

    2015-01-01

    There is growing interest in learners' cognitive capacities to process a second language (L2) at first exposure to the target language. Evidence suggests that L2 learners are capable of processing novel words by exploiting phonological information from their first language (L1). Hearing adult

  19. Phonological memory in sign language relies on the visuomotor neural system outside the left hemisphere language network.

    Science.gov (United States)

    Kanazawa, Yuji; Nakamura, Kimihiro; Ishii, Toru; Aso, Toshihiko; Yamazaki, Hiroshi; Omori, Koichi

    2017-01-01

    Sign language is an essential medium for everyday social interaction for deaf people and plays a critical role in verbal learning. In particular, language development in these individuals should heavily rely on verbal short-term memory (STM) via sign language. Most previous studies compared neural activations during signed language processing in deaf signers with those during spoken language processing in hearing speakers. For sign language users, it thus remains unclear how visuospatial inputs are converted into the verbal STM operating in the left-hemisphere language network. Using functional magnetic resonance imaging, the present study investigated neural activation while bilinguals of spoken and signed language were engaged in a sequence memory span task. On each trial, participants viewed a nonsense syllable sequence presented either as written letters or as fingerspelling (4-7 syllables in length) and then held the syllable sequence for 12 s. Behavioral analysis revealed that participants relied on phonological memory while holding verbal information regardless of the type of input modality. At the neural level, this maintenance stage broadly activated the left-hemisphere language network, including the inferior frontal gyrus, supplementary motor area, superior temporal gyrus and inferior parietal lobule, for both letter and fingerspelling conditions. Interestingly, while most participants reported that they relied on phonological memory during maintenance, direct comparisons between letter and fingerspelling inputs revealed strikingly different patterns of neural activation during the same period. Namely, the effortful maintenance of fingerspelling inputs relative to letter inputs activated the left superior parietal lobule and dorsal premotor area, i.e., brain regions known to play a role in visuomotor analysis of hand/arm movements. These findings suggest that the dorsal visuomotor neural system subserves verbal learning via sign language by relaying gestural inputs to

  20. Neural systems supporting linguistic structure, linguistic experience, and symbolic communication in sign language and gesture.

    Science.gov (United States)

    Newman, Aaron J; Supalla, Ted; Fernandez, Nina; Newport, Elissa L; Bavelier, Daphne

    2015-09-15

    Sign languages used by deaf communities around the world possess the same structural and organizational properties as spoken languages: In particular, they are richly expressive and also tightly grammatically constrained. They therefore offer the opportunity to investigate the extent to which the neural organization for language is modality independent, as well as to identify ways in which modality influences this organization. The fact that sign languages share the visual-manual modality with a nonlinguistic symbolic communicative system (gesture) further allows us to investigate where the boundaries lie between language and symbolic communication more generally. In the present study, we had three goals: to investigate the neural processing of linguistic structure in American Sign Language (using verbs of motion classifier constructions, which may lie at the boundary between language and gesture); to determine whether we could dissociate the brain systems involved in deriving meaning from symbolic communication (including both language and gesture) from those specifically engaged by linguistically structured content (sign language); and to assess whether sign language experience influences the neural systems used for understanding nonlinguistic gesture. The results demonstrated that even sign language constructions that appear on the surface to be similar to gesture are processed within the left-lateralized frontal-temporal network used for spoken languages, supporting claims that these constructions are linguistically structured. Moreover, although nonsigners engage regions involved in human action perception to process communicative, symbolic gestures, signers instead engage parts of the language-processing network, demonstrating an influence of experience on the perception of nonlinguistic stimuli.

  1. PROPOSING A LANGUAGE EXPERIENCE AND SELF-ASSESSMENT OF PROFICIENCY QUESTIONNAIRE FOR BILINGUAL BRAZILIAN SIGN LANGUAGE/PORTUGUESE HEARING TEACHERS

    Directory of Open Access Journals (Sweden)

    Ingrid FINGER

    2014-12-01

    Full Text Available This article presents a language experience and self-assessment of proficiency questionnaire for hearing teachers who use Brazilian Sign Language and Portuguese in their teaching practice. By focusing on hearing teachers who work in Deaf education contexts, this questionnaire is presented as a tool that may complement the assessment of linguistic skills of hearing teachers. This proposal takes into account important factors in bilingualism studies such as the importance of knowing the participant’s context with respect to family, professional and social background (KAUFMANN, 2010). This work uses as models the following questionnaires: LEAP-Q (MARIAN; BLUMENFELD; KAUSHANSKAYA, 2007), SLSCO – Sign Language Skills Classroom Observation (REEVES et al., 2000) and the Language Attitude Questionnaire (KAUFMANN, 2010), taking into consideration the different kinds of exposure to Brazilian Sign Language. The questionnaire is designed for bilingual bimodal hearing teachers who work in bilingual schools for the Deaf or who work in the specialized educational department that assists deaf students.

  2. Lexical prediction via forward models: N400 evidence from German Sign Language.

    Science.gov (United States)

    Hosemann, Jana; Herrmann, Annika; Steinbach, Markus; Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias

    2013-09-01

    Models of language processing in the human brain often emphasize the prediction of upcoming input, for example in order to explain the rapidity of language understanding. However, the precise mechanisms of prediction are still poorly understood. Forward models, which draw upon the language production system to set up expectations during comprehension, provide a promising approach in this regard. Here, we present an event-related potential (ERP) study on German Sign Language (DGS) which tested the hypotheses of a forward model perspective on prediction. Sign languages involve relatively long transition phases between one sign and the next, which should be anticipated as part of a forward model-based prediction even though they are semantically empty. Native speakers of DGS watched videos of naturally signed DGS sentences which either ended with an expected or a (semantically) unexpected sign. Unexpected signs engendered a biphasic N400-late positivity pattern. Crucially, N400 onset preceded critical sign onset and was thus clearly elicited by properties of the transition phase. The comprehension system thereby clearly anticipated modality-specific information about the realization of the predicted semantic item. These results provide strong converging support for the application of forward models in language comprehension. © 2013 Elsevier Ltd. All rights reserved.

  3. Using the "Common European Framework of Reference for Languages" to Teach Sign Language to Parents of Deaf Children

    Science.gov (United States)

    Snoddon, Kristin

    2015-01-01

    No formal Canadian curriculum presently exists for teaching American Sign Language (ASL) as a second language to parents of deaf and hard of hearing children. However, this group of ASL learners is in need of more comprehensive, research-based support, given the rapid expansion in Canada of universal neonatal hearing screening and the…

  4. The Relationship between American Sign Language Vocabulary and the Development of Language-Based Reasoning Skills in Deaf Children

    Science.gov (United States)

    Henner, Jonathan

    2016-01-01

    The language-based analogical reasoning abilities of Deaf children are a controversial topic. Researchers lack agreement about whether Deaf children possess the ability to reason using language-based analogies, or whether this ability is limited by a lack of access to vocabulary, both written and signed. This dissertation examines factors that…

  5. Recognition of sign language with an inertial sensor-based data glove.

    Science.gov (United States)

    Kim, Kyung-Won; Lee, Mi-So; Soon, Bo-Ram; Ryu, Mun-Ho; Kim, Je-Nam

    2015-01-01

    Communication between people with normal hearing and those with hearing impairment is difficult. Recently, a variety of studies on sign language recognition have benefited from developments in information technology. This study presents a sign language recognition system using a data glove composed of 3-axis accelerometers, magnetometers, and gyroscopes. The data obtained by the glove are transmitted to a host application (implemented as a Windows program on a PC), where they are converted into angle data; the angle information is displayed in the host application and verified by rendering three-dimensional models on the display. An experiment was performed with five subjects, three female and two male, in which a performance set comprising the numbers one to nine was repeated five times. The system achieves a 99.26% movement detection rate and an approximately 98% recognition rate for each finger's state. The proposed system is expected to become even more portable and useful when the algorithm is applied in smartphone applications, for use in situations such as emergencies.
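    A simplified illustration of the angle-conversion idea described above (the axis convention and the 45-degree bent/straight threshold are assumptions for illustration, not values from the study):

```python
import math

def accel_to_angles(ax, ay, az):
    """Estimate pitch and roll (in degrees) from a static 3-axis
    accelerometer reading, assuming gravity is the only acceleration.
    The axis convention here is a common one and may differ from the
    glove's actual wiring."""
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

def finger_state(joint_angle_deg, bent_threshold=45.0):
    """Classify a finger segment as 'bent' or 'straight' from its joint
    angle; the 45-degree threshold is an illustrative assumption."""
    return "bent" if abs(joint_angle_deg) >= bent_threshold else "straight"
```

    In practice the gyroscope and magnetometer readings would be fused with the accelerometer (e.g., by a complementary or Kalman filter) to keep the angle estimates stable during motion.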

  6. Everyday activities and social contacts among older deaf sign language users

    DEFF Research Database (Denmark)

    Werngren-Elgström, Monica; Brandt, Ase; Iwarsson, Susanne

    2006-01-01

    The purpose of this study was to describe the everyday activities and social contacts among older deaf sign language users, and to investigate relationships between these phenomena and the health and well-being within this group. The study population comprised deaf sign language users, 65 years or older, in Sweden. Data collection was based on interviews in sign language, including open-ended questions covering everyday activities and social contacts as well as self-rated instruments measuring aspects of health and subjective well-being. The results demonstrated that the group of participants … aspects of health and subjective well-being and the frequency of social contacts with family/relatives or visiting the deaf club and meeting friends. It is concluded that the variety of activities at the deaf clubs are important for the subjective well-being of older deaf sign language users. Further…

  7. Independent transmission of sign language interpreter in DVB: assessment of image compression

    Science.gov (United States)

    Zatloukal, Petr; Bernas, Martin; Dvořák, Lukáš

    2015-02-01

    Sign language on television provides deaf viewers with information that they cannot get from the audio content. If the sign language interpreter is transmitted over an independent data stream, the aim is to ensure sufficient intelligibility and subjective image quality of the interpreter at a minimum bit rate. This work deals with ROI-based video compression of a Czech Sign Language interpreter, implemented in the x264 open source library. The results of this approach are verified in subjective tests with deaf viewers, which examine the intelligibility of sign language expressions containing minimal pairs at different levels of compression and various resolutions of the image with the interpreter, and evaluate the subjective quality of the final image for a good viewing experience.
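    The ROI idea, spending more bits on the macroblocks that contain the interpreter, can be sketched as a per-macroblock quantizer-offset map of the kind an encoder such as x264 can consume. The -6 offset and the 16-pixel macroblock size below are illustrative assumptions, not values from the paper:

```python
def qp_offset_map(width, height, roi, mb_size=16, roi_offset=-6.0):
    """Build a per-macroblock quantizer-offset map for ROI-based encoding.

    Macroblocks whose center falls inside the interpreter's bounding box
    `roi` = (x0, y0, x1, y1) get a negative QP offset (finer quantization,
    more bits); all other macroblocks get 0."""
    x0, y0, x1, y1 = roi
    mbs_x = (width + mb_size - 1) // mb_size   # macroblock columns
    mbs_y = (height + mb_size - 1) // mb_size  # macroblock rows
    offsets = []
    for my in range(mbs_y):
        row = []
        for mx in range(mbs_x):
            cx = mx * mb_size + mb_size // 2   # macroblock center
            cy = my * mb_size + mb_size // 2
            row.append(roi_offset if x0 <= cx < x1 and y0 <= cy < y1 else 0.0)
        offsets.append(row)
    return offsets
```

    Under a fixed bit budget, the negative offsets inside the ROI trade background quality for interpreter quality, which is exactly the trade-off the subjective tests evaluate.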

  8. Deaf leaders’ strategies for working with signed language interpreters: An examination across seven countries.

    NARCIS (Netherlands)

    Haug, T.; Bontempo, K.; Leeson, L.; Napier, J.; Nicodemus, B.; Van den Bogaerde, B.; Vermeerbergen, M.

    In this paper, we report interview data from 14 Deaf leaders across seven countries (Australia, Belgium, Ireland, the Netherlands, Switzerland, the United Kingdom, and the United States) regarding their perspectives on signed language interpreters. Using a semistructured survey questionnaire, seven

  9. HIV/AIDS knowledge among adolescent sign-language users in ...

    African Journals Online (AJOL)

    , particularly sign language users, in HIV-prevention programmes. Keywords: communication, disability, disability studies, hearing impairment, qualitative research, scoping study. African Journal of AIDS Research 2010, 9(3): 307–313 ...

  10. Making an Online Dictionary of New Zealand Sign Language*

    African Journals Online (AJOL)

    of a digital medium and an existing body of descriptive research on the language, ... ing lexemes and word class in a polysynthetic language, deriving usage ..... higher education, white-collar occupations, the arts, media, and political advo- cacy. ..... and Niemalä explain that if mouth patterns are treated as a formational ele-.

  11. IAEA and International Science and Technology Center sign cooperative agreement

    International Nuclear Information System (INIS)

    2008-01-01

    Full text: The IAEA and the International Science and Technology Center (ISTC) today signed an agreement that calls for an increase in cooperation between the two organizations. The memorandum of understanding seeks to amplify their collaboration in the research and development of applications and technology that could contribute to the IAEA's activities in the fields of verification and nuclear security, including training and capacity building. IAEA Safeguards Director of Technical Support Nikolay Khlebnikov and ISTC Executive Director Adriaan van der Meer signed the Agreement at IAEA headquarters in Vienna on 22 October 2008. (IAEA)

  12. Three-dimensional grammar in the brain: Dissociating the neural correlates of natural sign language and manually coded spoken language.

    Science.gov (United States)

    Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł

    2015-05-01

    In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system called SJM (system językowo-migowy) preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we compare, using fMRI, the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs compared to either SJM or PJM without CCs recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Review of Data Preprocessing Methods for Sign Language Recognition Systems based on Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Zorins Aleksejs

    2016-12-01

    Full Text Available The article presents an introductory analysis of a research topic relevant to the Latvian deaf community: the development of a Latvian Sign Language recognition system. More specifically, data preprocessing methods are discussed and several approaches are surveyed, with a focus on systems based on artificial neural networks, which are among the most successful solutions for the sign language recognition task.

  14. Music and Sign Language to Promote Infant and Toddler Communication and Enhance Parent-Child Interaction

    Science.gov (United States)

    Colwell, Cynthia; Memmott, Jenny; Meeker-Miller, Anne

    2014-01-01

    The purpose of this study was to determine the efficacy of using music and/or sign language to promote early communication in infants and toddlers (6-20 months) and to enhance parent-child interactions. Three groups used for this study were pairs of participants (care-giver(s) and child) assigned to each group: 1) Music Alone 2) Sign Language…

  15. Testing Comprehension Abilities in Users of British Sign Language Following Cva

    Science.gov (United States)

    Atkinson, J.; Marshall, J.; Woll, B.; Thacker, A.

    2005-01-01

    Recent imaging (e.g., MacSweeney et al., 2002) and lesion (Hickok, Love-Geffen, & Klima, 2002) studies suggest that sign language comprehension depends primarily on left hemisphere structures. However, this may not be true of all aspects of comprehension. For example, there is evidence that the processing of topographic space in sign may be…

  16. The influence of the visual modality on language structure and conventionalization: insights from sign language and gesture.

    Science.gov (United States)

    Perniss, Pamela; Özyürek, Asli; Morgan, Gary

    2015-01-01

    For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems. Copyright © 2015 Cognitive Science Society, Inc.

  17. Language Testing and Technology: Past and Future.

    Science.gov (United States)

    Chalhoub-Deville, Micheline

    2001-01-01

    Reflects on what has transpired in the second language (L2) testing field in relation to technology and situates developments within the larger language testing, general measurement, and educational contexts. (Author/VWL)

  18. Health care accessibility and the role of sign language interpreters

    NARCIS (Netherlands)

    van den Bogaerde, B.; de Lange, R.; Nicodemus, B.; Metzger, M.

    2014-01-01

    In healthcare, the accuracy of interpretation is the most critical component of safe and effective communication between providers and patients in medical settings characterized by language and cultural barriers. Although medical education should prepare healthcare providers for common issues they

  19. A Barking Dog That Never Bites? The British Sign Language (Scotland) Bill

    Science.gov (United States)

    De Meulder, Maartje

    2015-01-01

    This article describes and analyses the pathway to the British Sign Language (Scotland) Bill and the strategies used to reach it. Data collection has been done by means of interviews with key players, analysis of official documents, and participant observation. The article discusses the bill in relation to the Gaelic Language (Scotland) Act 2005…

  20. The Processing of Biologically Plausible and Implausible forms in American Sign Language: Evidence for Perceptual Tuning.

    Science.gov (United States)

    Almeida, Diogo; Poeppel, David; Corina, David

    The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied by a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.

  1. Role of sign language in intellectual and social development of deaf children: Review of foreign publications

    Directory of Open Access Journals (Sweden)

    Khokhlova A. Yu.

    2014-12-01

Full Text Available The article provides an overview of foreign psychological publications concerning sign language as a means of communication for deaf people. It addresses the question of sign language's impact on cognitive development, on the efficiency and positivity of interaction with parents, and on gains in academic achievement in deaf children.

  2. Assessment of Sign Language Development: The Case of Deaf Children in the Netherlands

    NARCIS (Netherlands)

    Hermans, D.; Knoors, H.E.T.; Verhoeven, L.T.W.

    2009-01-01

    In this article, we will describe the development of an assessment instrument for Sign Language of the Netherlands (SLN) for deaf children in bilingual education programs. The assessment instrument consists of nine computerized tests in which the receptive and expressive language skills of deaf

  3. Cross-Linguistic Differences in the Neural Representation of Human Language: Evidence from Users of Signed Languages

    Science.gov (United States)

    Corina, David P.; Lawyer, Laurel A.; Cates, Deborah

    2013-01-01

    Studies of deaf individuals who are users of signed languages have provided profound insight into the neural representation of human language. Case studies of deaf signers who have incurred left- and right-hemisphere damage have shown that left-hemisphere resources are a necessary component of sign language processing. These data suggest that, despite frank differences in the input and output modality of language, core left perisylvian regions universally serve linguistic function. Neuroimaging studies of deaf signers have generally provided support for this claim. However, more fine-tuned studies of linguistic processing in deaf signers are beginning to show evidence of important differences in the representation of signed and spoken languages. In this paper, we provide a critical review of this literature and present compelling evidence for language-specific cortical representations in deaf signers. These data lend support to the claim that the neural representation of language may show substantive cross-linguistic differences. We discuss the theoretical implications of these findings with respect to an emerging understanding of the neurobiology of language. PMID:23293624

  4. Exploring the use of dynamic language assessment with deaf children, who use American Sign Language: Two case studies.

    Science.gov (United States)

    Mann, Wolfgang; Peña, Elizabeth D; Morgan, Gary

    2014-01-01

We describe a model for the assessment of lexical-semantic organization skills in American Sign Language (ASL) within the framework of dynamic vocabulary assessment and discuss the applicability and validity of the use of mediated learning experiences (MLE) with deaf signing children. Two elementary students (ages 7;6 and 8;4) completed a set of four vocabulary tasks and received two 30-minute mediations in ASL. Each session consisted of several scripted activities focusing on the use of categorization. Both had experienced difficulties in providing categorically related responses in one of the vocabulary tasks used previously. Results showed that the two students exhibited notable differences with regard to their learning pace, information uptake, and the effort required by the mediator. Furthermore, we observed signs of a shift in strategic behavior by the lower-performing student during the second mediation. Results suggest that the use of dynamic assessment procedures in a vocabulary context was helpful in understanding children's strategies as related to learning potential. These results are discussed in terms of deaf children's cognitive modifiability, with implications for planning instruction and for how MLE can be used with a population that uses ASL. The reader will (1) recognize the challenges in appropriate language assessment of deaf signing children; (2) recall the three areas explored to investigate whether a dynamic assessment approach is sensitive to differences in deaf signing children's language learning profiles; and (3) discuss how dynamic assessment procedures can make deaf signing children's individual language learning differences visible. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Sign Language Translation in State Administration in Germany: Barrier Free Web Accessibility

    OpenAIRE

    Lišková, Kateřina

    2014-01-01

    The aim of this thesis is to describe Web accessibility in state administration in the Federal Republic of Germany in relation to the socio-demographic group of deaf sign language users who did not have the opportunity to gain proper knowledge of a written form of the German language. The demand of the Deaf to information in an accessible form as based on legal documents is presented in relation to the theory of translation. How translating from written texts into sign language works in pract...

  6. Static sign language recognition using 1D descriptors and neural networks

    Science.gov (United States)

    Solís, José F.; Toxqui, Carina; Padilla, Alfonso; Santiago, César

    2012-10-01

A framework for static sign language recognition using descriptors that represent 2D images as 1D data and artificial neural networks is presented in this work. The 1D descriptors were computed by two methods: the first consists of a rotational correlation operator, and the second is based on contour analysis of the hand shape. One of the main problems in sign language recognition is segmentation; most papers rely on specially colored gloves or backgrounds for hand shape analysis. To avoid the use of gloves or special clothing, a thermal imaging camera was used to capture the images. The static signs were the digits 1 to 9 of American Sign Language; a multilayer perceptron reached 100% recognition with cross-validation.
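
    As an illustrative sketch of the contour-analysis idea (the function name, sampling count, and normalization choice are assumptions, not the authors' exact operator), a closed 2D hand contour can be reduced to a scale-invariant 1D centroid-distance signature:

```python
import math

def centroid_distance_signature(contour, n_samples=16):
    """Reduce a closed 2D contour (list of (x, y) points) to a 1D
    descriptor: distances from the shape centroid to points sampled
    along the contour, normalized by the largest distance so the
    descriptor is scale-invariant."""
    cx = sum(x for x, _ in contour) / len(contour)
    cy = sum(y for _, y in contour) / len(contour)
    step = len(contour) / n_samples
    dists = [math.hypot(contour[int(i * step)][0] - cx,
                        contour[int(i * step)][1] - cy)
             for i in range(n_samples)]
    peak = max(dists)
    return [d / peak for d in dists]

# A circular contour yields a flat signature of all ones.
circle = [(5 * math.cos(2 * math.pi * k / 64),
           5 * math.sin(2 * math.pi * k / 64)) for k in range(64)]
print(centroid_distance_signature(circle)[:4])
```

    A vector like this could then be fed to a classifier such as the multilayer perceptron described above.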

  7. Assessing language skills in adult key word signers with intellectual disabilities: Insights from sign linguistics.

    Science.gov (United States)

    Grove, Nicola; Woll, Bencie

    2017-03-01

    Manual signing is one of the most widely used approaches to support the communication and language skills of children and adults who have intellectual or developmental disabilities, and problems with communication in spoken language. A recent series of papers reporting findings from this population raises critical issues for professionals in the assessment of multimodal language skills of key word signers. Approaches to assessment will differ depending on whether key word signing (KWS) is viewed as discrete from, or related to, natural sign languages. Two available assessments from these different perspectives are compared. Procedures appropriate to the assessment of sign language production are recommended as a valuable addition to the clinician's toolkit. Sign and speech need to be viewed as multimodal, complementary communicative endeavours, rather than as polarities. Whilst narrative has been shown to be a fruitful context for eliciting language samples, assessments for adult users should be designed to suit the strengths, needs and values of adult signers with intellectual disabilities, using materials that are compatible with their life course stage rather than those designed for young children. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. On language acquisition in speech and sign:development drives combinatorial structure in both modalities

    Directory of Open Access Journals (Sweden)

    Gary eMorgan

    2014-11-01

Full Text Available Languages are composed of a conventionalized system of parts which allow speakers and signers to compose an infinite number of form-meaning mappings through phonological and morphological combinations. This level of linguistic organization distinguishes language from other communicative acts such as gestures. In contrast to signs, gestures are made up of meaning units that are mostly holistic. Children exposed to signed and spoken languages from early in life develop grammatical structure following similar rates and patterns. This is interesting, because signed languages are perceived and articulated in very different ways from their spoken counterparts, with many signs displaying surface resemblances to gestures. The acquisition of forms and meanings in child signers and talkers might thus have been expected to be a different process. Yet in one sense both groups face a similar problem: how do I make a language with combinatorial structure? In this paper I argue that first language development itself enables this to happen, and through broadly similar mechanisms across modalities. Combinatorial structure is the outcome of phonological simplifications and of productivity in the use of verb morphology by children in both sign and speech.

  9. V2S: Voice to Sign Language Translation System for Malaysian Deaf People

    Science.gov (United States)

    Mean Foong, Oi; Low, Tang Jung; La, Wai Wan

    The process of learning and understand the sign language may be cumbersome to some, and therefore, this paper proposes a solution to this problem by providing a voice (English Language) to sign language translation system using Speech and Image processing technique. Speech processing which includes Speech Recognition is the study of recognizing the words being spoken, regardless of whom the speaker is. This project uses template-based recognition as the main approach in which the V2S system first needs to be trained with speech pattern based on some generic spectral parameter set. These spectral parameter set will then be stored as template in a database. The system will perform the recognition process through matching the parameter set of the input speech with the stored templates to finally display the sign language in video format. Empirical results show that the system has 80.3% recognition rate.
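
    The template-matching step can be sketched as a nearest-template search over stored spectral parameter sets. This is a minimal illustration with invented labels and frame vectors; it also assumes equal-length utterances, whereas a real system would use dynamic time warping to handle varying speech rate:

```python
import math

def match_template(frames, templates):
    """Return the label of the stored template whose frame-by-frame
    Euclidean distance to the input utterance is smallest.

    frames:    list of spectral parameter vectors for the input speech
    templates: dict mapping a word label to its stored frame list
    """
    best_label, best_cost = None, math.inf
    for label, ref in templates.items():
        cost = sum(math.dist(f, r) for f, r in zip(frames, ref))
        if cost < best_cost:
            best_label, best_cost = label, cost
    return best_label

# Toy database of two "trained" words (invented values).
templates = {"hello": [[0.0, 0.0], [1.0, 1.0]],
             "bye":   [[5.0, 5.0], [6.0, 6.0]]}
print(match_template([[0.1, 0.0], [1.0, 1.1]], templates))
```

    The recognized label would then index into the video database of signs for playback.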

  10. A Particle of Indefiniteness in American Sign Language

    Directory of Open Access Journals (Sweden)

    Carol Neidle

    2003-01-01

Full Text Available We describe here the characteristics of a very frequently occurring ASL indefinite focus particle, which has not previously been recognized as such. We show here that, despite its similarity to the question sign "WHAT", the particle is distinct from that sign in terms of articulation, function, and distribution. The particle serves to express "uncertainty" in various ways, which can be formalized semantically in terms of a domain-widening effect of the same sort as that proposed for English "any" by Kadmon & Landman (1993). Its function is to widen the domain of possibilities under consideration from the typical to include the non-typical as well, along a dimension appropriate in the context.

  11. Referential shift in Nicaraguan Sign Language: a transition from lexical to spatial devices.

    Science.gov (United States)

    Kocab, Annemarie; Pyers, Jennie; Senghas, Ann

    2014-01-01

    Even the simplest narratives combine multiple strands of information, integrating different characters and their actions by expressing multiple perspectives of events. We examined the emergence of referential shift devices, which indicate changes among these perspectives, in Nicaraguan Sign Language (NSL). Sign languages, like spoken languages, mark referential shift grammatically with a shift in deictic perspective. In addition, sign languages can mark the shift with a point or a movement of the body to a specified spatial location in the three-dimensional space in front of the signer, capitalizing on the spatial affordances of the manual modality. We asked whether the use of space to mark referential shift emerges early in a new sign language by comparing the first two age cohorts of deaf signers of NSL. Eight first-cohort signers and 10 second-cohort signers watched video vignettes and described them in NSL. Narratives were coded for lexical (use of words) and spatial (use of signing space) devices. Although the cohorts did not differ significantly in the number of perspectives represented, second-cohort signers used referential shift devices to explicitly mark a shift in perspective in more of their narratives. Furthermore, while there was no significant difference between cohorts in the use of non-spatial, lexical devices, there was a difference in spatial devices, with second-cohort signers using them in significantly more of their narratives. This suggests that spatial devices have only recently increased as systematic markers of referential shift. Spatial referential shift devices may have emerged more slowly because they depend on the establishment of fundamental spatial conventions in the language. While the modality of sign languages can ultimately engender the syntactic use of three-dimensional space, we propose that a language must first develop systematic spatial distinctions before harnessing space for grammatical functions.

  12. Using the Hands to Represent Objects in Space: Gesture as a Substrate for Signed Language Acquisition.

    Science.gov (United States)

    Janke, Vikki; Marshall, Chloë R

    2017-01-01

    An ongoing issue of interest in second language research concerns what transfers from a speaker's first language to their second. For learners of a sign language, gesture is a potential substrate for transfer. Our study provides a novel test of gestural production by eliciting silent gesture from novices in a controlled environment. We focus on spatial relationships, which in sign languages are represented in a very iconic way using the hands, and which one might therefore predict to be easy for adult learners to acquire. However, a previous study by Marshall and Morgan (2015) revealed that this was only partly the case: in a task that required them to express the relative locations of objects, hearing adult learners of British Sign Language (BSL) could represent objects' locations and orientations correctly, but had difficulty selecting the correct handshapes to represent the objects themselves. If hearing adults are indeed drawing upon their gestural resources when learning sign languages, then their difficulties may have stemmed from their having in manual gesture only a limited repertoire of handshapes to draw upon, or, alternatively, from having too broad a repertoire. If the first hypothesis is correct, the challenge for learners is to extend their handshape repertoire, but if the second is correct, the challenge is instead to narrow down to the handshapes appropriate for that particular sign language. 30 sign-naïve hearing adults were tested on Marshall and Morgan's task. All used some handshapes that were different from those used by native BSL signers and learners, and the set of handshapes used by the group as a whole was larger than that employed by native signers and learners. Our findings suggest that a key challenge when learning to express locative relations might be reducing from a very large set of gestural resources, rather than supplementing a restricted one, in order to converge on the conventionalized classifier system that forms part of the

  13. The Importance of Early Sign Language Acquisition for Deaf Readers

    Science.gov (United States)

    Clark, M. Diane; Hauser, Peter C.; Miller, Paul; Kargin, Tevhide; Rathmann, Christian; Guldenoglu, Birkan; Kubus, Okan; Spurgeon, Erin; Israel, Erica

    2016-01-01

    Researchers have used various theories to explain deaf individuals' reading skills, including the dual route reading theory, the orthographic depth theory, and the early language access theory. This study tested 4 groups of children--hearing with dyslexia, hearing without dyslexia, deaf early signers, and deaf late signers (N = 857)--from 4…

  14. Real-time lexical comprehension in young children learning American Sign Language.

    Science.gov (United States)

    MacDonald, Kyle; LaMarr, Todd; Corina, David; Marchman, Virginia A; Fernald, Anne

    2018-04-16

    When children interpret spoken language in real time, linguistic information drives rapid shifts in visual attention to objects in the visual world. This language-vision interaction can provide insights into children's developing efficiency in language comprehension. But how does language influence visual attention when the linguistic signal and the visual world are both processed via the visual channel? Here, we measured eye movements during real-time comprehension of a visual-manual language, American Sign Language (ASL), by 29 native ASL-learning children (16-53 mos, 16 deaf, 13 hearing) and 16 fluent deaf adult signers. All signers showed evidence of rapid, incremental language comprehension, tending to initiate an eye movement before sign offset. Deaf and hearing ASL-learners showed similar gaze patterns, suggesting that the in-the-moment dynamics of eye movements during ASL processing are shaped by the constraints of processing a visual language in real time and not by differential access to auditory information in day-to-day life. Finally, variation in children's ASL processing was positively correlated with age and vocabulary size. Thus, despite competition for attention within a single modality, the timing and accuracy of visual fixations during ASL comprehension reflect information processing skills that are important for language acquisition regardless of language modality. © 2018 John Wiley & Sons Ltd.

  15. American Sign Language Syntax and Analogical Reasoning Skills Are Influenced by Early Acquisition and Age of Entry to Signing Schools for the Deaf.

    Science.gov (United States)

    Henner, Jon; Caldwell-Harris, Catherine L; Novogrodsky, Rama; Hoffmeister, Robert

    2016-01-01

Failing to acquire language in early childhood because of language deprivation is a rare and exceptional event, except in one population. Deaf children who grow up without access to indirect language through listening, speech-reading, or sign language experience language deprivation. Studies of Deaf adults have revealed that late acquisition of sign language is associated with lasting deficits. However, much remains unknown about language deprivation in Deaf children, allowing myths and misunderstandings regarding sign language to flourish. To fill this gap, we examined signing ability in a large naturalistic sample of Deaf children attending schools for the Deaf where American Sign Language (ASL) is used by peers and teachers. Ability in ASL was measured using a syntactic judgment test and a language-based analogical reasoning test, which are two sub-tests of the ASL Assessment Inventory. The influence of two age-related variables was examined: whether or not ASL was acquired from birth in the home from one or more Deaf parents, and the age of entry to the school for the Deaf. Note that for non-native signers, this latter variable is often the age of first systematic exposure to ASL. Both of these types of age-dependent language experiences influenced subsequent signing ability. Scores on the two tasks declined with increasing age of school entry. The influence of age of starting school was not linear. Test scores were generally lower for Deaf children who entered the school of assessment after the age of 12. The positive influence of signing from birth was found for students at all ages tested (7;6-18;5 years old) and for children of all age-of-entry groupings. Our results reflect a continuum of outcomes showing that experience with language is a continuous variable that is sensitive to maturational age.

  16. Technology and English Language Teaching (ELT)

    Science.gov (United States)

    Kazzemi, Akram; Narafshan, Mehry Haddad

    2014-01-01

    This paper is a try to investigate the attitudes of English language university teachers in Kerman (Iran) toward computer technology and find the hidden factors that make university teachers avoid using technology in English language teaching. 30 university teachers participated in this study. A questionnaire and semi-structured interview were…

  17. Media, Information Technology, and Language Planning: What Can Endangered Language Communities Learn from Created Language Communities?

    Science.gov (United States)

    Schreyer, Christine

    2011-01-01

    The languages of Klingon and Na'vi, both created for media, are also languages that have garnered much media attention throughout the course of their existence. Speakers of these languages also utilize social media and information technologies, specifically websites, in order to learn the languages and then put them into practice. While teaching a…

  18. What You Don't Know Can Hurt You: The Risk of Language Deprivation by Impairing Sign Language Development in Deaf Children.

    Science.gov (United States)

    Hall, Wyatte C

    2017-05-01

A long-standing belief is that sign language interferes with spoken language development in deaf children, despite a chronic lack of evidence supporting this belief. This deserves discussion, as poor life outcomes continue to be seen in the deaf population. This commentary synthesizes research outcomes with signing and non-signing children and highlights fully accessible language as a protective factor for healthy development. Brain changes associated with language deprivation may be misrepresented as sign language interfering with the spoken language outcomes of cochlear implants. This may lead to professionals and organizations advocating for preventing sign language exposure before implantation and spreading misinformation. The existence of a time-sensitive language acquisition window means a strong possibility of permanent brain changes when spoken language is not fully accessible to the deaf child and sign language exposure is delayed, as is often standard practice. There is no empirical evidence for the harm of sign language exposure, but there is some evidence for its benefits, and there is growing evidence that lack of language access has negative implications. These include cognitive delays, mental health difficulties, lower quality of life, higher trauma, and limited health literacy. Claims that cochlear implant- and spoken language-only approaches are more effective than sign language-inclusive approaches are not empirically supported. Cochlear implants are an unreliable standalone first-language intervention for deaf children. Priorities of deaf child development should focus on healthy growth of all developmental domains through a fully accessible first-language foundation such as sign language, rather than on auditory deprivation and speech skills.

  19. BILINGUAL MULTIMODAL SYSTEM FOR TEXT-TO-AUDIOVISUAL SPEECH AND SIGN LANGUAGE SYNTHESIS

    Directory of Open Access Journals (Sweden)

    A. A. Karpov

    2014-09-01

Full Text Available We present the conceptual model, architecture and software of a multimodal system for audio-visual speech and sign language synthesis from input text. The main components of the developed multimodal synthesis system (signing avatar) are: an automatic text processor for input text analysis; a simulation 3D model of the human head; a computer text-to-speech synthesizer; a system for audio-visual speech synthesis; a simulation 3D model of the human hands and upper body; and a multimodal user interface integrating all the components for the generation of audio, visual and signed speech. The proposed system performs automatic translation of input textual information into speech (audio information) and gestures (video information), fuses the information, and outputs it in the form of multimedia information. A user can input any grammatically correct text in Russian or Czech to the system; it is analyzed by the text processor to detect sentences, words and characters. This textual information is then converted into symbols of the sign language notation. We apply the international «Hamburg Notation System» (HamNoSys), which describes the main differential features of each manual sign: hand shape, hand orientation, place and type of movement. On this basis the 3D signing avatar displays the elements of the sign language. The virtual 3D model of the human head and upper body has been created using the VRML virtual reality modeling language, and it is controlled by software based on the OpenGL graphics library. The developed multimodal synthesis system is universal in that it is oriented toward both regular users and disabled people (in particular, the hard of hearing and visually impaired), and it serves for multimedia output (through audio and visual modalities) of input textual information.
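
    The text-to-notation stage of such a pipeline can be sketched as a lexicon lookup with a fingerspelling fallback for out-of-vocabulary words. The feature values below are invented placeholders, not real HamNoSys codes:

```python
# All notation entries here are invented placeholders for illustration,
# not real HamNoSys symbols.
SIGN_LEXICON = {
    "hello": {"handshape": "flat", "orientation": "palm-out",
              "location": "forehead", "movement": "arc-away"},
}

def text_to_sign_notation(text, lexicon=SIGN_LEXICON):
    """Map each word of the input text to a sign-notation record;
    words missing from the lexicon fall back to letter-by-letter
    fingerspelling, mirroring how signing avatars handle unknown words."""
    output = []
    for word in text.lower().split():
        if word in lexicon:
            output.append(("sign", word, lexicon[word]))
        else:
            output.extend(("fingerspell", ch, None) for ch in word)
    return output

print(text_to_sign_notation("Hello ok"))
```

    The resulting notation sequence is what the avatar's animation layer would consume to drive the 3D hand and body models.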

  20. Graph theoretical analysis of functional network for comprehension of sign language.

    Science.gov (United States)

    Liu, Lanfang; Yan, Xin; Liu, Jin; Xia, Mingrui; Lu, Chunming; Emmorey, Karen; Chu, Mingyuan; Ding, Guosheng

    2017-09-15

Signed languages are natural human languages using the visual-motor modality. Previous neuroimaging studies based on univariate activation analysis show that a widely overlapping cortical network is recruited regardless of whether the sign language is comprehended (for signers) or not (for non-signers). Here we move beyond previous studies by examining whether the functional connectivity profiles and the underlying organizational structure of the overlapping neural network differ between signers and non-signers when watching sign language. Using graph theoretical analysis (GTA) and fMRI, we compared the large-scale functional network organization in hearing signers with non-signers during the observation of sentences in Chinese Sign Language. We found that signed sentences elicited highly similar cortical activations in the two groups of participants, with slightly larger responses within the left frontal and left temporal gyrus in signers than in non-signers. Crucially, further GTA revealed substantial group differences in the topologies of this activation network. Globally, the network engaged by signers showed higher local efficiency (t(24) = 2.379, p = 0.026), small-worldness (t(24) = 2.604, p = 0.016) and modularity (t(24) = 3.513, p = 0.002), and exhibited different modular structures, compared to the network engaged by non-signers. Locally, the left ventral pars opercularis served as a network hub in the signer group but not in the non-signer group. These findings suggest that, despite overlap in cortical activation, the neural substrates underlying sign language comprehension are distinguishable at the network level from those for the processing of gestural action. Copyright © 2017 Elsevier B.V. All rights reserved.
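
    The local-efficiency metric reported in this analysis can be computed directly from an adjacency list. The sketch below is a minimal pure-Python version for unweighted graphs; real connectivity studies typically use a dedicated toolbox (e.g. NetworkX) and weighted networks:

```python
from collections import deque

def bfs_dists(adj, src):
    """Unweighted shortest-path lengths from src via breadth-first search."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def global_efficiency(adj):
    """Average of 1/d(i, j) over all ordered node pairs (0 if unreachable)."""
    nodes = list(adj)
    n = len(nodes)
    if n < 2:
        return 0.0
    total = 0.0
    for u in nodes:
        d = bfs_dists(adj, u)
        total += sum(1.0 / d[v] for v in nodes if v != u and v in d)
    return total / (n * (n - 1))

def local_efficiency(adj):
    """Mean, over nodes, of the global efficiency of the subgraph
    induced by each node's neighbours."""
    vals = []
    for u, nbrs in adj.items():
        sub = {v: [w for w in adj[v] if w in nbrs] for v in nbrs}
        vals.append(global_efficiency(sub))
    return sum(vals) / len(adj)

# A triangle is maximally efficient both globally and locally.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(global_efficiency(triangle), local_efficiency(triangle))
```

    Group differences such as those reported above would come from computing these metrics per participant and comparing the distributions with a t-test.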

  1. Vocabulary Instruction through Books Read in American Sign Language for English-Language Learners with Hearing Loss

    Science.gov (United States)

    Cannon, Joanna E.; Fredrick, Laura D.; Easterbrooks, Susan R.

    2010-01-01

    Reading to children improves vocabulary acquisition through incidental exposure, and it is a best practice for parents and teachers of children who can hear. Children who are deaf or hard of hearing are at risk for not learning vocabulary as such. This article describes a procedure for using books read on DVD in American Sign Language with…

  2. Visual Iconicity Across Sign Languages: Large-Scale Automated Video Analysis of Iconic Articulators and Locations

    Science.gov (United States)

    Östling, Robert; Börstell, Carl; Courtaux, Servane

    2018-01-01

We use automatic processing of 120,000 sign videos in 31 different sign languages to show a cross-linguistic pattern for two types of iconic form–meaning relationships in the visual modality. First, we demonstrate that the degree of inherent plurality of concepts, based on individual ratings by non-signers, strongly correlates with the number of hands used in the sign forms encoding the same concepts across sign languages. Second, we show that certain concepts are iconically articulated around specific parts of the body, as predicted by the associational intuitions of non-signers. The implications of our results are both theoretical and methodological. With regard to theoretical implications, we corroborate previous research by demonstrating and quantifying, using a much larger body of material than previously available, the iconic nature of languages in the visual modality. As for the methodological implications, we show how automatic methods are, in fact, useful for performing large-scale analysis of sign language data, to a high level of accuracy, as indicated by our manual error analysis.
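
    The reported correlation between plurality ratings and number of hands can be illustrated with a plain Pearson correlation coefficient; the data below are toy values, not the study's measurements:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-concept plurality ratings vs. mean number of hands used.
plurality_ratings = [1.2, 2.0, 3.1, 4.5, 4.9]
mean_hands_used = [1.0, 1.1, 1.4, 1.8, 1.9]
print(pearson_r(plurality_ratings, mean_hands_used))
```

    In the study itself, the hand counts come from automatic video analysis aggregated per concept across the 31 languages.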

  3. Deaf children attending different school environments: sign language abilities and theory of mind.

    Science.gov (United States)

    Tomasuolo, Elena; Valeri, Giovanni; Di Renzo, Alessio; Pasqualetti, Patrizio; Volterra, Virginia

    2013-01-01

    The present study examined whether full access to sign language as a medium for instruction could influence performance in Theory of Mind (ToM) tasks. Three groups of Italian participants (age range: 6-14 years) participated in the study: Two groups of deaf signing children and one group of hearing-speaking children. The two groups of deaf children differed only in their school environment: One group attended a school with a teaching assistant (TA; Sign Language is offered only by the TA to a single deaf child), and the other group attended a bilingual program (Italian Sign Language and Italian). Linguistic abilities and understanding of false belief were assessed using similar materials and procedures in spoken Italian with hearing children and in Italian Sign Language with deaf children. Deaf children attending the bilingual school performed significantly better than deaf children attending school with the TA in tasks assessing lexical comprehension and ToM, whereas the performance of hearing children was in between that of the two deaf groups. As for lexical production, deaf children attending the bilingual school performed significantly better than the two other groups. No significant differences were found between early and late signers or between children with deaf and hearing parents.

  4. Sign Language Recognition System using Neural Network for Digital Hardware Implementation

    International Nuclear Information System (INIS)

    Vargas, Lorena P; Barba, Leiner; Torres, C O; Mattos, L

    2011-01-01

This work presents an image pattern recognition system using a neural network for the identification of sign language for deaf people. The system stores several images showing the specific symbols of this language, which are used to train a multilayer neural network with the backpropagation algorithm. Initially, the images are processed to adapt them and to improve the discrimination performance of the network; this processing includes filtering, size reduction, and noise elimination algorithms, as well as edge detection. The system is evaluated using signs whose representation does not include movement.
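
    The edge-detection stage of such a preprocessing pipeline can be sketched with a crude gradient-threshold detector (a stand-in only; the paper does not specify which edge detector was used):

```python
def edge_map(img, threshold):
    """Mark pixels whose horizontal or vertical intensity gradient
    exceeds the threshold. img is a 2D list of grayscale values;
    returns a same-sized binary map (last row/column left at 0)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = abs(img[y][x + 1] - img[y][x])  # horizontal gradient
            gy = abs(img[y + 1][x] - img[y][x])  # vertical gradient
            if max(gx, gy) > threshold:
                out[y][x] = 1
    return out

# A vertical step in intensity produces a vertical line of edge pixels.
img = [[0, 0, 255, 255] for _ in range(4)]
print(edge_map(img, threshold=50))
```

    The resulting binary map (or features derived from it) would then be fed to the multilayer network described above.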

  5. Translation and interpretation of sign language in the postgraduate context: problematizing positions

    Directory of Open Access Journals (Sweden)

    Luiz Daniel Rodrigues Dinarte

    2015-12-01

Full Text Available This article draws on sign language translation research and, at the same time, engages with contemporary theories built around the concept of "deconstruction" (DERRIDA, 2004; DERRIDA & ROUDINESCO, 2004; ARROJO, 1993) to reflect on aspects of defining the role and duties of translators and interpreters. We conceive of deconstruction not as a method to be applied to linguistic and social phenomena, but as a set of political strategies arising from a speech community that translates texts and thereby, in performing the translational task, carries out an act of reading that inserts sign language into academic linguistic multiplicity.

  6. Dissociating linguistic and non-linguistic gesture processing: electrophysiological evidence from American Sign Language.

    Science.gov (United States)

    Grosvald, Michael; Gutierrez, Eva; Hafer, Sarah; Corina, David

    2012-04-01

    A fundamental advance in our understanding of human language would come from a detailed account of how non-linguistic and linguistic manual actions are differentiated in real time by language users. To explore this issue, we targeted the N400, an ERP component known to be sensitive to semantic context. Deaf signers saw 120 American Sign Language sentences, each consisting of a "frame" (a sentence without the last word; e.g. BOY SLEEP IN HIS) followed by a "last item" belonging to one of four categories: a high-cloze-probability sign (a "semantically reasonable" completion to the sentence; e.g. BED), a low-cloze-probability sign (a real sign that is nonetheless a "semantically odd" completion to the sentence; e.g. LEMON), a pseudo-sign (a phonologically legal but non-lexical form), or a non-linguistic grooming gesture (e.g. the performer scratching her face). We found significant N400-like responses in the incongruent and pseudo-sign contexts, while the gestures elicited a large positivity. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. Psychometric properties of a sign language version of the Mini International Neuropsychiatric Interview (MINI).

    Science.gov (United States)

    Øhre, Beate; Saltnes, Hege; von Tetzchner, Stephen; Falkum, Erik

    2014-05-22

    There is a need for psychiatric assessment instruments that enable reliable diagnoses in persons with hearing loss who have sign language as their primary language. The objective of this study was to assess the validity of the Norwegian Sign Language (NSL) version of the Mini International Neuropsychiatric Interview (MINI). The MINI was translated into NSL. Forty-one signing patients consecutively referred to two specialised psychiatric units were assessed with a diagnostic interview by clinical experts and with the MINI. Inter-rater reliability was assessed with Cohen's kappa and "observed agreement". There was 65% agreement between MINI diagnoses and clinical expert diagnoses. Kappa values indicated fair to moderate agreement, and observed agreement was above 76% for all diagnoses. The MINI diagnosed more co-morbid conditions than did the clinical expert interview (mean diagnoses: 1.9 versus 1.2). Kappa values indicated moderate to substantial agreement, and "observed agreement" was above 88%. The NSL version performs similarly to other MINI versions and demonstrates adequate reliability and validity as a diagnostic instrument for assessing mental disorders in persons who have sign language as their primary and preferred language.
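Cohen's kappa, the inter-rater statistic used here, corrects observed agreement for agreement expected by chance. A minimal illustration with invented ratings (not the study's data):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n          # observed
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[lab] * c2[lab] for lab in set(c1) | set(c2)) / n ** 2  # chance
    return (p_o - p_e) / (1 - p_e)

# Invented ratings: diagnosis present (1) / absent (0) for ten patients.
rater_a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
rater_b = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]
kappa = cohens_kappa(rater_a, rater_b)   # observed 0.8, chance 0.5 -> 0.6
```

With 80% observed agreement but 50% expected by chance, kappa is 0.6 ("moderate" on common benchmarks), which is why the abstract reports kappa alongside raw "observed agreement".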

  8. Psychometric properties of a sign language version of the Mini International Neuropsychiatric Interview (MINI)

    Science.gov (United States)

    2014-01-01

    Background There is a need for psychiatric assessment instruments that enable reliable diagnoses in persons with hearing loss who have sign language as their primary language. The objective of this study was to assess the validity of the Norwegian Sign Language (NSL) version of the Mini International Neuropsychiatric Interview (MINI). Methods The MINI was translated into NSL. Forty-one signing patients consecutively referred to two specialised psychiatric units were assessed with a diagnostic interview by clinical experts and with the MINI. Inter-rater reliability was assessed with Cohen’s kappa and “observed agreement”. Results There was 65% agreement between MINI diagnoses and clinical expert diagnoses. Kappa values indicated fair to moderate agreement, and observed agreement was above 76% for all diagnoses. The MINI diagnosed more co-morbid conditions than did the clinical expert interview (mean diagnoses: 1.9 versus 1.2). Kappa values indicated moderate to substantial agreement, and “observed agreement” was above 88%. Conclusion The NSL version performs similarly to other MINI versions and demonstrates adequate reliability and validity as a diagnostic instrument for assessing mental disorders in persons who have sign language as their primary and preferred language. PMID:24886297

  9. Prior knowledge of deaf students fluent in brazilian sign languages regarding the algebraic language in high school

    Directory of Open Access Journals (Sweden)

    Silvia Teresinha Frizzarini

    2014-06-01

    Full Text Available Few studies offer deeper reflection on the study of algebra with deaf students. In order to validate and disseminate educational activities in that context, this article aims to highlight the prior knowledge that deaf students fluent in Brazilian Sign Language have of the algebraic language used in high school. The theoretical framework was Duval's theory, analysing the changes, through treatment and conversion, of different registers of semiotic representation, in particular for inequalities. The methodology was a diagnostic evaluation administered to deaf students, all fluent in Brazilian Sign Language, at a special school located in the north of Paraná State. We emphasize the need to work in both directions of conversion, across different languages, especially when the starting register is the graphical one. The conclusion reached was that algebraic representation should not be separated from other registers, because sign language must perform not only the function of communication but also the functions of objectification and treatment, which are fundamental in cognitive development.

  10. Mexican sign language recognition using normalized moments and artificial neural networks

    Science.gov (United States)

    Solís-V., J.-Francisco; Toxqui-Quitl, Carina; Martínez-Martínez, David; H.-G., Margarita

    2014-09-01

    This work presents a framework designed for Mexican Sign Language (MSL) recognition. A data set of 24 static signs from the MSL was recorded in 5 different versions, captured with a digital camera under incoherent lighting conditions. Digital image processing was used to segment the hand gestures; a uniform background was selected to avoid the need for gloved hands or special markers. Feature extraction was performed by calculating normalized geometric moments of the gray-scaled signs, and an artificial neural network then performs the recognition. Under 10-fold cross validation in Weka, the best result achieved a recognition rate of 95.83%.
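Normalized central moments, the features named above, are shape descriptors that are invariant to translation and (approximately, for discrete images) to scale. A minimal sketch, not the authors' implementation:

```python
import numpy as np

def normalized_moments(img, order=3):
    """Normalized central moments eta_pq of a grayscale image.

    Central moments (computed about the centroid) remove translation;
    dividing by m00 ** ((p + q) / 2 + 1) removes scale.
    """
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    m00 = img.sum()                                   # total "mass"
    xc = (x * img).sum() / m00                        # centroid x
    yc = (y * img).sum() / m00                        # centroid y
    eta = {}
    for p in range(order + 1):
        for q in range(order + 1 - p):
            if p + q < 2:                             # trivial orders skipped
                continue
            mu = (((x - xc) ** p) * ((y - yc) ** q) * img).sum()
            eta[(p, q)] = mu / m00 ** ((p + q) / 2 + 1)
    return eta
```

Feeding these eta values (a fixed-length feature vector per segmented hand image) to a classifier matches the pipeline the abstract describes.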

  11. Modality-specific processing precedes amodal linguistic processing during L2 sign language acquisition: A longitudinal study.

    Science.gov (United States)

    Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D

    2016-02-01

    The present study tracked activation pattern differences in response to sign language processing by late hearing second language learners of American Sign Language. Learners were scanned before the start of their language courses. They were scanned again after their first semester of instruction and their second, for a total of 10 months of instruction. The study aimed to characterize modality-specific to modality-general processing throughout the acquisition of sign language. Results indicated that before the acquisition of sign language, neural substrates related to modality-specific processing were present. After approximately 45 h of instruction, the learners transitioned into processing signs on a phonological basis (e.g., supramarginal gyrus, putamen). After one more semester of input, learners transitioned once more to a lexico-semantic processing stage (e.g., left inferior frontal gyrus) at which language control mechanisms (e.g., left caudate, cingulate gyrus) were activated. During these transitional steps right hemispheric recruitment was observed, with increasing left-lateralization, which is similar to other native signers and L2 learners of spoken language; however, specialization for sign language processing with activation in the inferior parietal lobule (i.e., angular gyrus), even for late learners, was observed. As such, the present study is the first to track L2 acquisition of sign language learners in order to characterize modality-independent and modality-specific mechanisms for bilingual language processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Variation in handshape and orientation in British Sign Language: The case of the ‘1’ hand configuration

    Science.gov (United States)

    Fenlon, Jordan; Schembri, Adam; Rentelis, Ramas; Cormier, Kearsy

    2013-01-01

    This paper investigates phonological variation in British Sign Language (BSL) signs produced with a ‘1’ hand configuration in citation form. Multivariate analyses of 2084 tokens reveal that handshape variation in these signs is constrained by linguistic factors (e.g., the preceding and following phonological environment, grammatical category, indexicality, lexical frequency). The only significant social factor was region. For the subset of signs where orientation was also investigated, only grammatical function was important (the surrounding phonological environment and social factors were not significant). The implications for an understanding of pointing signs in signed languages are discussed. PMID:23805018

  13. Making Cancer Health Text on the Internet Easier to Read for Deaf People Who Use American Sign Language.

    Science.gov (United States)

    Kushalnagar, Poorna; Smith, Scott; Hopper, Melinda; Ryan, Claire; Rinkevich, Micah; Kushalnagar, Raja

    2018-02-01

    People with relatively limited English language proficiency find the Internet's cancer and health information difficult to access and understand. The presence of unfamiliar words and complex grammar make this particularly difficult for Deaf people. Unfortunately, current technology does not support low-cost, accurate translations of online materials into American Sign Language. However, current technology is relatively more advanced in allowing text simplification, while retaining content. This research team developed a two-step approach for simplifying cancer and other health text. They then tested the approach, using a crossover design with a sample of 36 deaf and 38 hearing college students. Results indicated that hearing college students did well on both the original and simplified text versions. Deaf college students' comprehension, in contrast, significantly benefitted from the simplified text. This two-step translation process offers a strategy that may improve the accessibility of Internet information for Deaf, as well as other low-literacy individuals.

  14. The Impact of Input Quality on Early Sign Development in Native and Non-Native Language Learners

    Science.gov (United States)

    Lu, Jenny; Jones, Anna; Morgan, Gary

    2016-01-01

    There is debate about how input variation influences child language. Most deaf children are exposed to a sign language from their non-fluent hearing parents and experience a delay in exposure to accessible language. A small number of children receive language input from their deaf parents who are fluent signers. Thus it is possible to document the…

  15. Engaging the Discourse of International Language Recognition through ISO 639-3 Signed Language Change Requests

    Science.gov (United States)

    Parks, Elizabeth

    2015-01-01

    Linguistic ideologies that are left unquestioned and unexplored, especially as reflected and produced in marginalized language communities, can contribute to inequality made real in decisions about languages and the people who use them. One of the primary bodies of knowledge guiding international language policy is the International Organization…

  16. The Beneficial Role of L1 Spoken Language Skills on Initial L2 Sign Language Learning: Cognitive and Linguistic Predictors of M2L2 Acquisition

    Science.gov (United States)

    Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.

    2017-01-01

    Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…

  17. Flusser and the "?" Sign: the musicality of poetry and the limits of language

    Directory of Open Access Journals (Sweden)

    Tiago Hermano Breunig

    2016-09-01

    Full Text Available In inquiring into the sign “?”, Flusser postulates that meaning is “one of the main problems of present-day thought.” Starting from this sign, Flusser differentiates meaning from sense, which he defines as “what it means”. The problem of meaning thus converges with the problem of thought itself, since, according to Flusser, all thought proceeds from a tautology, i.e., from what “means nothing”. If the understanding of meaning implies the musical aspects of language, as with the sign “?”, then, according to Flusser, music falls “into the same abyss of tautology” as it passes beyond the limits of language. Flusser believes that the discussion of the limits of language contributes to the problem of the meaning of music, and confesses that among all the existential signs “?” is the one that best articulates the situation in which we find ourselves. It is in this sense, in this “Stimmung”, as Flusser says of the meaning of the sign “?”, that this paper reflects, from the problem of meaning, on the relationship between music and the poetry contemporary to Flusser.

  18. Application of Demand-Control Theory to Sign Language Interpreting: Implications for Stress and Interpreter Training.

    Science.gov (United States)

    Dean, Robyn K.; Pollard, Robert Q., Jr.

    2001-01-01

    This article uses the framework of demand-control theory to examine the occupation of sign language interpreting. It discusses the environmental, interpersonal, and intrapersonal demands that impinge on the interpreter's decision latitude and notes the prevalence of cumulative trauma disorders, turnover, and burnout in the interpreting profession.…

  19. Sign Language as Medium of Instruction in Botswana Primary Schools: Voices from the Field

    Science.gov (United States)

    Mpuang, Kerileng D.; Mukhopadhyay, Sourav; Malatsi, Nelly

    2015-01-01

    This descriptive phenomenological study investigates teachers' experiences of using sign language for learners who are deaf in the primary schools in Botswana. Eight in-service teachers who have had more than ten years of teaching deaf or hard of hearing (DHH) learners were purposively selected for this study. Data were collected using multiple…

  20. Areas Recruited during Action Understanding Are Not Modulated by Auditory or Sign Language Experience.

    Science.gov (United States)

    Fang, Yuxing; Chen, Quanjing; Lingnau, Angelika; Han, Zaizhu; Bi, Yanchao

    2016-01-01

    The observation of other people's actions recruits a network of areas including the inferior frontal gyrus (IFG), the inferior parietal lobule (IPL), and posterior middle temporal gyrus (pMTG). These regions have been shown to be activated through both visual and auditory inputs. Intriguingly, previous studies found no engagement of IFG and IPL for deaf participants during non-linguistic action observation, leading to the proposal that auditory experience or sign language usage might shape the functionality of these areas. To understand which variables induce plastic changes in areas recruited during the processing of other people's actions, we examined the effects of tasks (action understanding and passive viewing) and effectors (arm actions vs. leg actions), as well as sign language experience in a group of 12 congenitally deaf signers and 13 hearing participants. In Experiment 1, we found a stronger activation during an action recognition task in comparison to a low-level visual control task in IFG, IPL and pMTG in both deaf signers and hearing individuals, but no effect of auditory or sign language experience. In Experiment 2, we replicated the results of the first experiment using a passive viewing task. Together, our results provide robust evidence demonstrating that the response obtained in IFG, IPL, and pMTG during action recognition and passive viewing is not affected by auditory or sign language experience, adding further support for the supra-modal nature of these regions.

  1. The Readiness of Typical Student in Communication By Using Sign Language in Hearing Impairment Integration Programe

    Directory of Open Access Journals (Sweden)

    Mohd Hanafi Mohd Yasin

    2018-05-01

    Full Text Available This research concerns the readiness of typical (hearing) students to communicate using sign language in a Hearing Impairment Integration Programme. Sixty typical students from a Special Education Integration Programme at a secondary school in Malacca were chosen as respondents. The instrument was a questionnaire consisting of four parts: the students’ demography (Part A), knowledge (Part B), ability to communicate (Part C), and interest in communicating (Part D). The questionnaire was adapted from Asnul Dahar and Rabiah’s study ‘The Readiness of Students in Following Vocational Subjects at Jerantut District, Rural Secondary School in Pahang’. Descriptive analysis was used to analyse the data, and mean scores were used to determine the level of the respondents’ perception of each question. The findings showed a positive attitude among typical students towards sign language as a medium of communication: they were interested in communicating in sign language and were willing to attend a sign language class if offered.

  2. Creating a Digital Jamaican Sign Language Dictionary: A R2D2 Approach

    Science.gov (United States)

    MacKinnon, Gregory; Soutar, Iris

    2015-01-01

    The Jamaican Association for the Deaf, in their responsibilities to oversee education for individuals who are deaf in Jamaica, has demonstrated an urgent need for a dictionary that assists students, educators, and parents with the practical use of "Jamaican Sign Language." While paper versions of a preliminary resource have been explored…

  3. Longitudinal Receptive American Sign Language Skills across a Diverse Deaf Student Body

    Science.gov (United States)

    Beal-Alvarez, Jennifer S.

    2016-01-01

    This article presents results of a longitudinal study of receptive American Sign Language (ASL) skills for a large portion of the student body at a residential school for the deaf across four consecutive years. Scores were analyzed by age, gender, parental hearing status, years attending the residential school, and presence of a disability (i.e.,…

  4. Comparing the Picture Exchange Communication System and Sign Language Training for Children with Autism

    Science.gov (United States)

    Tincani, Matt

    2004-01-01

    This study compared the effects of Picture Exchange Communication System (PECS) and sign language training on the acquisition of mands (requests for preferred items) of students with autism. The study also examined the differential effects of each modality on students' acquisition of vocal behavior. Participants were two elementary school students…

  5. Educational Resources and Implementation of a Greek Sign Language Synthesis Architecture

    Science.gov (United States)

    Karpouzis, K.; Caridakis, G.; Fotinea, S.-E.; Efthimiou, E.

    2007-01-01

    In this paper, we present how creation and dynamic synthesis of linguistic resources of Greek Sign Language (GSL) may serve to support development and provide content to an educational multitask platform for the teaching of GSL in early elementary school classes. The presented system utilizes standard virtual character (VC) animation technologies…

  6. Kinect-based sign language recognition of static and dynamic hand movements

    Science.gov (United States)

    Dalawis, Rando C.; Olayao, Kenneth Deniel R.; Ramos, Evan Geoffrey I.; Samonte, Mary Jane C.

    2017-02-01

    This study developed a different approach to recognizing the static and dynamic hand movements of sign language, using a normalized correlation algorithm. The goal of the research was to translate fingerspelled sign language into text using MATLAB and the Microsoft Kinect. Digital input images captured by the Kinect device are matched against template samples stored in a database. This Human-Computer Interaction (HCI) prototype was developed to help people with communication disabilities express their thoughts with ease. Frame segmentation and feature extraction were used to give meaning to the captured images. Sequential and random testing was applied to both static and dynamic fingerspelling gestures. The researchers also discuss factors they encountered that caused some misclassification of signs.
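Normalized correlation template matching of the kind described can be sketched as follows. This is a generic Python illustration, not the study's MATLAB code; the template labels are invented:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size image patches.

    Both patches are mean-centered, so the score lies in [-1, 1] and is
    insensitive to brightness offsets; 1 means a perfect linear match.
    """
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def classify(frame, templates):
    """Return the database label whose template correlates best with frame."""
    return max(templates, key=lambda label: ncc(frame, templates[label]))

# Invented templates: a gradient patch ("A") versus a random patch ("B").
rng = np.random.default_rng(0)
template_a = np.outer(np.arange(8.0), np.ones(8))
template_b = rng.random((8, 8))
noisy_frame = template_a + 0.01 * rng.random((8, 8))
best = classify(noisy_frame, {"A": template_a, "B": template_b})
```

In the described system each captured Kinect frame would play the role of `noisy_frame`, matched against the fingerspelling templates stored in the database.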

  7. Where "Sign Language Studies" Has Led Us in Forty Years: Opening High School and University Education for Deaf People in Viet Nam through Sign Language Analysis, Teaching, and Interpretation

    Science.gov (United States)

    Woodward, James; Hoa, Nguyen Thi

    2012-01-01

    This paper discusses how the Nippon Foundation-funded project "Opening University Education to Deaf People in Viet Nam through Sign Language Analysis, Teaching, and Interpretation," also known as the Dong Nai Deaf Education Project, has been implemented through sign language studies from 2000 through 2012. This project has provided deaf…

  8. Response bias reveals enhanced attention to inferior visual field in signers of American Sign Language.

    Science.gov (United States)

    Dye, Matthew W G; Seymour, Jenessa L; Hauser, Peter C

    2016-04-01

    Deafness results in cross-modal plasticity, whereby visual functions are altered as a consequence of a lack of hearing. Here, we present a reanalysis of data originally reported by Dye et al. (PLoS One 4(5):e5640, 2009) with the aim of testing additional hypotheses concerning the spatial redistribution of visual attention due to deafness and the use of a visuogestural language (American Sign Language). By looking at the spatial distribution of errors made by deaf and hearing participants performing a visuospatial selective attention task, we sought to determine whether there was evidence for (1) a shift in the hemispheric lateralization of visual selective function as a result of deafness, and (2) a shift toward attending to the inferior visual field in users of a signed language. While no evidence was found for or against a shift in lateralization of visual selective attention as a result of deafness, a shift in the allocation of attention from the superior toward the inferior visual field was inferred in native signers of American Sign Language, possibly reflecting an adaptation to the perceptual demands imposed by a visuogestural language.

  9. Journal of Language, Technology & Entrepreneurship in Africa

    African Journals Online (AJOL)

    Journal of Language, Technology & Entrepreneurship in Africa, Vol 2, No 2 (2010).

  10. Journal of Language, Technology & Entrepreneurship in Africa

    African Journals Online (AJOL)

    Journal of Language, Technology & Entrepreneurship in Africa, Vol 7, No 1 (2016).

  11. Journal of Language, Technology & Entrepreneurship in Africa

    African Journals Online (AJOL)

    Journal of Language, Technology & Entrepreneurship in Africa, Vol 1, No 1 (2007).

  12. LITERARY LANGUAGE AS A SIGN. SEMIOTIC CONSIDERATIONS ON THE CROATIAN LANGUAGE IN THE CULTURAL SYSTEM

    Directory of Open Access Journals (Sweden)

    Maciej Czerwiński

    2011-01-01

    Full Text Available This article considers the question of the existence of the Croatian literary language in semiotic space, i.e. in the system of culture. In order to affirm the justification of the very term Croatian language, and thus to accept the thesis that such a language exists, the argumentation is directed towards theoretical investigation in the semiotic field. The author attempts to show that the discussions in post-Yugoslav linguistics are, conventionally speaking, not ‘ontological’ but ‘epistemological’ problems. Thus, the important question is not whether the Croatian language, or any other language such as Montenegrin, exists, but rather: what does it mean for a literary language to exist or not exist?

  13. The Effect of Sign Language Rehearsal on Deaf Subjects' Immediate and Delayed Recall of English Word Lists.

    Science.gov (United States)

    Bonvillian, John D.; And Others

    1987-01-01

    The relationship between sign language rehearsal and written free recall was examined by having deaf college students rehearse the sign language equivalents of printed English words. Studies of both immediate and delayed memory suggested that word recall increased as a function of total rehearsal frequency and frequency of appearance in rehearsal…

  14. Evidence for Website Claims about the Benefits of Teaching Sign Language to Infants and Toddlers with Normal Hearing

    Science.gov (United States)

    Nelson, Lauri H.; White, Karl R.; Grewe, Jennifer

    2012-01-01

    The development of proficient communication skills in infants and toddlers is an important component to child development. A popular trend gaining national media attention is teaching sign language to babies with normal hearing whose parents also have normal hearing. Thirty-three websites were identified that advocate sign language for hearing…

  15. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.

    Science.gov (United States)

    Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu

    2016-04-19

    Sign language recognition (SLR) can provide a helpful tool for communication between the deaf and the external world. This paper proposed a component-based, vocabulary-extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components (hand shape, axis, orientation, rotation, and trajectory), and sign classification was implemented based on the recognition of these five components. Specifically, the proposed SLR framework consisted of two major parts. The first part obtained the component-based form of sign gestures and established the code table of the target sign gesture set using data from a reference subject. The second part, designed for new users, trained component classifiers on a training set suggested by the reference subject and classified unknown gestures with a code matching method. Five subjects participated in this study, and recognition experiments with training sets of different sizes were carried out on a target gesture set consisting of 110 frequently used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third of the gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for the 110 words respectively, and the average recognition accuracy climbed to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.
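The code matching step described above can be illustrated with a small sketch: each sign is encoded as a tuple of five component labels, and an unknown gesture is assigned to the code-table entry with the most matching components. The component labels and sign names below are invented for illustration and are not the paper's actual code table:

```python
# Hypothetical code table: each sign is a 5-tuple of component labels
# (hand shape, axis, orientation, rotation, trajectory).
CODE_TABLE = {
    "THANKS": ("flat", "x", "palm-up", "none", "arc"),
    "GOOD":   ("fist", "y", "palm-in", "wrist", "line"),
}

def match(components, table):
    """Return the sign whose code agrees with the most recognized components,
    together with the number of agreeing components."""
    def score(entry):
        return sum(a == b for a, b in zip(components, entry))
    best = max(table, key=lambda sign: score(table[sign]))
    return best, score(table[best])

# Component classifiers mis-recognized one component ("wrist" vs "none"),
# but the nearest code-table entry is still recovered.
sign, hits = match(("flat", "x", "palm-up", "wrist", "arc"), CODE_TABLE)
```

Because new vocabulary only requires adding a row to the code table rather than retraining whole-sign classifiers, this decomposition is what makes the framework vocabulary-extensible.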

  16. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework

    Directory of Open Access Journals (Sweden)

    Shengjing Wei

    2016-04-01

    Full Text Available Sign language recognition (SLR) can provide a helpful tool for communication between the deaf and the external world. This paper proposed a component-based, vocabulary-extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components (hand shape, axis, orientation, rotation, and trajectory), and sign classification was implemented based on the recognition of these five components. Specifically, the proposed SLR framework consisted of two major parts. The first part obtained the component-based form of sign gestures and established the code table of the target sign gesture set using data from a reference subject. The second part, designed for new users, trained component classifiers on a training set suggested by the reference subject and classified unknown gestures with a code matching method. Five subjects participated in this study, and recognition experiments with training sets of different sizes were carried out on a target gesture set consisting of 110 frequently used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third of the gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for the 110 words respectively, and the average recognition accuracy climbed to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user’s training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.

  17. Moment Invariant Features Extraction for Hand Gesture Recognition of Sign Language based on SIBI

    Directory of Open Access Journals (Sweden)

    Angga Rahagiyanto

    2017-07-01

    Full Text Available The Myo armband has become an immersive technology for helping deaf people communicate with each other. The problem with the Myo sensor is its unstable clock rate, which produces data of different lengths for the same period, even for the same gesture. This research proposes a moment-invariant method to extract features from the Myo sensor data. The method reduces the amount of data and yields data of equal length. The research is user-dependent, in accordance with the characteristics of the Myo armband. The testing process used the alphabet A to Z in SIBI, the Indonesian Sign Language, with static and dynamic finger movements; there are 26 classes of letters with 10 variants in each class. Min-max normalization is used to guarantee the range of the data, and a K-Nearest Neighbor method classifies the dataset. Performance analysis with the leave-one-out validation method produced an accuracy of 82.31%. A more advanced classification method would be required to improve the detection performance.
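The evaluation pipeline named above (min-max normalization, K-Nearest Neighbor classification, leave-one-out validation) can be sketched generically. The feature vectors and the choice of k below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def min_max(X):
    """Scale each feature column to [0, 1]."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1)   # guard constant columns

def knn_predict(train_X, train_y, x, k=3):
    """Majority label among the k nearest training samples (Euclidean)."""
    dist = np.linalg.norm(train_X - x, axis=1)
    nearest = train_y[np.argsort(dist)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

def leave_one_out_accuracy(X, y, k=3):
    """Hold out each sample in turn, classify it from the rest."""
    hits = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        hits += knn_predict(X[mask], y[mask], X[i], k) == y[i]
    return hits / len(X)

# Invented stand-in features: two well-separated gesture classes.
rng = np.random.default_rng(1)
X = min_max(np.vstack([rng.normal(0, 0.1, (10, 4)),
                       rng.normal(5, 0.1, (10, 4))]))
y = np.array([0] * 10 + [1] * 10)
acc = leave_one_out_accuracy(X, y)
```

In the study each row would instead be a moment-invariant feature vector from one Myo recording, with 26 letter classes rather than two.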

  18. The duplication of the number of hands in Sign Language, and its semantic effects

    Directory of Open Access Journals (Sweden)

    André Nogueira Xavier

    2015-07-01

    Full Text Available According to Xavier (2006), there are signs in Brazilian Sign Language (Libras) that are typically produced with one hand, while others are made with both hands. However, recent studies document the production, with both hands, of signs which usually use only one hand, and vice versa (XAVIER, 2011; XAVIER, 2013; BARBOSA, 2013). This study discusses 27 Libras signs which are typically made with one hand and which, when articulated with both hands, present changes in their meanings. The data discussed here, though originally collected from observations of spontaneous signing by different Libras users, were elicited from two deaf participants in distinct sessions. After being presented with the two forms of the selected signs (made with one and with two hands), the participants were asked to create examples of use for each of the signs. The results showed that the duplication of the hands, at least for the same sign in some cases, may occur due to different factors (such as plurality, aspect and intensity).

  19. A Human Mirror Neuron System for Language: Perspectives from Signed Languages of the Deaf

    Science.gov (United States)

    Knapp, Heather Patterson; Corina, David P.

    2010-01-01

    Language is proposed to have developed atop the human analog of the macaque mirror neuron system for action perception and production [Arbib M.A. 2005. From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics (with commentaries and author's response). "Behavioral and Brain Sciences, 28", 105-167; Arbib…

  20. Content validation: clarity/relevance, reliability and internal consistency of enunciative signs of language acquisition.

    Science.gov (United States)

    Crestani, Anelise Henrich; Moraes, Anaelena Bragança de; Souza, Ana Paula Ramos de

    2017-08-10

    To analyze the results of the validation of enunciative signs of language acquisition built for children aged 3 to 12 months. The signs were built based on mechanisms of language acquisition in an enunciative perspective and on clinical experience with language disorders. The signs were submitted to judgments of clarity and relevance by a sample of six experts, doctors in linguistics with knowledge of psycholinguistics and the language clinic. In the validation of reliability, two judges/evaluators helped apply the instruments to videos of 20% of the total sample of mother-infant dyads, using the inter-evaluator method. The method known as internal consistency was applied to the total sample, which consisted of 94 mother-infant dyads for the contents of Phase 1 (3-6 months) and 61 mother-infant dyads for the contents of Phase 2 (7 to 12 months). The data were collected through the analysis of mother-infant interaction based on filming of the dyads and application of the parameters to be validated according to the child's age. Data were organized in a spreadsheet and then transferred to statistical software for analysis. The judgments of clarity/relevance indicated no modifications to be made to the instruments. The reliability test showed almost perfect agreement between judges (0.8 ≤ Kappa ≤ 1.0); only item 2 of Phase 1 showed substantial agreement (0.6 ≤ Kappa ≤ 0.79). The internal consistency for Phase 1 had alpha = 0.84, and for Phase 2, alpha = 0.74. This demonstrates the reliability of the instruments. The results suggest adequate content validity of the instruments created for both age groups, demonstrating the relevance of the content of enunciative signs of language acquisition.
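The two reliability statistics named in this abstract, Cohen's kappa for inter-judge agreement and Cronbach's alpha for internal consistency, can be computed as below. The ratings and item scores are invented examples, not the study's data.

```python
# Sketch of the two reliability statistics: Cohen's kappa for inter-rater
# agreement and Cronbach's alpha for internal consistency. All data below
# are made-up examples.
from collections import Counter

def cohens_kappa(r1, r2):
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n**2     # chance agreement
    return (po - pe) / (1 - pe)

def cronbach_alpha(items):
    # items: one list of scores per item, all over the same subjects
    k = len(items)
    def var(xs):                                   # sample variance (n - 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

r1 = [1, 1, 0, 1, 0, 1, 1, 0]   # judge 1's binary ratings
r2 = [1, 1, 0, 1, 1, 1, 1, 0]   # judge 2's binary ratings
print(round(cohens_kappa(r1, r2), 3))  # 0.714
```

On the conventional Landis-Koch scale used in the abstract, kappa in [0.61, 0.80] counts as substantial agreement and [0.81, 1.00] as almost perfect.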

  1. Immersive Technologies and Language Learning

    Science.gov (United States)

    Blyth, Carl

    2018-01-01

    This article briefly traces the historical conceptualization of linguistic and cultural immersion through technological applications, from the early days of locally networked computers to the cutting-edge technologies known as virtual reality and augmented reality. Next, the article explores the challenges of immersive technologies for the field…

  2. Functional and anatomical correlates of word-, sentence-, and discourse-level integration in sign language

    Directory of Open Access Journals (Sweden)

    Tomoo eInubushi

    2013-10-01

    Full Text Available In both vocal and sign languages, we can distinguish word-, sentence-, and discourse-level integration in terms of hierarchical processes, which integrate various elements into higher-level constructs. In the present study, we used functional magnetic resonance imaging and voxel-based morphometry to test three language tasks in Japanese Sign Language (JSL): word-level (Word), sentence-level (Sent), and discourse-level (Disc) decision tasks. We analyzed cortical activity and gray matter volumes of Deaf signers, and clarified three major points. First, we found that the activated regions in the frontal language areas gradually expanded along the dorso-ventral axis, corresponding to the difference in linguistic units among the three tasks. Moreover, the activations in each region of the frontal language areas were incrementally modulated with the level of linguistic integration. These dual mechanisms of the frontal language areas may reflect a basic organizational principle for hierarchically integrating linguistic information. Secondly, activations in the lateral premotor cortex and inferior frontal gyrus were left-lateralized. Direct comparisons among the language tasks exhibited more focal activation in these regions, suggesting their functional localization. Thirdly, we found significantly positive correlations between individual task performances and gray matter volumes in localized regions, even when the ages of acquisition of JSL and Japanese were factored out. More specifically, correlations with the performances of the Word and Sent tasks were found in the left precentral/postcentral gyrus and insula, respectively, while correlations with those of the Disc task were found in the left ventral inferior frontal gyrus and precuneus. The unification of functional and anatomical studies would thus be fruitful for understanding human language systems from the aspects of both universality and individuality.

  3. South African human language technologies audit

    CSIR Research Space (South Africa)

    Grover, AS

    2010-05-01

    Full Text Available Human language technologies (HLT) can play a vital role in bridging the digital divide and thus the HLT field has been recognised as a priority area by the South African government. The authors present the work on conducting a technology audit...

  4. The emergence of embedded structure: insights from Kafr Qasem Sign Language

    Science.gov (United States)

    Kastner, Itamar; Meir, Irit; Sandler, Wendy; Dachkovsky, Svetlana

    2014-01-01

    This paper introduces data from Kafr Qasem Sign Language (KQSL), an as-yet undescribed sign language, and identifies the earliest indications of embedding in this young language. Using semantic and prosodic criteria, we identify predicates that form a constituent with a noun, functionally modifying it. We analyze these structures as instances of embedded predicates, exhibiting what can be regarded as very early stages in the development of subordinate constructions, and argue that these structures may bear directly on questions about the development of embedding and subordination in language in general. Deutscher (2009) argues persuasively that nominalization of a verb is the first step—and the crucial step—toward syntactic embedding. It has also been suggested that prosodic marking may precede syntactic marking of embedding (Mithun, 2009). However, the relevant data from the stage at which embedding first emerges have not previously been available. KQSL might be the missing piece of the puzzle: a language in which a noun can be modified by an additional predicate, forming a proposition within a proposition, sustained entirely by prosodic means. PMID:24917837

  5. The "handedness" of language: Directional symmetry breaking of sign usage in words.

    Science.gov (United States)

    Ashraf, Md Izhar; Sinha, Sitabhra

    2018-01-01

    Language, which allows complex ideas to be communicated through symbolic sequences, is a characteristic feature of our species and manifested in a multitude of forms. Using large written corpora for many different languages and scripts, we show that the occurrence probability distributions of signs at the left and right ends of words have a distinct heterogeneous nature. Characterizing this asymmetry using quantitative inequality measures, viz. information entropy and the Gini index, we show that the beginning of a word is less restrictive in sign usage than the end. This property is not simply attributable to the use of common affixes as it is seen even when only word roots are considered. We use the existence of this asymmetry to infer the direction of writing in undeciphered inscriptions that agrees with the archaeological evidence. Unlike traditional investigations of phonotactic constraints which focus on language-specific patterns, our study reveals a property valid across languages and writing systems. As both language and writing are unique aspects of our species, this universal signature may reflect an innate feature of the human cognitive phenomenon.
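The asymmetry measurement described in this abstract can be illustrated on a toy word list: tally the signs (here, letters) at the beginnings and ends of words and compare the information entropy and Gini index of the two distributions. The word list is an invented example chosen to mimic the reported pattern, not the paper's corpora.

```python
# Sketch: compare entropy and Gini inequality of the distributions of
# signs (letters) at word beginnings versus word ends. Toy word list.
import math
from collections import Counter

def entropy(counts):
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def gini(counts):
    xs = sorted(counts.values())
    n, total = len(xs), sum(xs)
    # standard Gini index of a sample sorted in ascending order
    return sum((2 * i - n + 1) * x for i, x in enumerate(xs)) / (n * total)

words = ["running", "walking", "talking", "signing", "coding",
         "apple", "banana", "data", "idea", "sofa"]
first = Counter(w[0] for w in words)
last = Counter(w[-1] for w in words)

# Higher entropy / lower Gini = less restricted sign usage at that edge.
print(entropy(first), entropy(last))
print(gini(first), gini(last))
```

On this toy list, common suffixes make word endings more concentrated, so the final-sign distribution has lower entropy and a higher Gini index than the initial-sign distribution, mirroring the direction of the asymmetry the paper reports for real corpora.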

  7. SIGNS LANGUAGE IN GRAPHICAL COMMUNICATION MECHANICAL DRAWING: THE BASIC LANGUAGE IN GRAPHICAL COMMUNICATION

    Directory of Open Access Journals (Sweden)

    BADEA Florina

    2010-07-01

    Full Text Available The paper presents general notions about graphical language used in engineering, the international standards used for representing objects and also the most important software applications used in Computer Aided Design for the development of products in engineering.

  9. Real-Time Processing of ASL Signs: Delayed First Language Acquisition Affects Organization of the Mental Lexicon

    Science.gov (United States)

    Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.

    2015-01-01

    Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age-onset of first language acquisition and the quality and quantity of…

  10. 20 Years of Technology and Language Assessment in "Language Learning & Technology"

    Science.gov (United States)

    Chapelle, Carol A.; Voss, Erik

    2016-01-01

    This review article provides an analysis of the research from the last two decades on the theme of technology and second language assessment. Based on an examination of the assessment scholarship published in "Language Learning & Technology" since its launch in 1997, we analyzed the review articles, research articles, book reviews,…

  11. An Investigation into the Relationship of Foreign Language Learning Motivation and Sign Language Use among Deaf and Hard of Hearing Hungarians

    Science.gov (United States)

    Kontra, Edit H.; Csizer, Kata

    2013-01-01

    The aim of this study is to point out the relationship between foreign language learning motivation and sign language use among hearing impaired Hungarians. In the article we concentrate on two main issues: first, to what extent hearing impaired people are motivated to learn foreign languages in a European context; second, to what extent sign…

  12. A Real-time Face/Hand Tracking Method for Chinese Sign Language Recognition

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper introduces a new Chinese Sign Language recognition (CSLR) system and a real-time face and hand tracking method used in the system. In this method, an improved agent algorithm is used to extract the regions of the face and hands and to track them. A Kalman filter is introduced to predict the position and the search rectangle, and a self-adapting target-color model is designed to counteract the effects of illumination.
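The prediction step in this abstract can be sketched with a textbook constant-velocity Kalman filter: from a few detected positions of a face or hand region, the filter estimates velocity and forecasts where to centre the next search rectangle. The matrices and noise values are generic textbook choices, not taken from the paper.

```python
# Minimal constant-velocity Kalman filter: predict the next position of a
# tracked face/hand so the search rectangle can be centred on it.
import numpy as np

dt = 1.0                                   # one frame
F = np.array([[1, 0, dt, 0],               # state transition for [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1., 0, 0, 0],               # we only measure position
              [0., 1, 0, 0]])
Q = np.eye(4) * 1e-2                       # process noise
R = np.eye(2)                              # measurement noise

x = np.zeros(4)                            # initial state
P = np.eye(4) * 100.0                      # initially very uncertain

for z in [np.array([10., 10.]), np.array([12., 11.]), np.array([14., 12.])]:
    # predict
    x, P = F @ x, F @ P @ F.T + Q
    # update with the detected centre of the face/hand region
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P

pred = F @ x
print(pred)  # predicted [x, y, vx, vy] for the next frame
```

After three detections moving by (+2, +1) per frame, the predicted position continues that motion, so the tracker can restrict its search to a small rectangle around it instead of scanning the whole image.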

  13. Cognitive status, lexical learning and memory in deaf adults using sign language

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2013-05-01

    Full Text Available Background and Aim: Learning and memory are two high-level cognitive abilities in humans that are influenced by hearing loss. In our study, the Mini-Mental State Examination (MMSE) and the Rey Auditory Verbal Learning Test (RAVLT) were conducted to study cognitive status and lexical learning and memory in deaf adults using sign language. Methods: This cross-sectional comparative study was conducted on 30 available congenitally deaf adults using Persian Sign Language and 46 normal-hearing adults, aged 19 to 27 years, of both sexes, with a minimum diploma level of education. After the Mini-Mental State Examination, the Rey Auditory Verbal Learning Test was run on computers to evaluate lexical learning and memory with visual presentation. Results: Mean scores on the Mini-Mental State Examination and the Rey Auditory Verbal Learning Test in congenitally deaf adults were significantly lower than those of normal individuals in all scores (p=0.018) except in two parts of the Rey test. A significant correlation was found between the results of the two tests only in the normal group (p=0.043). Gender had no effect on test results. Conclusion: Cognitive status and lexical memory and learning in congenitally deaf individuals are weaker than in normal-hearing subjects. It seems that using sign language as the main means of communication in deaf people leads to poorer lexical memory and learning.

  14. Practical low-cost visual communication using binary images for deaf sign language.

    Science.gov (United States)

    Manoranjan, M D; Robinson, J A

    2000-03-01

    Deaf sign language transmitted by video requires a temporal resolution of 8 to 10 frames/s for effective communication. Conventional videoconferencing applications, when operated over low-bandwidth telephone lines, provide very low temporal resolution, on the order of less than one frame per second, resulting in jerky movement of objects. This paper presents a practical solution for sign language communication, offering adequate temporal resolution of images using moving binary sketches, or cartoons, implemented on standard personal computer hardware with low-cost cameras and communicating over telephone lines. To extract cartoon points, an efficient feature extraction algorithm adaptive to the global statistics of the image is proposed. To improve the subjective quality of the binary images, irreversible preprocessing techniques, such as isolated point removal and predictive filtering, are used. A simple, efficient and fast recursive temporal prefiltering scheme, using histograms of successive frames, reduces the additive and multiplicative noise from low-cost cameras. An efficient three-dimensional (3-D) compression scheme codes the binary sketches. Subjective tests performed on the system confirm that it can be used for sign language communication over telephone lines.
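The isolated-point-removal preprocessing mentioned in this abstract can be sketched as a simple neighbourhood test: a foreground pixel with no foreground neighbour in its 8-neighbourhood is treated as noise and erased. This is a generic pure-Python illustration of the idea, not the paper's implementation.

```python
# Sketch of isolated point removal on a binary sketch: erase any
# foreground pixel whose 8-neighbourhood contains no other foreground pixel.

def remove_isolated(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x]:
                neighbours = sum(
                    img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))
                    if (ny, nx) != (y, x))
                if neighbours == 0:
                    out[y][x] = 0        # lone pixel: treat as noise
    return out

sketch = [[1, 1, 0, 0],
          [1, 0, 0, 0],
          [0, 0, 0, 1],   # the lone pixel at (2, 3) is noise
          [0, 0, 0, 0]]
print(remove_isolated(sketch))
```

Removing such speckle before compression both improves the subjective quality of the cartoon and reduces the number of points the 3-D coder has to transmit.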

  15. Teaching sign language in gaucho schools for deaf people: a study of curricula

    Directory of Open Access Journals (Sweden)

    Carolina Hessel Silveira

    2013-06-01

    Full Text Available The paper, which presents partial results of a master's dissertation, seeks to contribute to the Sign Language curriculum in deaf schooling. We started from the importance of sign languages for deaf people's development and from the fact that a large proportion of deaf people have hearing parents, which underlines the significance of teaching LIBRAS (Brazilian Sign Language) in schools for the deaf. We should also consider the importance of this study in building deaf identities and strengthening deaf culture. The theoretical basis comes from the so-called Deaf Studies and from experts in curriculum theory. The main objective of this study has been to analyze the LIBRAS curricula at work in schools for the deaf in Rio Grande do Sul, Brazil. The curriculum analysis has shown a degree of diversity: in some curricula, content from one year is repeated in the next with no articulation between them. In others, one can find concern for issues of deaf identity and culture, but some include contents that are related not to LIBRAS or deaf culture but to disciplines for the deaf in general. By pointing out positive and negative aspects, the analysis may help in discussions about difficulties, progress and problems in LIBRAS teacher education for deaf students.

  16. Attention-getting skills of deaf children using American Sign Language in a preschool classroom.

    Science.gov (United States)

    Lieberman, Amy M

    2015-07-01

    Visual attention is a necessary prerequisite to successful communication in sign language. The current study investigated the development of attention-getting skills in deaf native-signing children during interactions with peers and teachers. Seven deaf children (aged 21-39 months) and five adults were videotaped during classroom activities for approximately 30 hr. Interactions were analyzed in depth to determine how children obtained and maintained attention. Contrary to previous reports, children were found to possess a high level of communicative competence from an early age. Analysis of peer interactions revealed that children used a range of behaviors to obtain attention with peers, including taps, waves, objects, and signs. Initiations were successful approximately 65% of the time. Children followed up failed initiation attempts by repeating the initiation, using a new initiation, or terminating the interaction. Older children engaged in longer and more complex interactions than younger children. Children's early exposure to and proficiency in American Sign Language is proposed as a likely mechanism that facilitated their communicative competence.

  17. Sign Language Interpreting in Theatre: Using the Human Body to Create Pictures of the Human Soul

    Directory of Open Access Journals (Sweden)

    Michael Richardson

    2017-06-01

    Full Text Available This paper explores theatrical interpreting for Deaf spectators, a specialism that both blurs the separation between translation and interpreting, and replaces these potentials with a paradigm in which the translator's body is central to the production of the target text. Meaningful written translations of dramatic texts into sign language are not currently possible. For Deaf people to access Shakespeare or Molière in their own language usually means attending a sign language interpreted performance, a typically disappointing experience that fails to provide accessibility or to fulfil the potential of a dynamically equivalent theatrical translation. I argue that when such interpreting events fail, significant contributory factors are the challenges involved in producing such a target text and the insufficient embodiment of that text. The second of these factors suggests that the existing conference and community models of interpreting are insufficient for describing theatrical interpreting. I propose that a model drawn from Theatre Studies, namely psychophysical acting, might be more effective for conceptualising theatrical interpreting. I also draw on theories from neurological research into the Mirror Neuron System to suggest that a highly visual and physical approach to performance (be that by actors or by interpreters) is more effective in building a strong actor-spectator interaction than a performance in which meaning is conveyed by spoken words. Arguably this difference in language impact between signed and spoken performance is irrelevant to hearing audiences attending spoken language plays, but I suggest that for all theatre translators the implications are significant: it is not enough to create a literary translation as the target text; it is also essential to produce a text that suggests physicality. The aim should be the creation of a text which demands full expression through the body, the best picture of the human soul and the fundamental medium

  18. The development and psychometric properties of the American sign language proficiency assessment (ASL-PA).

    Science.gov (United States)

    Maller, S; Singleton, J; Supalla, S; Wix, T

    1999-01-01

    We describe the procedures for constructing an instrument designed to evaluate children's proficiency in American Sign Language (ASL). The American Sign Language Proficiency Assessment (ASL-PA) is a much-needed tool that potentially could be used by researchers, language specialists, and qualified school personnel. A half-hour ASL sample is collected on video from a target child (between ages 6 and 12) across three separate discourse settings and is later analyzed and scored by an assessor who is highly proficient in ASL. After the child's language sample is scored, he or she can be assigned an ASL proficiency rating of Level 1, 2, or 3. At this phase in its development, substantial evidence of reliability and validity has been obtained for the ASL-PA using a sample of 80 profoundly deaf children (ages 6-12) of varying ASL skill levels. The article first explains the item development and administration of the ASL-PA instrument, then describes the empirical item analysis, standard setting procedures, and evidence of reliability and validity. The ASL-PA is a promising instrument for assessing elementary school-age children's ASL proficiency. Plans for further development are also discussed.

  19. The verbal-visual discourse in Brazilian Sign Language – Libras

    Directory of Open Access Journals (Sweden)

    Tanya Felipe

    2013-11-01

    Full Text Available This article aims to broaden the discussion on verbal-visual utterances, reflecting upon theoretical assumptions of the Bakhtin Circle that can reinforce the argument that the utterances of a language in the visual-gestural modality convey plastic-pictorial and spatial values of signs also through non-manual markers (NMMs). This research highlights the difference between affective expressions, which are paralinguistic communications that may complement an utterance, and verbal-visual grammatical markers, which are linguistic because they are part of the architecture of the phonological, morphological, syntactic-semantic and discursive levels of a particular language. These markers will be described, taking Brazilian Sign Language (Libras) as a starting point, thereby including this language in discussions of verbal-visual discourse and pointing to the need for research on this discourse also in the linguistic analysis of oral-auditory modality languages, with Translinguistics as an area of knowledge that analyzes discourse by focusing upon the verbal-visual markers used by subjects in their utterance acts.

  20. Cerebral organization of oral and signed language responses: case study evidence from amytal and cortical stimulation studies.

    Science.gov (United States)

    Mateer, C A; Rapport, R L; Kettrick, C

    1984-01-01

    A normally hearing left-handed patient familiar with American Sign Language (ASL) was assessed under sodium amytal conditions and with left cortical stimulation in both oral speech and signed English. Lateralization was mixed but complementary in each language mode: the right hemisphere perfusion severely disrupted motoric aspects of both types of language expression, the left hemisphere perfusion specifically disrupted features of grammatical and semantic usage in each mode of expression. Both semantic and syntactic aspects of oral and signed responses were altered during left posterior temporal-parietal stimulation. Findings are discussed in terms of the neurological organization of ASL and linguistic organization in cases of early left hemisphere damage.

  1. Emerging Technologies for Autonomous Language Learning

    Directory of Open Access Journals (Sweden)

    Mark Warschauer

    2011-09-01

    Full Text Available Drawing on a lengthier review completed for the US National Institute for Literacy, this paper examines emerging technologies that are applicable to self-access and autonomous learning in the areas of listening and speaking, collaborative writing, reading and language structure, and online interaction. Digital media reviewed include podcasts, blogs, wikis, online writing sites, text-scaffolding software, concordancers, multiuser virtual environments, multiplayer games, and chatbots. For each of these technologies, we summarize recent research and discuss possible uses for autonomous language learning.

  2. ANFIS Based Methodology for Sign Language Recognition and Translating to Number in Kannada Language

    Directory of Open Access Journals (Sweden)

    Ramesh Mahadev kagalkar

    2017-03-01

    Full Text Available In the field of sign language and gesture recognition, a great deal of research has been done over the past three decades. This has led to a gradual transition from isolated to continuous, and from static to dynamic, gesture recognition for operations on a restricted vocabulary. At present, human-machine interactive systems facilitate communication between deaf and hearing-impaired people in real-world situations. In order to improve recognition accuracy, many researchers have deployed methods such as HMMs, artificial neural networks, and the Kinect platform. Effective algorithms for segmentation, classification, pattern matching and recognition have evolved. The main purpose of this paper is to investigate these methods and to compare them effectively, which will enable the reader to arrive at an optimal solution. This creates both challenges and opportunities for sign language recognition research.

  3. Mapping Language to the World: The Role of Iconicity in the Sign Language Input

    Science.gov (United States)

    Perniss, Pamela; Lu, Jenny C.; Morgan, Gary; Vigliocco, Gabriella

    2018-01-01

    Most research on the mechanisms underlying referential mapping has assumed that learning occurs in ostensive contexts, where label and referent co-occur, and that form and meaning are linked by arbitrary convention alone. In the present study, we focus on "iconicity" in language, that is, resemblance relationships between form and…

  4. Students who are deaf and hard of hearing and use sign language: considerations and strategies for developing spoken language and literacy skills.

    Science.gov (United States)

    Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane

    2012-11-01

    There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.

  5. [Instruments in Brazilian Sign Language for assessing the quality of life of the deaf population].

    Science.gov (United States)

    Chaveiro, Neuma; Duarte, Soraya Bianca Reis; Freitas, Adriana Ribeiro de; Barbosa, Maria Alves; Porto, Celmo Celeno; Fleck, Marcelo Pio de Almeida

    2013-06-01

    To construct versions of the WHOQOL-BREF and WHOQOL-DIS instruments in Brazilian Sign Language to evaluate the quality of life of the Brazilian deaf population. The methodology proposed by the World Health Organization (WHOQOL-BREF and WHOQOL-DIS) was used to construct instruments adapted to the deaf community using Brazilian Sign Language (Libras). The research for constructing the instrument took place in 13 phases: 1) creating the QUALITY OF LIFE sign; 2) developing the answer scales in Libras; 3) translation by a bilingual group; 4) synthesized version; 5) first back translation; 6) production of the version in Libras to be provided to the focal groups; 7) carrying out the focal groups; 8) review by a monolingual group; 9) revision by the bilingual group; 10) semantic/syntactic analysis and second back translation; 11) re-evaluation of the back translation by the bilingual group; 12) recording the version into the software; 13) developing the WHOQOL-BREF and WHOQOL-DIS software in Libras. Characteristics peculiar to the culture of the deaf population indicated the necessity of adapting the application methodology of focal groups composed of deaf people. The writing conventions of sign languages have not yet been consolidated, leading to difficulties in graphically registering the translation phases. The linguistic structures that caused major problems in translation were those that included idiomatic Portuguese expressions, for many of which there are no equivalent concepts between Portuguese and Libras. In the end, it was possible to create the WHOQOL-BREF and WHOQOL-DIS software in Libras. The WHOQOL-BREF and WHOQOL-DIS in Libras will allow the deaf to express themselves about their quality of life in an autonomous way, making it possible to investigate these issues more accurately.

  6. INFORMATION TECHNOLOGIES IN MODERN LANGUAGE EDUCATION

    Directory of Open Access Journals (Sweden)

    N. Y. Gutareva

    2014-09-01

    Full Text Available This article traces the origins and purposes of applying information technologies in foreign language teaching from the perspectives of linguistics, foreign language teaching methodology, and psychology. Their main features have been identified in the works of native and foreign scholars in light of basic didactic principles, and new criteria for selecting computer programs are pointed out. The author focuses on modern technologies that are especially important and in demand in language education, as they serve the aims and objectives of foreign language teaching and the interests of students, while remaining safe to use. Purpose: to determine the advantages of using interactive means in teaching foreign languages. Methodology: study and analysis of psychological, pedagogical, and methodological literature on the research topic. Results: analysis of the purposes and kinds of interactive means has shown the importance of their application in practice. Practical implications: the results of this work can be used in courses on the theory and methodology of teaching foreign languages.

  7. The Neural Correlates of Highly Iconic Structures and Topographic Discourse in French Sign Language as Observed in Six Hearing Native Signers

    Science.gov (United States)

    Courtin, C.; Herve, P. -Y.; Petit, L.; Zago, L.; Vigneau, M.; Beaucousin, V.; Jobard, G.; Mazoyer, B.; Mellet, E.; Tzourio-Mazoyer, N.

    2010-01-01

    "Highly iconic" structures in Sign Language enable a narrator to act, switch characters, describe objects, or report actions in four dimensions. This group of linguistic structures has no real spoken-language equivalent. Topographical descriptions are also achieved in a sign-language-specific manner via the use of signing-space and…

  8. The corpus-driven revolution in Polish Sign Language: the interview with Dr. Paweł Rutkowski

    Directory of Open Access Journals (Sweden)

    Iztok Kosem

    2018-02-01

    Dr. Paweł Rutkowski is head of the Section for Sign Linguistics at the University of Warsaw. He is a general linguist and a specialist in the syntax of natural languages, carrying out research on Polish Sign Language (polski język migowy, PJM). He has been awarded a number of prizes, grants and scholarships by institutions such as the Foundation for Polish Science, the Polish Ministry of Science and Higher Education, the National Science Centre, Poland, the Polish–U.S. Fulbright Commission, the Kosciuszko Foundation and the DAAD. Dr. Rutkowski leads the team developing the Corpus of Polish Sign Language and the Corpus-based Dictionary of Polish Sign Language, the first dictionary of this language prepared in compliance with modern lexicographical standards. The dictionary is an open-access publication, available freely at http://www.slownikpjm.uw.edu.pl/en/. This interview took place at eLex 2017, a biennial conference on electronic lexicography, where Dr. Rutkowski was awarded the Adam Kilgarriff Prize and gave a keynote address entitled Sign language as a challenge to electronic lexicography: The Corpus-based Dictionary of Polish Sign Language and beyond. The interview was conducted by Dr. Victoria Nyst from Leiden University, Faculty of Humanities, and Dr. Iztok Kosem from the University of Ljubljana, Faculty of Arts.

  9. Suspending the next turn as a form of repair initiation: evidence from Argentine Sign Language

    Directory of Open Access Journals (Sweden)

    Elizabeth eManrique

    2015-09-01

    Practices of other-initiated repair deal with problems of hearing or understanding what another person has said in the fast-moving turn-by-turn flow of conversation. As such, other-initiated repair plays a fundamental role in the maintenance of intersubjectivity in social interaction. This study finds and analyses a special type of other-initiated repair that is used in turn-by-turn conversation in a sign language: Argentine Sign Language (Lengua de Señas Argentina, LSA). We describe a type of response termed a ‘freeze-look’, which occurs when a person has just been asked a direct question: instead of answering the question in the next turn position, the person holds still while looking directly at the questioner. In these cases it is clear that the person is aware of having just been addressed and is not otherwise accounting for their delay in responding (e.g., by displaying a ‘thinking’ face or hesitation). We find that this behavior functions as a way for an addressee to initiate repair by the person who asked the question. The ‘freeze-look’ results in the questioner ‘re-doing’ their action of asking a question, for example by repeating or rephrasing it. Thus we argue that the ‘freeze-look’ is a practice for other-initiation of repair. In addition, we argue that it is an ‘off-record’ practice, thus contrasting with known on-record practices such as saying ‘Huh?’ or equivalents. The findings aim to contribute to research on human understanding in everyday turn-by-turn conversation by looking at an understudied sign language, with possible implications for our understanding of visual bodily communication in spoken languages as well.

  10. First language acquisition differs from second language acquisition in prelingually deaf signers: evidence from sensitivity to grammaticality judgement in British Sign Language.

    Science.gov (United States)

    Cormier, Kearsy; Schembri, Adam; Vinson, David; Orfanidou, Eleni

    2012-07-01

    Age of acquisition (AoA) effects have been used to support the notion of a critical period for first language acquisition. In this study, we examine AoA effects in deaf British Sign Language (BSL) users via a grammaticality judgment task. When English reading performance and nonverbal IQ are factored out, results show that accuracy of grammaticality judgement decreases as AoA increases, until around age 8, thus showing the unique effect of AoA on grammatical judgement in early learners. No such effects were found in those who acquired BSL after age 8. These late learners appear to have first language proficiency in English instead, which may have been used to scaffold learning of BSL as a second language later in life. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Redesigning Technology Integration into World Language Education

    Science.gov (United States)

    Rodríguez, Julio C.

    2018-01-01

    This article describes how a multi-institutional, proficiency-based program engages stakeholders in design thinking to discover and explore solutions to perennial problems in technology integration into world language education (WLE). Examples of replicable activities illustrate the strategies used to fuel innovation efforts, including fostering…

  12. Journal of Language, Technology & Entrepreneurship in Africa ...

    African Journals Online (AJOL)

    Focus and Scope. The journal is cross-disciplinary and therefore it publishes articles from a wide-range of topics including language, technology, entrepreneurship, finance and communication. It is meant to promote dialogue across disciplines by emphasizing the interconnectedness of knowledge. It is ideal for scholars ...

  13. Eye gaze during comprehension of American Sign Language by native and beginning signers.

    Science.gov (United States)

    Emmorey, Karen; Thompson, Robin; Colvin, Rachael

    2009-01-01

    An eye-tracking experiment investigated where deaf native signers (N = 9) and hearing beginning signers (N = 10) look while comprehending a short narrative and a spatial description in American Sign Language produced live by a fluent signer. Both groups fixated primarily on the signer's face (more than 80% of the time) but differed with respect to fixation location. Beginning signers fixated on or near the signer's mouth, perhaps to better perceive English mouthing, whereas native signers tended to fixate on or near the eyes. Beginning signers shifted gaze away from the signer's face more frequently than native signers, but the pattern of gaze shifts was similar for both groups. When a shift in gaze occurred, the sign narrator was almost always looking at his or her hands and was most often producing a classifier construction. We conclude that joint visual attention and attention to mouthing (for beginning signers), rather than linguistic complexity or processing load, affect gaze fixation patterns during sign language comprehension.

  14. Handling movement epenthesis and hand segmentation ambiguities in continuous sign language recognition using nested dynamic programming.

    Science.gov (United States)

    Yang, Ruiduo; Sarkar, Sudeep; Loeding, Barbara

    2010-03-01

    We consider two crucial problems in continuous sign language recognition from unaided video sequences. At the sentence level, we consider the movement epenthesis (me) problem and at the feature level, we consider the problem of hand segmentation and grouping. We construct a framework that can handle both of these problems based on an enhanced, nested version of the dynamic programming approach. To address movement epenthesis, a dynamic programming (DP) process employs a virtual me option that does not need explicit models. We call this the enhanced level building (eLB) algorithm. This formulation also allows the incorporation of grammar models. Nested within this eLB is another DP that handles the problem of selecting among multiple hand candidates. We demonstrate our ideas on four American Sign Language data sets with simple background, with the signer wearing short sleeves, with complex background, and across signers. We compared the performance with Conditional Random Fields (CRF) and Latent Dynamic-CRF-based approaches. The experiments show more than 40 percent improvement over CRF or LDCRF approaches in terms of the frame labeling rate. We show the flexibility of our approach when handling a changing context. We also find a 70 percent improvement in sign recognition rate over the unenhanced DP matching algorithm that does not accommodate the me effect.
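    The enhanced level building idea described above can be illustrated with a toy dynamic program. This is a minimal sketch under stated assumptions, not the paper's implementation: sign models are short numeric feature templates, the matching cost is a plain sum of absolute differences (standing in for a real DTW/HMM score), and the per-frame ME penalty is an arbitrary illustrative value.

    ```python
    # Toy sketch of an "enhanced level building" decoder: a DP that explains a
    # frame sequence as a concatenation of sign models, with a virtual
    # movement-epenthesis (ME) option that needs no explicit model.
    # Models, features, and ME_PENALTY are hypothetical toy values.

    ME_PENALTY = 1.0  # fixed cost per frame labeled as movement epenthesis

    def match_cost(template, frames):
        """Sum of absolute differences (stand-in for a real DTW/HMM score)."""
        return sum(abs(a - b) for a, b in zip(template, frames))

    def elb_decode(frames, models):
        """Return (cost, labels) explaining all frames with signs or 'ME'."""
        n = len(frames)
        best = [(float("inf"), None)] * (n + 1)
        best[0] = (0.0, [])
        for t in range(1, n + 1):
            # Virtual ME option: absorb one frame without any sign model.
            c, labels = best[t - 1]
            if c + ME_PENALTY < best[t][0]:
                best[t] = (c + ME_PENALTY, labels + ["ME"])
            # Regular option: end some sign model exactly at frame t.
            for name, tpl in models.items():
                k = len(tpl)
                if k <= t:
                    c, labels = best[t - k]
                    cost = c + match_cost(tpl, frames[t - k:t])
                    if cost < best[t][0]:
                        best[t] = (cost, labels + [name])
        return best[n]

    models = {"HELLO": [1, 2, 3], "YOU": [5, 5]}
    frames = [1, 2, 3, 9, 5, 5]  # 9 acts as a transition frame between signs
    cost, labels = elb_decode(frames, models)
    print(labels)  # → ['HELLO', 'ME', 'YOU']
    ```

    The transition frame (value 9) matches no model well, so the decoder labels it ME rather than forcing it into a sign, which is the core benefit the abstract attributes to the virtual ME option.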

  15. Implicit co-activation of American Sign Language in deaf readers: An ERP study.

    Science.gov (United States)

    Meade, Gabriela; Midgley, Katherine J; Sevcikova Sehyr, Zed; Holcomb, Phillip J; Emmorey, Karen

    2017-07-01

    In an implicit phonological priming paradigm, deaf bimodal bilinguals made semantic relatedness decisions for pairs of English words. Half of the semantically unrelated pairs had phonologically related translations in American Sign Language (ASL). As in previous studies with unimodal bilinguals, targets in pairs with phonologically related translations elicited smaller negativities than targets in pairs with phonologically unrelated translations within the N400 window. This suggests that the same lexicosemantic mechanism underlies implicit co-activation of a non-target language, irrespective of language modality. In contrast to unimodal bilingual studies that find no behavioral effects, we observed phonological interference, indicating that bimodal bilinguals may not suppress the non-target language as robustly. Further, there was a subset of bilinguals who were aware of the ASL manipulation (determined by debrief), and they exhibited an effect of ASL phonology in a later time window (700–900 ms). Overall, these results indicate modality-independent language co-activation that persists longer for bimodal bilinguals. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Communication Access for Deaf People in Healthcare Settings: Understanding the Work of American Sign Language Interpreters.

    Science.gov (United States)

    Olson, Andrea M; Swabey, Laurie

    Despite federal laws that mandate equal access and communication in all healthcare settings for deaf people, consistent provision of quality interpreting in healthcare settings is still not a reality, as recognized by deaf people and American Sign Language (ASL)-English interpreters. The purpose of this study was to better understand the work of ASL interpreters employed in healthcare settings, which can then inform the training and credentialing of interpreters, with the ultimate aim of improving the quality of healthcare and communication access for deaf people. Based on a job analysis, researchers designed an online survey with 167 task statements representing 44 categories. American Sign Language interpreters (N = 339) rated the importance of, and frequency with which they performed, each of the 167 tasks. Categories with the highest average importance ratings included language and interpreting, situation assessment, ethical and professional decision making, managing the discourse, and monitoring, managing and/or coordinating appointments. Categories with the highest average frequency ratings included the following: dress appropriately, adapt to a variety of physical settings and locations, adapt to working with a variety of providers in a variety of roles, deal with uncertain and unpredictable work situations, and demonstrate cultural adaptability. To achieve health equity for the deaf community, the training and credentialing of interpreters need to be systematically addressed.

  17. Functional connectivity in task-negative network of the Deaf: effects of sign language experience

    Directory of Open Access Journals (Sweden)

    Evie Malaia

    2014-06-01

    Prior studies investigating cortical processing in Deaf signers suggest that life-long experience with sign language and/or auditory deprivation may alter the brain's anatomical structure and the function of brain regions typically recruited for auditory processing (Emmorey et al., 2010; Pénicaud et al., 2013, inter alia). We report the first investigation of the task-negative network in Deaf signers and its functional connectivity—the temporal correlations among spatially remote neurophysiological events. We show that Deaf signers manifest increased functional connectivity between the posterior cingulate/precuneus and the left medial temporal gyrus (MTG), as well as between the inferior parietal lobe and the medial temporal gyrus in the right hemisphere, areas that have been found to show functional recruitment specifically during sign language processing. These findings suggest that the organization of the brain at the level of inter-network connectivity is likely affected by experience with processing visual language, although sensory deprivation could be another source of the difference. We hypothesize that connectivity alterations in the task-negative network reflect predictive/automatized processing of the visual signal.

  18. Constructed Action, the Clause and the Nature of Syntax in Finnish Sign Language

    Directory of Open Access Journals (Sweden)

    Jantunen Tommi

    2017-01-01

    This paper investigates the interplay of constructed action and the clause in Finnish Sign Language (FinSL). Constructed action is a form of gestural enactment in which signers use their hands, face and other parts of the body to represent the actions, thoughts or feelings of someone they are referring to in the discourse. With the help of frequencies calculated from corpus data, this article shows firstly that when FinSL signers are narrating a story, there are differences in how they use constructed action. The paper then argues that there are also differences in the prototypical structure, linkage type and non-manual activity of clauses, depending on the presence or absence of constructed action. Finally, taking the view that gesturality is an integral part of language, the paper discusses the nature of syntax in sign languages and proposes a conceptualization in which syntax is seen as a set of norms distributed on a continuum between a categorial-conventional end and a gradient-unconventional end.

  19. Thinking through ethics: the processes of ethical decision-making by novice and expert American Sign Language interpreters

    OpenAIRE

    Mendoza, Mary Elizabeth

    2010-01-01

    In the course of their work, sign language interpreters are faced with ethical dilemmas that require prioritizing competing moral beliefs and views on professional practice. There are several decision-making models; however, little research has been done on how sign language interpreters learn to identify and make ethical decisions. Through surveys and interviews on ethical decision-making, this study investigates how expert and novice interpreters discuss their ethical decision-making proces...

  20. Signed language and human action processing: evidence for functional constraints on the human mirror-neuron system.

    Science.gov (United States)

    Corina, David P; Knapp, Heather Patterson

    2008-12-01

    In the quest to further understand the neural underpinning of human communication, researchers have turned to studies of naturally occurring signed languages used in Deaf communities. The comparison of the commonalities and differences between spoken and signed languages provides an opportunity to determine core neural systems responsible for linguistic communication independent of the modality in which a language is expressed. The present article examines such studies, and in addition asks what we can learn about human languages by contrasting formal visual-gestural linguistic systems (signed languages) with more general human action perception. To understand visual language perception, it is important to distinguish the demands of general human motion processing from the highly task-dependent demands associated with extracting linguistic meaning from arbitrary, conventionalized gestures. This endeavor is particularly important because theorists have suggested close homologies between perception and production of actions and functions of human language and social communication. We review recent behavioral, functional imaging, and neuropsychological studies that explore dissociations between the processing of human actions and signed languages. These data suggest incomplete overlap between the mirror-neuron systems proposed to mediate human action and language.

  1. Neural organization of linguistic short-term memory is sensory modality-dependent: evidence from signed and spoken language.

    Science.gov (United States)

    Pa, Judy; Wilson, Stephen M; Pickell, Herbert; Bellugi, Ursula; Hickok, Gregory

    2008-12-01

    Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory-motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.

  2. Depictions and minifiction: a reflection on translation of micro-story as didactics of sign language interpreters training in colombia.

    Directory of Open Access Journals (Sweden)

    Alex Giovanny Barreto

    2015-10-01

    This article reflects on a translation-practice approach to sign language interpreter education centered on communicative competence. Experience with this approach began in several workshops of the Association of Translators and Interpreters of Sign Language of Colombia (ANISCOL) and has now been formalized in the bachelor of education degree project in signed languages developed within Research Group UMBRAL at the National Open and Distance University of Colombia (UNAD). The didactic proposal draws on Gile's effort model, specifically the production and listening efforts, and presents a critique of translation competence. Minifiction is a literary genre with multiple semiotic and philosophical translation possibilities; these texts have great potential to be rendered through the visual, gestural and spatial depictions of Colombian Sign Language, which makes them valuable for interpreter training and education. Through a sign language translation of El Dinosaurio, the article concludes with an outline of, and reflections on, the pedagogical and didactic potential of minifiction and depictions in the design of training activities for sign language interpreters.

  3. American Sign Language Alphabet Recognition Using a Neuromorphic Sensor and an Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Miguel Rivera-Acosta

    2017-09-01

    This paper reports the design and analysis of an American Sign Language (ASL) alphabet translation system implemented in hardware using a Field-Programmable Gate Array. The system process consists of three stages, the first being communication with the neuromorphic camera (also called a Dynamic Vision Sensor, DVS) over the Universal Serial Bus protocol. The feature extraction of the events generated by the DVS is the second stage of the process, consisting of digital image processing algorithms developed in software, which aim to reduce redundant information and prepare the data for the third stage. The last stage of the system process is the classification of the ASL alphabet, achieved with a single artificial neural network implemented in digital hardware for higher speed. The overall result is a classification system based on the contours of the ASL signs, fully implemented in a reconfigurable device. The experimental results consist of a comparative analysis of the recognition rate among the alphabet signs captured with the neuromorphic camera, in order to verify the proper operation of the digital image processing algorithms. In experiments performed with 720 samples of 24 signs, a recognition accuracy of 79.58% was obtained.
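    The event-to-feature stage described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's pipeline: the grid size, binarization threshold, and the row-span "contour" descriptor are all invented for the example; a DVS actually emits asynchronous (x, y, polarity, timestamp) events.

    ```python
    # Toy sketch of a DVS feature-extraction stage: events (x, y, polarity)
    # are accumulated into a binary frame, and a crude contour descriptor is
    # derived for a downstream classifier. All parameters are illustrative.

    def events_to_frame(events, width=8, height=8, threshold=2):
        """Accumulate event counts per pixel and binarize."""
        counts = [[0] * width for _ in range(height)]
        for x, y, _polarity in events:
            counts[y][x] += 1
        return [[1 if c >= threshold else 0 for c in row] for row in counts]

    def contour_descriptor(frame):
        """Per-row span width of the active region (a crude contour feature)."""
        desc = []
        for row in frame:
            cols = [i for i, v in enumerate(row) if v]
            desc.append(cols[-1] - cols[0] + 1 if cols else 0)
        return desc

    events = [(2, 1, 1), (2, 1, -1), (3, 1, 1), (3, 1, 1),
              (2, 2, 1), (2, 2, 1), (4, 2, -1), (4, 2, 1)]
    frame = events_to_frame(events)
    print(contour_descriptor(frame))  # → [0, 2, 3, 0, 0, 0, 0, 0]
    ```

    A fixed-length descriptor like this is the kind of compact input a small hardware neural network can classify quickly, which matches the motivation the abstract gives for reducing redundant event information.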

  4. Where to Look for American Sign Language (ASL) Sublexical Structure in the Visual World: Reply to Salverda (2016)

    Science.gov (United States)

    Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.

    2016-01-01

    In this reply to Salverda (2016), we address a critique of the claims made in our recent study of real-time processing of American Sign Language (ASL) signs using a novel visual world eye-tracking paradigm (Lieberman, Borovsky, Hatrak, & Mayberry, 2015). Salverda asserts that our data do not support our conclusion that native signers and…

  5. Usability of American Sign Language Videos for Presenting Mathematics Assessment Content.

    Science.gov (United States)

    Hansen, Eric G; Loew, Ruth C; Laitusis, Cara C; Kushalnagar, Poorna; Pagliaro, Claudia M; Kurz, Christopher

    2018-04-12

    There is considerable interest in determining whether high-quality American Sign Language videos can be used as an accommodation in tests of mathematics at both K-12 and postsecondary levels; and in learning more about the usability (e.g., comprehensibility) of ASL videos with two different types of signers - avatar (animated figure) and human. The researchers describe the results of administering each of nine pre-college mathematics items in both avatar and human versions to each of 31 Deaf participants with high school and post-high school backgrounds. This study differed from earlier studies by obliging the participants to rely on the ASL videos to answer the items. While participants preferred the human version over the avatar version (apparently due largely to the better expressiveness and fluency of the human), there was no discernible relationship between mathematics performance and signed version.

  6. EVALUATIVE LANGUAGE IN SPOKEN AND SIGNED STORIES TOLD BY A DEAF CHILD WITH A COCHLEAR IMPLANT: WORDS, SIGNS OR PARALINGUISTIC EXPRESSIONS?

    Directory of Open Access Journals (Sweden)

    Ritva Takkinen

    2011-01-01

    In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation is analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish Sign Language (FinSL) and spoken Finnish. He was born deaf but received a cochlear implant at the age of five. The data consist of a spoken and a signed version of “The Frog Story”. The analysis shows that evaluative devices and expressions differ in the spoken and signed stories told by the child. In his Finnish story he uses mostly lexical devices – comments on a character and the character’s actions, as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.

  7. BLENDED TECHNOLOGY IN LEARNING FOREIGN LANGUAGES

    Directory of Open Access Journals (Sweden)

    Natalia Alexandrovna Kameneva

    2013-11-01

    This article analyzes the use of information technologies in the context of a blended technology approach to learning foreign languages in higher education institutions. Distance learning tools can be categorized as being synchronous (webinars, video conferencing, case technology, chat, ICQ, Skype, interactive whiteboards) or asynchronous (blogs, forums, Twitter, video and audio podcasts, wikis, on-line testing). Sociological and psychological aspects of their application in the educational process are also considered. DOI: http://dx.doi.org/10.12731/2218-7405-2013-8-41

  8. Visualizing Patient Journals by Combining Vital Signs Monitoring and Natural Language Processing

    DEFF Research Database (Denmark)

    Vilic, Adnan; Petersen, John Asger; Hoppe, Karsten

    2016-01-01

    This paper presents a data-driven approach to graphically presenting text-based patient journals while still maintaining all textual information. The system first creates a timeline representation of a patient's physiological condition during an admission, assessed by electronically monitoring vital signs and then combining these into Early Warning Scores (EWS). Hereafter, techniques from Natural Language Processing (NLP) are applied to the existing patient journal to extract all entries. Finally, the two methods are combined into an interactive timeline that highlights drastic changes in the patient's health, enabling staff to see where in the journal critical events have taken place.
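    The vital-signs-to-EWS step works by mapping each vital sign to a 0–3 sub-score and summing the sub-scores. The sketch below uses simplified, made-up scoring bands for two vitals; real EWS charts (such as the NEWS system) define clinically validated ranges for more parameters, so `band_score` and the band tables here are illustrative assumptions only.

    ```python
    # Minimal sketch of an Early Warning Score (EWS) computation: each vital
    # sign maps to a 0-3 sub-score via threshold bands, and sub-scores sum.
    # The bands below are simplified illustrations, not a clinical chart.

    def band_score(value, bands):
        """bands: list of (low, high, score); returns the first matching score."""
        for low, high, score in bands:
            if low <= value <= high:
                return score
        return 3  # outside all listed bands: most abnormal

    HEART_RATE_BANDS = [(51, 90, 0), (91, 110, 1), (41, 50, 1), (111, 130, 2)]
    RESP_RATE_BANDS = [(12, 20, 0), (21, 24, 2), (9, 11, 1)]

    def early_warning_score(heart_rate, resp_rate):
        return (band_score(heart_rate, HEART_RATE_BANDS)
                + band_score(resp_rate, RESP_RATE_BANDS))

    # One EWS value per observation yields the timeline of the paper's
    # first stage, onto which NLP-extracted journal entries can be anchored.
    observations = [(80, 16), (105, 22), (135, 26)]
    timeline = [early_warning_score(hr, rr) for hr, rr in observations]
    print(timeline)  # → [0, 3, 6]
    ```

    A rising sequence like this is exactly the kind of "drastic change" the interactive timeline is meant to surface alongside the corresponding journal text.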

  9. Parametric Representation of the Speaker's Lips for Multimodal Sign Language and Speech Recognition

    Science.gov (United States)

    Ryumin, D.; Karpov, A. A.

    2017-05-01

    In this article, we propose a new method for the parametric representation of the human lips region. The functional diagram of the method is described, and implementation details with an explanation of its key stages and features are given. The results of automatic detection of the regions of interest are illustrated, and the speed of the method on several computers with different performance levels is reported. This universal method allows the parametric representation of the speaker's lips to be applied to tasks in biometrics, computer vision, machine learning, and automatic recognition of faces, elements of sign languages, and audio-visual speech, including lip-reading.

  10. COMPARATIVE ANALYSIS OF THE STRUCTURE OF THE AMERICAN AND MACEDONIAN SIGN LANGUAGE

    OpenAIRE

    Aleksandra KAROVSKA RISTOVSKA

    2014-01-01

    Aleksandra Karovska Ristovska, M.A. in special education and rehabilitation sciences, defended her doctoral thesis on 9 March 2014 at the Institute of Special Education and Rehabilitation, Faculty of Philosophy, University “Ss. Cyril and Methodius” in Skopje, before a commission composed of: Prof. Zora Jachova, PhD; Prof. Jasmina Kovachevikj, PhD; Prof. Ljudmil Spasov, PhD; Prof. Goran Ajdinski, PhD; and Prof. Daniela Dimitrova Radojicikj, PhD. The Macedonian Sign Language is a natural ...

  11. Using American sign language interpreters to facilitate research among deaf adults: lessons learned.

    Science.gov (United States)

    Sheppard, Kate

    2011-04-01

    Health care providers commonly discuss depressive symptoms with clients, enabling earlier intervention. Such discussions rarely occur between providers and Deaf clients. Most culturally Deaf adults experience early-onset hearing loss, self-identify as part of a unique culture, and communicate in the visual language of American Sign Language (ASL). Communication barriers abound, and depression screening instruments may be unreliable. The aim of this study was to train and use ASL interpreters for a qualitative study describing depressive symptoms among Deaf adults. Training covered the differences between research and community interpreting. During data collection, interpreters translated to and from voiced English and ASL. The training eliminated potential problems during data collection. Unexpected issues included participants asking for "my interpreter" and worrying about confidentiality or friendship in a small community. Lessons learned included the value of carefully training interpreters prior to initiating data collection, including resolving possible role conflicts and ensuring conceptual equivalence in real-time interpreting.

  12. Functional changes in people with different hearing status and experiences of using Chinese sign language: an fMRI study.

    Science.gov (United States)

    Li, Qiang; Xia, Shuang; Zhao, Fei; Qi, Ji

    2014-01-01

    The purpose of this study was to assess functional changes in the cerebral cortex in people with different sign language experience and hearing status whilst observing and imitating Chinese Sign Language (CSL) using functional magnetic resonance imaging (fMRI). 50 participants took part in the study, and were divided into four groups according to their hearing status and experience of using sign language: prelingual deafness signer group (PDS), normal hearing non-signer group (HnS), native signer group with normal hearing (HNS), and acquired signer group with normal hearing (HLS). fMRI images were scanned from all subjects when they performed block-designed tasks that involved observing and imitating sign language stimuli. Nine activation areas were found in response to undertaking either observation or imitation CSL tasks and three activated areas were found only when undertaking the imitation task. Of those, the PDS group had significantly greater activation areas in terms of the cluster size of the activated voxels in the bilateral superior parietal lobule, cuneate lobe and lingual gyrus in response to undertaking either the observation or the imitation CSL task than the HnS, HNS and HLS groups. The PDS group also showed significantly greater activation in the bilateral inferior frontal gyrus which was also found in the HNS or the HLS groups but not in the HnS group. This indicates that deaf signers have better sign language proficiency, because they engage more actively with the phonetic and semantic elements. In addition, the activations of the bilateral superior temporal gyrus and inferior parietal lobule were only found in the PDS group and HNS group, and not in the other two groups, which indicates that the area for sign language processing appears to be sensitive to the age of language acquisition. After reading this article, readers will be able to: discuss the relationship between sign language and its neural mechanisms. Copyright © 2014 Elsevier Inc

  13. Indigenous Language Revitalization, Promotion, and Education: Function of Digital Technology

    Science.gov (United States)

    Galla, Candace Kaleimamoowahinekapu

    2016-01-01

    Within the last two decades, there has been increased interest in how technology supports Indigenous language revitalization and reclamation efforts. This paper considers the effect technology has on Indigenous language learning and teaching, while conceptualizing how language educators, speakers, learners, and technology users holistically…

  14. Recognition of American Sign Language (ASL) Classifiers in a Planetarium Using a Head-Mounted Display

    Science.gov (United States)

    Hintz, Eric G.; Jones, Michael; Lawler, Jeannette; Bench, Nathan

    2015-01-01

    A traditional accommodation for the deaf or hard-of-hearing in a planetarium show is some type of captioning system or a signer on the floor. Both of these have significant drawbacks given the nature of a planetarium show. Young audience members who are deaf likely don't have the reading skills needed to make a captioning system effective. A signer on the floor requires light which can then splash onto the dome. We have examined the potential of using a Head-Mounted Display (HMD) to provide an American Sign Language (ASL) translation. Our preliminary test used a canned planetarium show with a pre-recorded sound track. Since many astronomical objects don't have official ASL signs, the signer had to use classifiers to describe the different objects. Since these are not official signs, these classifiers provided a way to test to see if students were picking up the information using the HMD.We will present results that demonstrate that the use of HMDs is at least as effective as projecting a signer on the dome. This also showed that the HMD could provide the necessary accommodation for students for whom captioning was ineffective. We will also discuss the current effort to provide a live signer without the light splash effect and our early results on teaching effectiveness with HMDs.This work is partially supported by funding from the National Science Foundation grant IIS-1124548 and the Sorenson Foundation.

  15. Computer-Assisted Language Learning: proceedings of the seventh Twente Workshop on Language Technology

    NARCIS (Netherlands)

    Appelo, L.; de Jong, Franciska M.G.

    1994-01-01

    TWLT is an acronym for Twente Workshop(s) on Language Technology. These workshops on natural language theory and technology are organised by Project Parlevink (sometimes with the help of others), a language theory and technology project conducted at the Department of Computer Science of the

  16. Language lateralization of hearing native signers: A functional transcranial Doppler sonography (fTCD) study of speech and sign production.

    Science.gov (United States)

    Gutierrez-Sigut, Eva; Daws, Richard; Payne, Heather; Blott, Jonathan; Marshall, Chloë; MacSweeney, Mairéad

    2015-12-01

    Neuroimaging studies suggest greater involvement of the left parietal lobe in sign language compared to speech production. This stronger activation might be linked to the specific demands of sign encoding and proprioceptive monitoring. In Experiment 1 we investigated hemispheric lateralization during sign and speech generation in hearing native users of English and British Sign Language (BSL). Participants exhibited stronger lateralization during BSL than English production. In Experiment 2 we investigated whether this increased lateralization index could be due exclusively to the higher motoric demands of sign production. Sign-naïve participants performed a phonological fluency task in English and a non-sign repetition task. Participants were left-lateralized in the phonological fluency task, but there was no consistent pattern of lateralization for non-sign repetition in these hearing non-signers. The current data demonstrate stronger left-hemisphere lateralization for producing signs than speech, which was not primarily driven by motoric articulatory demands. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  17. How Do Typically Developing Deaf Children and Deaf Children with Autism Spectrum Disorder Use the Face When Comprehending Emotional Facial Expressions in British Sign Language?

    Science.gov (United States)

    Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John

    2014-01-01

    Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their…

  18. Naturalizing language: human appraisal and (quasi) technology

    DEFF Research Database (Denmark)

    Cowley, Stephen

    2013-01-01

    Using contemporary science, the paper builds on Wittgenstein's views of human language. Rather than ascribing reality to inscription-like entities, it links embodiment with distributed cognition. The verbal or (quasi) technological aspect of language is traced not to action, but to human-specific interactivity. This species-specific form of sense-making sustains, among other things, using texts, making/construing phonetic gestures and thinking. Human action is thus grounded in appraisals or sense-saturated coordination. To illustrate interactivity at work, the paper focuses on a case study. Over 11 seconds, a crime scene investigator infers that she is probably dealing with an inside job: she uses not words, but intelligent gaze. This connects professional expertise to circumstances and the feeling of thinking. It is suggested that, as for other species, human appraisal is based in synergies. However, since…

  19. Reading books with young deaf children: strategies for mediating between American Sign Language and English.

    Science.gov (United States)

    Berke, Michele

    2013-01-01

    Research on shared reading has shown positive results on children's literacy development in general and for deaf children specifically; however, reading techniques might differ between these two populations. Families with deaf children, especially those with deaf parents, often capitalize on their children's visual attributes rather than primarily auditory cues. These techniques are believed to provide a foundation for their deaf children's literacy skills. This study examined 10 deaf mother/deaf child dyads with children between 3 and 5 years of age. Dyads were videotaped in their homes on at least two occasions reading books that were provided by the researcher. Descriptive analysis showed specifically how deaf mothers mediate between the two languages, American Sign Language (ASL) and English, while reading. These techniques can be replicated and taught to all parents of deaf children so that they can engage in more effective shared reading activities. Research has shown that shared reading, or the interaction of a parent and child with a book, is an effective way to promote language and literacy, vocabulary, grammatical knowledge, and metalinguistic awareness (Snow, 1983), making it critical for educators to promote shared reading activities at home between parent and child. Not all parents read to their children in the same way. For example, parents of deaf children may present the information in the book differently due to the fact that signed languages are visual rather than spoken. In this vein, we can learn more about what specific connections deaf parents make to the English print. Exploring strategies deaf mothers may use to link the English print through the use of ASL will provide educators with additional tools when working with all parents of deaf children. 
This article will include a review of the literature on the benefits of shared reading activities for all children, the relationship between ASL and English skill development, and the techniques

  20. Routes to short term memory indexing: Lessons from deaf native users of American Sign Language

    Science.gov (United States)

    Hirshorn, Elizabeth A.; Fernandez, Nina M.; Bavelier, Daphne

    2012-01-01

    Models of working memory (WM) have been instrumental in understanding foundational cognitive processes and sources of individual differences. However, current models cannot conclusively explain the consistent group differences between deaf signers and hearing speakers on a number of short-term memory (STM) tasks. Here we take the perspective that these results are not due to a temporal order-processing deficit in deaf individuals, but rather reflect different biases in how different types of memory cues are used to do a given task. We further argue that the main driving force behind the shifts in relative biasing is a consequence of language modality (sign vs. speech) and the processing they afford, and not deafness, per se. PMID:22871205

  1. Engaging the Deaf American Sign Language Community: Lessons From a Community-Based Participatory Research Center

    Science.gov (United States)

    McKee, Michael; Thew, Denise; Starr, Matthew; Kushalnagar, Poorna; Reid, John T.; Graybill, Patrick; Velasquez, Julia; Pearson, Thomas

    2013-01-01

    Background: Numerous publications demonstrate the importance of community-based participatory research (CBPR) in community health research, but few target the Deaf community. The Deaf community is understudied and underrepresented in health research despite suspected health disparities and communication barriers. Objectives: The goal of this paper is to share the lessons learned from the implementation of CBPR in an understudied community of Deaf American Sign Language (ASL) users in the greater Rochester, New York, area. Methods: We review the process of CBPR in a Deaf ASL community and identify the lessons learned. Results: Key CBPR lessons include the importance of engaging and educating the community about research, ensuring that research benefits the community, using peer-based recruitment strategies, and sustaining community partnerships. These lessons informed subsequent research activities. Conclusions: This report focuses on the use of CBPR principles in a Deaf ASL population; lessons learned can be applied to research with other challenging-to-reach populations. PMID:22982845

  2. Negotiating legitimacy in American Sign Language interpreting education: Uneasy belonging in a community of practice

    Directory of Open Access Journals (Sweden)

    Michele Friedner

    2018-02-01

    This article ethnographically explores how American Sign Language-English interpreting students negotiate and foreground different kinds of relationships to claim legitimacy in relation to deaf people and the deaf community. As the field of interpreting shifts from community interpreting to professionalization, interpreting students endeavor to legitimize their involvement in the field. Students create distinctions between themselves and other students through relational work that involves positive and negative interpretations of kinship terms. In analyzing interpreting students' gate-keeping practices, this article explores the categories and definitions used by interpreting students and argues that category trouble occurs: identity and kinship categories are not nuanced or critically interrogated, resulting in deaf people and interpreters being represented in static ways.

  3. Cognitive Metaphors Used in Colombian Sign Language in Five Autobiographical Stories and the Image Schemata They Are Related to

    Directory of Open Access Journals (Sweden)

    Yenny Rodríguez Hernández

    2016-06-01

    This paper reports the results of an exploratory study whose purpose was to identify and characterize the metaphors in a sample of five videos in Colombian Sign Language (LSC, from its name in Spanish). The data were analyzed using theoretical contributions from Lakoff and Johnson's theories (1980) on cognitive metaphors and image schemata, and from Wilcox (2000) and Taub (2001) on double mapping in sign language. The results present a frequency analysis of the image schemata and the metaphors found in metaphorical expressions in five autobiographical narratives by five congenitally deaf adults. The study concludes that sign language has cognitive metaphors that let deaf people map from a concrete domain onto an abstract one in order to build concepts.

  4. A qualitative exploration of trial-related terminology in a study involving Deaf British Sign Language users.

    Science.gov (United States)

    Young, Alys; Oram, Rosemary; Dodds, Claire; Nassimi-Green, Catherine; Belk, Rachel; Rogers, Katherine; Davies, Linda; Lovell, Karina

    2016-04-27

    Internationally, few clinical trials have involved Deaf people who use a signed language, and none have involved BSL (British Sign Language) users. Appropriate terminology in BSL for key concepts in clinical trials that are relevant to recruitment and participant information materials, to support informed consent, does not exist. Barriers to conceptual understanding of trial participation and sources of misunderstanding relevant to the Deaf community are undocumented. A qualitative, community-participatory exploration of trial terminology, including conceptual understanding of 'randomisation', 'trial', 'informed choice' and 'consent', was facilitated in BSL, involving 19 participants in five focus groups. Data were video-recorded and analysed in the source language (BSL) using a phenomenological approach. Six necessary conditions for developing trial information to support comprehension were identified. These included: developing appropriate expressions and terminology from a community basis, rather than testing out previously derived translations from a different language; paying attention to language-specific features which support the best means of expression (in the case of BSL, expectations of specificity, verb directionality, handshape); bilingual influences on comprehension; deliberate orientation of information to avoid misunderstanding, not just to promote accessibility; sensitivity to barriers to discussion about the intelligibility of information that are cultural and social in origin, rather than linguistic; and the importance of using contemporary language-in-use, rather than jargon-free or plain language, to support meaningful understanding. The study reinforces the ethical imperative to ensure trial participants who are Deaf are provided with optimum resources to understand the implications of participation and to make an informed choice. Results are relevant to the development of trial information in other signed languages as well as in spoken/written languages when

  5. The Cognitive Neuroscience of Sign Language: Engaging Undergraduate Students' Critical Thinking Skills Using the Primary Literature.

    Science.gov (United States)

    Stevens, Courtney

    2015-01-01

    This article presents a modular activity on the neurobiology of sign language that engages undergraduate students in reading and analyzing the primary functional magnetic resonance imaging (fMRI) literature. Drawing on a seed empirical article and subsequently published critique and rebuttal, students are introduced to a scientific debate concerning the functional significance of right-hemisphere recruitment observed in some fMRI studies of sign language processing. The activity requires minimal background knowledge and is not designed to provide students with a specific conclusion regarding the debate. Instead, the activity and set of articles allow students to consider key issues in experimental design and analysis of the primary literature, including critical thinking regarding the cognitive subtractions used in blocked-design fMRI studies, as well as possible confounds in comparing results across different experimental tasks. By presenting articles representing different perspectives, each cogently argued by leading scientists, the readings and activity also model the type of debate and dialogue critical to science, but often invisible to undergraduate science students. Student self-report data indicate that undergraduates find the readings interesting and that the activity enhances their ability to read and interpret primary fMRI articles, including evaluating research design and considering alternate explanations of study results. As a stand-alone activity completed primarily in one 60-minute class block, the activity can be easily incorporated into existing courses, providing students with an introduction both to the analysis of empirical fMRI articles and to the role of debate and critique in the field of neuroscience.

  6. Towards the Development of a Mexican Speech-to-Sign-Language Translator for the Deaf Community

    Directory of Open Access Journals (Sweden)

    Santiago-Omar Caballero-Morales

    2012-03-01

    A significant portion of the Mexican population is deaf. This disorder restricts their social interaction with hearing people and vice versa. In this paper we present our advances towards the development of a Mexican Speech-to-Sign-Language translator to assist hearing people in interacting with deaf people. The proposed design methodology considers limited resources for (1) the development of the Mexican Automatic Speech Recognition (ASR) system, which is the main module of the translator, and (2) the Mexican Sign Language (MSL) vocabulary available to represent the decoded speech. Speech-to-MSL translation was accomplished with an accuracy level over 97% for test speakers different from those selected for ASR training.
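
    The second stage of such a translator (mapping recognized words onto the available sign vocabulary) can be sketched as follows. This is a minimal illustration only: the gloss entries and the fingerspelling fallback are assumptions for the sketch, not data or design details from the paper, whose ASR module is not reproduced here.

```python
# Hypothetical Spanish-word -> MSL-gloss lookup table.
# The entries are illustrative assumptions, not real MSL vocabulary data.
MSL_VOCAB = {"hola": "HELLO", "gracias": "THANK-YOU", "casa": "HOUSE"}

def words_to_signs(words, vocab=MSL_VOCAB):
    """Map ASR output words to sign glosses; out-of-vocabulary words
    fall back to letter-by-letter fingerspelling."""
    signs = []
    for word in words:
        key = word.lower()
        if key in vocab:
            signs.append(vocab[key])
        else:
            # fingerspell unknown words, one sign per letter
            signs.append("FS:" + "-".join(key.upper()))
    return signs

print(words_to_signs(["Hola", "amigo"]))  # → ['HELLO', 'FS:A-M-I-G-O']
```

    A limited vocabulary plus a fingerspelling fallback is one common way a translator can stay usable when the decoded speech contains words with no corresponding sign.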

  7. Early Sign Language Experience Goes along with an Increased Cross-Modal Gain for Affective Prosodic Recognition in Congenitally Deaf CI Users

    Science.gov (United States)

    Fengler, Ineke; Delfau, Pia-Céline; Röder, Brigitte

    2018-01-01

    It is yet unclear whether congenitally deaf cochlear implant (CD CI) users' visual and multisensory emotion perception is influenced by their history in sign language acquisition. We hypothesized that early-signing CD CI users, relative to late-signing CD CI users and hearing, non-signing controls, show better facial expression recognition and…

  8. Non parametric, self organizing, scalable modeling of spatiotemporal inputs: the sign language paradigm.

    Science.gov (United States)

    Caridakis, G; Karpouzis, K; Drosopoulos, A; Kollias, S

    2012-12-01

    Modeling and recognizing spatiotemporal, as opposed to static, input is a challenging task since it incorporates input dynamics as part of the problem. The vast majority of existing methods tackle the problem as an extension of the static counterpart, using dynamics such as input derivatives at the feature level and adopting artificial intelligence and machine learning techniques originally designed for problems that do not specifically address the temporal aspect. The proposed approach deals with the temporal and spatial aspects of the spatiotemporal domain in a discriminative as well as coupled manner. Self-Organizing Maps (SOM) model the spatial aspect of the problem, while Markov models capture its temporal counterpart. Incorporating adjacency, both in training and classification, enhances the overall architecture with robustness and adaptability. The proposed scheme is validated both theoretically, through an error propagation study, and experimentally, on the recognition of individual signs performed by different native Greek Sign Language users. Results illustrate the architecture's superiority when compared to Hidden Markov Model techniques and variations, both in terms of classification performance and computational cost. Copyright © 2012 Elsevier Ltd. All rights reserved.
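
    The SOM-plus-Markov decomposition described in this abstract can be illustrated with a minimal sketch: a tiny self-organizing map quantizes 2-D positions into discrete units, and a per-class Markov transition matrix models the temporal order of those units. Everything below (trajectory shapes, map size, learning schedule) is an illustrative assumption, not the authors' implementation.

```python
import math

def train_som(data, n_units=4, epochs=30, lr=0.5):
    """Tiny 1-D SOM: learns n_units prototype vectors that quantize
    the spatial aspect of the input (e.g. 2-D hand positions)."""
    step = max(1, len(data) // n_units)
    protos = [list(data[i * step]) for i in range(n_units)]  # spread init
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)  # decaying learning rate
        for x in data:
            # best-matching unit = nearest prototype
            bmu = min(range(n_units), key=lambda i: sum(
                (p - a) ** 2 for p, a in zip(protos[i], x)))
            for i in range(n_units):
                h = math.exp(-abs(i - bmu))  # neighbourhood on the 1-D grid
                protos[i] = [p + rate * h * (a - p)
                             for p, a in zip(protos[i], x)]
    return protos

def quantize(seq, protos):
    """Replace each input vector by the index of its nearest prototype."""
    return [min(range(len(protos)), key=lambda i: sum(
        (p - a) ** 2 for p, a in zip(protos[i], x))) for x in seq]

def transition_matrix(symbol_seqs, n_states, smooth=1.0):
    """Per-class Markov model: smoothed transition probabilities."""
    t = [[smooth] * n_states for _ in range(n_states)]
    for seq in symbol_seqs:
        for a, b in zip(seq, seq[1:]):
            t[a][b] += 1
    return [[c / sum(row) for c in row] for row in t]

def log_likelihood(symbols, trans):
    return sum(math.log(trans[a][b]) for a, b in zip(symbols, symbols[1:]))

# Two artificial "signs": a hand moving left-to-right vs right-to-left.
sign_a = [[(i / 9, 0.0) for i in range(10)] for _ in range(5)]
sign_b = [[(1 - i / 9, 0.0) for i in range(10)] for _ in range(5)]
protos = train_som([p for traj in sign_a + sign_b for p in traj])
model_a = transition_matrix([quantize(t, protos) for t in sign_a], 4)
model_b = transition_matrix([quantize(t, protos) for t in sign_b], 4)
probe = quantize(sign_a[0], protos)  # classify by higher likelihood
assert log_likelihood(probe, model_a) > log_likelihood(probe, model_b)
```

    The two classes share the same spatial prototypes; only the order of unit visits distinguishes them, which is exactly the division of labour between the spatial (SOM) and temporal (Markov) components that the abstract describes.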

  9. Phonological processing of rhyme in spoken language and location in sign language by deaf and hearing participants: a neurophysiological study.

    Science.gov (United States)

    Colin, C; Zuinen, T; Bayard, C; Leybaert, J

    2013-06-01

    Sign languages (SL), like oral languages (OL), organize elementary, meaningless units into meaningful semantic units. Our aim was to compare, at behavioral and neurophysiological levels, the processing of the location parameter in French Belgian SL to that of the rhyme in oral French. Ten hearing and 10 profoundly deaf adults performed a rhyme judgment task in OL and a similarity judgment on location in SL. Stimuli were pairs of pictures. As regards OL, deaf subjects' performances, although above chance level, were significantly lower than those of hearing subjects, suggesting that a metaphonological analysis is possible for deaf people but rests on phonological representations that are less precise than in hearing people. As regards SL, deaf subjects' scores indicated that a metaphonological judgment may be performed on location. The contingent negative variation (CNV) evoked by the first picture of a pair was similar in hearing subjects in OL and in deaf subjects in OL and SL. However, an N400 evoked by the second picture of the non-rhyming pairs was evidenced only in hearing subjects in OL. The absence of an N400 in deaf subjects may be interpreted as a failure to associate two words according to their rhyme in OL or to their location in SL. Although deaf participants can perform metaphonological judgments in OL, they differ from hearing participants both behaviorally and in ERPs. Judgment of location in SL is possible for deaf signers but, contrary to rhyme judgment in hearing participants, does not elicit any N400. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  10. Concrete Poetry as Sign of Technological Changes in Society

    DEFF Research Database (Denmark)

    Ørum, Tania

    2016-01-01

    This case deals with the large cultural perspectives and the technological imagination evident in the Swedish critic Torsten Ekbom's review of Danish concrete poetry.

  11. Sensing technology for damage assessment of sign supports and cantilever poles : final report, August 31, 2010.

    Science.gov (United States)

    2010-08-31

    This report presents the results of research activities conducted under Contract No. 519691-PIT 008 on Sensing Technology for Damage Assessment of Sign Supports and Cantilever Poles between the University of Pittsburgh and the Pennsylvania De...

  12. Preservice Teacher and Interpreter American Sign Language Abilities: Self-Evaluations and Evaluations of Deaf Students' Narrative Renditions

    Science.gov (United States)

    Beal-Alvarez, Jennifer S.; Scheetz, Nanci A.

    2015-01-01

    In deaf education, the sign language skills of teacher and interpreter candidates are infrequently assessed; when they are, formal measures are commonly used upon preparation program completion, as opposed to informal measures related to instructional tasks. Using an informal picture storybook task, the authors investigated the receptive and…

  13. The Effectiveness of the Game-Based Learning System for the Improvement of American Sign Language Using Kinect

    Science.gov (United States)

    Kamnardsiri, Teerawat; Hongsit, Ler-on; Khuwuthyakorn, Pattaraporn; Wongta, Noppon

    2017-01-01

    This paper investigated students' achievement in learning American Sign Language (ASL) using two different methods. There were two groups of samples. The first, experimental group (Group A) used game-based learning for ASL with Kinect. The second, control group (Group B) used the traditional face-to-face learning method, generally…

  14. Schooling in American Sign Language: A Paradigm Shift from a Deficit Model to a Bilingual Model in Deaf Education

    Science.gov (United States)

    Humphries, Tom

    2013-01-01

    Deaf people have long held the belief that American Sign Language (ASL) plays a significant role in the academic development of deaf children. Despite this, the education of deaf children has historically been exclusive of ASL and constructed as an English-only, deficit-based pedagogy. Newer research, however, finds a strong correlation between…

  15. Authentic Language Input Through Audiovisual Technology and Second Language Acquisition

    Directory of Open Access Journals (Sweden)

    Taher Bahrani

    2014-09-01

    Second language acquisition cannot take place without exposure to language input. With regard to this, the present research aimed at providing empirical evidence about low- and upper-intermediate language learners' preferred types of audiovisual programs and their language proficiency development outside the classroom. To this end, 60 language learners (30 low-level and 30 upper-intermediate) were asked to have exposure to their preferred types of audiovisual programs outside the classroom and keep a diary of the amount and type of exposure. The obtained data indicated that the low-level participants preferred cartoons and the upper-intermediate participants preferred news. To find out which group improved its language proficiency significantly, a post-test was administered. The results indicated that only the upper-intermediate language learners gained significant improvement. Based on the findings, the quality of the language input should be given priority over the amount of exposure.

  16. Providing Formative Feedback: Language Technologies for Lifelong Learning CONSPECT tool

    NARCIS (Netherlands)

    Berlanga, Adriana

    2011-01-01

    Berlanga, A. J. (2011). Providing Formative Feedback: Language Technologies for Lifelong Learning CONSPECT tool. Presentation given at the Onderwijslunch, University of Maastricht. January, 18, 2011, Maastricht, The Netherlands.

  17. Limits of visual communication: the effect of signal-to-noise ratio on the intelligibility of American Sign Language.

    Science.gov (United States)

    Pavel, M; Sperling, G; Riedl, T; Vanderbeek, A

    1987-12-01

    To determine the limits of human observers' ability to identify visually presented American Sign Language (ASL), the contrast s and the amount of additive noise n in dynamic ASL images were varied independently. Contrast was tested over a 4:1 range; the rms signal-to-noise ratios (s/n) investigated were s/n = 1/4, 1/2, 1, and infinity (which is used to designate the original, uncontaminated images). Fourteen deaf subjects were tested with an intelligibility test composed of 85 isolated ASL signs, each 2-3 sec in length. For these ASL signs (64 x 96 pixels, 30 frames/sec), subjects' performance asymptotes between s/n = 0.5 and 1.0; further increases in s/n do not improve intelligibility. Intelligibility was found to depend only on s/n and not on contrast. A formulation in terms of logistic functions was proposed to derive intelligibility of ASL signs from s/n, sign familiarity, and sign difficulty. Familiarity (ignorance) is represented by additive signal-correlated noise; it represents the likelihood of a subject's knowing a particular ASL sign, and it adds to s/n. Difficulty is represented by a multiplicative difficulty coefficient; it represents the perceptual vulnerability of an ASL sign to noise and it adds to log(s/n).
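
    The logistic formulation described in this abstract can be sketched as follows. The slope, midpoint, and parameter scales below are illustrative assumptions; the published fit values are not reproduced here.

```python
import math

def intelligibility(s_over_n, familiarity=0.0, difficulty=1.0,
                    slope=2.0, midpoint=0.0):
    """Probability of identifying a sign as a logistic function of
    log signal-to-noise ratio.

    familiarity models signal-correlated noise that adds to s/n;
    the multiplicative difficulty coefficient adds log(difficulty)
    to log(s/n). All parameter values are illustrative assumptions.
    """
    x = math.log(difficulty * (s_over_n + familiarity))
    return 1.0 / (1.0 + math.exp(-slope * (x - midpoint)))
```

    Because the predictor is log(s/n), the curve depends on signal-to-noise ratio but not on contrast alone, and it saturates for large s/n, mirroring the reported asymptote in identification performance.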

  18. Post-Secondary Foreign Language Teachers' Belief Systems about Language Teaching/Learning and Technology/Teaching with Technology

    Science.gov (United States)

    Oda, Kazue

    2011-01-01

    While many studies have demonstrated the advantages of using computer technology in foreign language classrooms, many post-secondary foreign language (FL) teachers still remain reluctant to use technology in instruction. Even when teachers do use technology, critiques have indicated that it is oftentimes used merely to replicate traditional…

  19. Evidence of an association between sign language phonological awareness and word reading in deaf and hard-of-hearing children.

    Science.gov (United States)

    Holmer, Emil; Heimann, Mikael; Rudner, Mary

    2016-01-01

    Children with good phonological awareness (PA) are often good word readers. Here, we asked whether Swedish deaf and hard-of-hearing (DHH) children who are more aware of the phonology of Swedish Sign Language, a language with no orthography, are better at reading words in Swedish. We developed the Cross-modal Phonological Awareness Test (C-PhAT) that can be used to assess PA in both Swedish Sign Language (C-PhAT-SSL) and Swedish (C-PhAT-Swed), and investigated how C-PhAT performance was related to word reading as well as linguistic and cognitive skills. We validated C-PhAT-Swed and administered C-PhAT-Swed and C-PhAT-SSL to DHH children who attended Swedish deaf schools with a bilingual curriculum and were at an early stage of reading. C-PhAT-SSL correlated significantly with word reading for DHH children. They performed poorly on C-PhAT-Swed and their scores did not correlate significantly either with C-PhAT-SSL or word reading, although they did correlate significantly with cognitive measures. These results provide preliminary evidence that DHH children with good sign language PA are better at reading words and show that measures of spoken language PA in DHH children may be confounded by individual differences in cognitive skills. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Informal Language Learning Setting: Technology or Social Interaction?

    Science.gov (United States)

    Bahrani, Taher; Sim, Tam Shu

    2012-01-01

    Based on the informal language learning theory, language learning can occur outside the classroom setting unconsciously and incidentally through interaction with the native speakers or exposure to authentic language input through technology. However, an EFL context lacks the social interaction which naturally occurs in an ESL context. To explore…

  1. New Technologies, Same Ideologies: Learning from Language Revitalization Online

    Science.gov (United States)

    Wagner, Irina

    2017-01-01

    Ease of access, production, and distribution have made online technologies popular in language revitalization. By incorporating multimodal resources, audio, video, and games, they attract indigenous communities undergoing language shift in hopes of its reversal. However, by merely expanding language revitalization to the web, many language…

  2. The Impact of Electronic Communication Technology on Written Language

    Science.gov (United States)

    Hamzah, Mohd. Sahandri Gani B.; Ghorbani, Mohd. Reza; Abdullah, Saifuddin Kumar B.

    2009-01-01

    Communication technology is changing things. Language is no exception. Some language researchers argue that language is deteriorating due to increased use in electronic communication. The present paper investigated 100 randomly selected electronic mails (e-mails) and 50 short messaging system (SMS) messages of a representative sample of…

  3. Language and technology literacy barriers to accessing government services

    CSIR Research Space (South Africa)

    Barnard, E

    2003-01-01

    Field experiments were conducted to gain an improved understanding of the extent to which citizens' exposure to technology and home language affect their ability to access electronic services. These experiments will influence technology development...

  4. A Critical Appraisal of Foreign Language Research in Content and Language Integrated Learning, Young Language Learners, and Technology-Enhanced Language Learning Published in Spain (2003-2012)

    Science.gov (United States)

    Dooly, Melinda; Masats, Dolors

    2015-01-01

    This state-of-the-art review provides a critical overview of research publications in Spain in the last ten years in three areas of teaching and learning foreign languages (especially English): content and language integrated learning (CLIL), young language learners (YLL), and technology-enhanced language learning (TELL). These three domains have…

  5. Hausa Language in Information and Communication Technology ...

    African Journals Online (AJOL)

    Basically, the main medium of expressing information and communication is language. Human beings are generally endowed with the most effective means of information and communication, i.e. language. The popular assumption is that language is simply communication with words, especially the human use of ...

  6. TECHNOLOGICAL APPROACH TO TEACHING FOREIGN LANGUAGES IN TECHNICAL UNIVERSITIES

    Directory of Open Access Journals (Sweden)

    Mariia Kuts

    2016-11-01

    Modern foreign language teaching is based on a humanistic paradigm. Researchers see the possibility of realizing such activity in the implementation of a technological approach in the educational process, and connect its optimal, high-quality realization with the introduction of educational technologies into the learning process. Current studies focus on the implementation of the technological approach in teaching foreign languages, which is thought to guarantee a minimal level of learning results. At the same time, some aspects remain incompletely studied, such as the content of pedagogical technologies, their conceptual and procedural characteristics, and approaches to their classification. In the article the essence of the technological approach is revealed, and communication-oriented technologies for teaching foreign languages in non-linguistic universities are concretized. An interpretation of the technological approach is given, and its characteristics and attributes in teaching foreign languages are identified. It is noted that the technological approach is a social-engineering ideology in didactics, according to which the teaching process is considered a completely designed process with strictly planned and fixed results (M. Klarin). The article emphasizes the feasibility and efficiency of the technological approach in teaching foreign languages and defines the degree of its integration into the educational process. Communication-oriented technologies, based on the communicative method of E. Passov, are identified as the most suitable. It is shown that communication-oriented technologies go beyond the conceptual idea of modelling real foreign-language communication in the teaching process, and that their procedural component and content are founded on certain principles. The most commonly used technologies of teaching foreign languages are classified as technologies of modernization and technologies of

  7. Switching Perspectives: From a Language Teacher to a Designer of Language Learning with New Technologies

    Science.gov (United States)

    Kuure, Leena; Molin-Juustila, Tonja; Keisanen, Tiina; Riekki, Maritta; Iivari, Netta; Kinnula, Marianne

    2016-01-01

    Despite abundant research on educational technology and strategic input in the field, various surveys have shown that (language) teachers do not seem to embrace in their teaching the full potential of information and communication technology available in our everyday life. Language students soon entering the professional field could accelerate the…

  8. Captioning and Indian Sign Language as Accessibility Tools in Universal Design

    Directory of Open Access Journals (Sweden)

    John Mathew Martin Poothullil

    2013-06-01

Full Text Available Universal Design in Media as a strategy to achieve accessibility in digital television started in Spain in 1997 with the digitalization of satellite platforms (MuTra, 2006). In India, a conscious effort toward a strategy for accessible media formats in digital television is yet to be made. Advertising in India is a billion-dollar industry (Adam Smith, 2008), and digital television provides a majority of the space for it. This study investigated the effects of advertisements in accessible format, through the use of captioning and Indian Sign Language (ISL), on hearing and deaf people. Deaf (the capital 'D' is used for the culturally Deaf) and hearing viewers watched two short recent advertisements with and without accessibility formats in a randomized order, and their reactions were recorded on a questionnaire developed for the purpose of the study. Eighty-four persons participated in this study, of whom 42 were deaf. Analysis of the data showed a difference in the effects of accessible and non-accessible advertisement formats on Deaf and hearing viewers. Accessible formats increased comprehension of the advertisement's message, and the use of ISL helped deaf persons understand concepts better. While hearing persons correlated captioning with listening to and understanding the concept of the advertisement, deaf persons correlated watching the ISL interpreter with understanding it. The placement of the ISL interpreter on the screen and the color of the fonts used for captioning were also covered by the study; however, their correlation with comprehension of the advertisement by hearing and deaf persons did not reach much significance.

  10. The cost and utilisation patterns of a pilot sign language interpreter service for primary health care services in South Africa.

    Science.gov (United States)

    Zulu, Tryphine; Heap, Marion; Sinanovic, Edina

    2017-01-01

The World Health Organisation estimates disabling hearing loss to be around 5.3%, while a study of hearing impairment and auditory pathology in Limpopo, South Africa found a prevalence of nearly 9%. Although Sign Language Interpreters (SLIs) ease the communication challenges in health care, they are unaffordable for many signing Deaf people and people with disabling hearing loss. Moreover, there are no legal provisions in place to ensure the provision of SLIs in the health sector in most countries, including South Africa. To advocate for funding of such initiatives, reliable cost estimates are essential, and such data is scarce. To bridge this gap, this study estimated the costs of providing such a service within a South African district health service, based on estimates obtained from a pilot project that initiated the first South African Sign Language Interpreter (SASLI) service in health care. The ingredients method was used to calculate the unit cost per SASLI-assisted visit from a provider perspective. The unit costs per SASLI-assisted visit were then used in estimating the costs of scaling up this service to the District Health Services. The average annual SASLI utilisation rate per person was calculated in Stata v.12 using the project's registry from 2008-2013. Sensitivity analyses were carried out to determine the effect of changing the discount rate and personnel costs. Average Sign Language Interpreter services' utilisation rates increased from 1.66 to 3.58 per person per year, with a median of 2 visits, from 2008 to 2013. The cost per visit was US$189.38 in 2013, whilst the estimated costs of scaling up this service ranged from US$14.2 million to US$76.5 million in the Cape Metropole District. These cost estimates represented 2.3%-12.2% of the budget for the Western Cape District Health Services for 2013. In the presence of Sign Language Interpreters, Deaf Sign Language users utilise health care services to a similar extent as the hearing population.
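The scale-up arithmetic behind such estimates is straightforward to sketch. The unit cost (US$189.38 per visit) and utilisation rate (3.58 visits per person per year) are the study's 2013 figures; the population and budget inputs below are hypothetical placeholders, since the abstract does not report the study's actual inputs.

```python
# Back-of-envelope version of the scale-up arithmetic. Only the first two
# figures come from the study; the last two are hypothetical placeholders.
unit_cost_usd = 189.38        # cost per SASLI-assisted visit, 2013 (from the study)
visits_per_person = 3.58      # average annual utilisation rate, 2013 (from the study)

signing_deaf_population = 21_000   # hypothetical
district_budget_usd = 620e6        # hypothetical district health budget

annual_cost = signing_deaf_population * visits_per_person * unit_cost_usd
share = annual_cost / district_budget_usd
print(f"annual cost: US${annual_cost / 1e6:.1f} million ({share:.1%} of budget)")
```

Under these assumed inputs the result lands near the low end of the study's reported US$14.2 million-US$76.5 million range; the width of that range presumably reflects different scale-up assumptions.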

  11. Localisation - When Language, Culture and Technology Join Forces

    Directory of Open Access Journals (Sweden)

    Jody Byrne

    2009-03-01

    Full Text Available When you switch on your computer and type up a letter, what language do you see? What about when you visit a website or play a computer game? Does your mobile phone speak your language? Chances are that each of these technological marvels of the modern age communicates with you in your own language. For many of us, this is so commonplace and seamless that we hardly give it a moment's thought but behind the scenes there is a whole industry dedicated to making sure that technology bridges the gap between language and culture without you even noticing.

  13. Access to New Zealand Sign Language interpreters and quality of life for the deaf: a pilot study.

    Science.gov (United States)

    Henning, Marcus A; Krägeloh, Christian U; Sameshima, Shizue; Shepherd, Daniel; Shepherd, Gregory; Billington, Rex

    2011-01-01

This paper aims to: (1) explore usage and accessibility of sign language interpreters, (2) appraise the levels of quality of life (QOL) of deaf adults residing in New Zealand, and (3) consider the impact of access to and usage of sign language interpreters on QOL. Sixty-eight deaf adults living in New Zealand participated in this study. Two questionnaires were employed: a 12-item instrument about access to and use of New Zealand Sign Language interpreters and the abbreviated version of the World Health Organization Quality of Life questionnaire (WHOQOL-BREF). The results showed that 39% of this sample felt they were unable to adequately access interpreting services. Moreover, this group scored significantly lower than a comparable hearing sample on all four WHOQOL-BREF domains. Finally, the findings revealed that access to good-quality interpreters was associated with access to health services, transport, engagement in leisure activities, gaining more information, mobility, and living in a healthy environment. These findings have consequences for policy makers and agencies interested in ensuring an equitable distribution of essential services for all groups within New Zealand, a distribution that inevitably has an impact on the health of the individual.

  14. Random Forest-Based Recognition of Isolated Sign Language Subwords Using Data from Accelerometers and Surface Electromyographic Sensors.

    Science.gov (United States)

    Su, Ruiliang; Chen, Xiang; Cao, Shuai; Zhang, Xu

    2016-01-14

Sign language recognition (SLR) has been widely used for communication amongst the hearing-impaired and non-verbal community. This paper proposes an accurate and robust SLR framework using an improved decision tree as the base classifier of random forests. This framework was used to recognize Chinese sign language subwords using recordings from a pair of portable devices, worn on both arms, consisting of accelerometers (ACC) and surface electromyography (sEMG) sensors. The experimental results demonstrated the validity of the proposed random forest-based method for recognition of Chinese sign language (CSL) subwords. With the proposed method, 98.25% average accuracy was obtained for the classification of a list of 121 frequently used CSL subwords. Moreover, the random forests method demonstrated a superior performance in resisting the impact of bad training samples. When the proportion of bad samples in the training set reached 50%, the recognition error rate of the random forest-based method was only 10.67%, while that of the single decision tree adopted in our previous work was almost 27.5%. Our study offers a practical way of realizing robust, wearable EMG-ACC-based SLR systems.
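The classification pipeline described above can be illustrated in miniature with scikit-learn. Everything below is a hedged sketch on synthetic data: the feature layout (3 ACC axes and 4 sEMG channels per arm, 4 summary statistics each), class count, and noise model are assumptions, not the paper's actual feature extraction.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for per-gesture feature vectors (layout is assumed,
# not taken from the paper): 2 arms x (3 ACC + 4 sEMG) x 4 statistics.
n_samples, n_features, n_subwords = 600, 2 * (3 + 4) * 4, 10
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_subwords, size=n_samples)
X += y[:, None] * 1.0  # separate the classes by shifting each class mean

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# A random forest: bagged decision trees with per-split feature subsampling.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"clean-label accuracy: {clf.score(X_te, y_te):.2f}")

# Corrupt 30% of the training labels ("bad samples"); majority voting over
# many decorrelated trees degrades more gracefully than a single tree would.
noisy = y_tr.copy()
flip = rng.random(len(noisy)) < 0.3
noisy[flip] = rng.integers(0, n_subwords, size=int(flip.sum()))
noisy_clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, noisy)
print(f"30% bad-label accuracy: {noisy_clf.score(X_te, y_te):.2f}")
```

The label-noise run mirrors the paper's robustness finding: an ensemble vote tolerates a substantial share of mislabelled training samples far better than any one of its trees.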

15. From community training to university training (and vice-versa): new sign language translator and interpreter profile in the brazilian context

    Directory of Open Access Journals (Sweden)

    Vanessa Regina de Oliveira Martins

    2015-12-01

Full Text Available This paper aims to discuss the new profile of sign language translators/interpreters that is taking shape in Brazil since the implementation of policies stimulating the training of these professionals. We qualitatively analyzed answers to a semi-open questionnaire given by undergraduate students of a BA course in translation and interpretation in Brazilian Sign Language/Portuguese. Our results show that those who seek this area are no longer, as they used to be, people who have some relation with the deaf community and/or need certification for their existing activity as sign language interpreters. Rather, the students' choice of the course in question had to do with their score in a unified selection system (SISU). This contrasts with the sign language interpreter profiles of the 1980s, 1990s and 2000s. As Brazilian Sign Language has become more popular, people seeking a university degree have started to see sign language translation/interpreting as an interesting career option. We therefore discuss the need to provide students who cannot yet sign with the pedagogical means to learn the language, which will promote the accessibility of Brazilian deaf communities.

  17. Early vocabulary development in deaf native signers: a British Sign Language adaptation of the communicative development inventories.

    Science.gov (United States)

    Woolfe, Tyron; Herman, Rosalind; Roy, Penny; Woll, Bencie

    2010-03-01

    There is a dearth of assessments of sign language development in young deaf children. This study gathered age-related scores from a sample of deaf native signing children using an adapted version of the MacArthur-Bates CDI (Fenson et al., 1994). Parental reports on children's receptive and expressive signing were collected longitudinally on 29 deaf native British Sign Language (BSL) users, aged 8-36 months, yielding 146 datasets. A smooth upward growth curve was obtained for early vocabulary development and percentile scores were derived. In the main, receptive scores were in advance of expressive scores. No gender bias was observed. Correlational analysis identified factors associated with vocabulary development, including parental education and mothers' training in BSL. Individual children's profiles showed a range of development and some evidence of a growth spurt. Clinical and research issues relating to the measure are discussed. The study has developed a valid, reliable measure of vocabulary development in BSL. Further research is needed to investigate the relationship between vocabulary acquisition in native and non-native signers.
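Deriving percentile scores from parental-report vocabulary counts, as the adapted CDI does, reduces to ranking a child's score against the normative sample for an age band. A minimal sketch with simulated counts follows; the distribution and sample size here are invented stand-ins, not the study's 146 longitudinal datasets.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for CDI-style parental reports: expressive sign
# vocabulary counts for children in one age band (invented data).
vocab_counts = rng.poisson(lam=40, size=30)

# Norm table: percentile cut-offs for this age band.
norms = {p: np.percentile(vocab_counts, p) for p in (10, 25, 50, 75, 90)}

# Percentile score for a newly assessed child: the share of the normative
# sample reporting a smaller vocabulary.
child_score = 55
centile = (vocab_counts < child_score).mean() * 100
print(norms)
print(f"child falls at roughly the {centile:.0f}th centile")
```

In practice such norms are computed per age band and smoothed across ages to obtain the growth curves the study describes.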

  18. Language Use along the Urban Street in Senegal: Perspectives from Proprietors of Commercial Signs

    Science.gov (United States)

    Shiohata, Mariko

    2012-01-01

    Senegal adopted French as the country's sole official language at the time of independence in 1960, since when the language has been used in administration and other formal domains. Similarly, French is employed throughout the formal education system as the language of instruction. Since the 1990s, however, government has mounted an ambitious…

19. Metaphors in Chilean Sign Language (Metáforas en Lengua de Señas Chilena)

    Directory of Open Access Journals (Sweden)

    Carolina Becerra

    2008-05-01

Full Text Available The present study examined the characteristics of Chilean deaf people's metaphoric language and its relevance for linguistic comprehension. The question is relevant given the scarcity of studies conducted, particularly at the national level. A qualitative study was developed on the basis of video analysis of Chilean deaf people's spontaneous sign language. A list of conceptual and non-conceptual metaphors in Chilean Sign Language was compiled, and comprehension of these metaphors was then evaluated in a group of deaf subjects educated using sign language communication. The results identify the existence of metaphors particular to deaf culture. These metaphors are coherent with the particular experiences of deaf subjects and do not necessarily agree with spoken language.

  20. Echolalia, Mitigation and Autism: Indicators from Child Characteristics for the Use of Sign Language and Other Augmentative Language Systems.

    Science.gov (United States)

    Bebko, James M.

    1990-01-01

    Review of literature on indicators of the effectiveness of language intervention programs for autistic children showed that mitigation in echolalia was a critical characteristic, as it implied that the prerequisites for language were accessible through speech. Children whose speech ranged from mutism to unmitigated echolalia had a more negative…

  1. Language learning and the technology of international communications

    Science.gov (United States)

    Batley, Edward

    1991-03-01

    The author posits a reciprocal relationship between the recent popularisation of computer-based technology and the democratisation of Central and Eastern Europe. Brief reference is made to their common denominator, language and language change. The advent of the communicative approach to language learning and the new wave of language authenticity arising from it, both enhanced by the technological revolution, have made the defining of acceptability in the classroom and of communication in the process of testing more problematic than ever, although several advantages have also accrued. Advances in technology have generally outstripped our ability to apply their full or characteristic potential. While technology can personalise learning and in this way make learning more efficient, it can also impede motivation. Old methods, drills and routines are tending to be sustained by it. Lack of technology can also widen the gulf between developed, developing and underdeveloped countries of the world. The author proposes international partnerships as a means of preventing an imbalance which could threaten stability. Single language dominance is another threat to international understanding, given the growing awareness of our multilingual and multicultural environment. Enlightened language policies reaching from the individual to beyond the national community are needed, which adopt these aspects of language learning, explain decisions about the state's choice of languages and, at the same time, promote individual choice wherever practicable.

  2. South African human language technology audit

    CSIR Research Space (South Africa)

    Grover, AS

    2011-06-01

Full Text Available An audit was conducted of the South African HLT landscape to create a systematic and detailed inventory of the status of HLT components across the eleven official languages. Based on the Basic Language Resource Kit (BLARK) framework (Krauwer, 1998), we used various...

  3. Computer-Assisted Foreign Language Teaching and Learning: Technological Advances

    Science.gov (United States)

    Zou, Bin; Xing, Minjie; Wang, Yuping; Sun, Mingyu; Xiang, Catherine H.

    2013-01-01

    Computer-Assisted Foreign Language Teaching and Learning: Technological Advances highlights new research and an original framework that brings together foreign language teaching, experiments and testing practices that utilize the most recent and widely used e-learning resources. This comprehensive collection of research will offer linguistic…

  4. Integrating Technology Tools for Students Struggling with Written Language

    Science.gov (United States)

    Fedora, Pledger

    2015-01-01

    This exploratory study was designed to assess the experience of preservice teachers when integrating written language technology and their likelihood of applying that technology in their future classrooms. Results suggest that after experiencing technology integration, preservice teachers are more likely to use it in their future teaching.

  5. Emerging Technologies in Adult Literacy and Language Education

    Science.gov (United States)

    Warschauer, Mark; Liaw, Meei-Ling

    2010-01-01

    Although information and communication technologies have become an integral part of life in the United States, they have not yet been adequately integrated into adult language and literacy programs. This raises concerns because of the potential value of technology for enhancing learning and because of the vital role of technological proficiency as…

6. Journal of Language, Technology & Entrepreneurship in Africa

    African Journals Online (AJOL)

    Unlike Uganda which already had kingdoms and social stratifications, the ..... Kiswahili is taught as a subject all the way from elementary school to the university level. .... of the legal systems as well as the language of media and entertainment.

7. Advanced Recyclable Media System®. Innovative technology summary report

    International Nuclear Information System (INIS)

    1998-12-01

The objective of the Large-Scale Demonstration Project (LSDP) is to select and demonstrate potentially beneficial technologies at the Argonne National Laboratory East's (ANL) Chicago Pile-5 (CP-5) Research Reactor. The purpose of the LSDP is to demonstrate that using innovative and improved deactivation and decommissioning (D and D) technologies from various sources can result in significant benefits, such as decreased cost and increased health and safety, as compared with baseline D and D technologies. This report describes a demonstration of the Advanced Recyclable Media System® technology, which was employed by Surface Technology Systems, Inc. to remove coatings from a concrete floor. This demonstration is part of the CP-5 LSDP sponsored by the US Department of Energy (DOE) Office of Science and Technology Deactivation and Decommissioning Focus Area (DDFA). The Advanced Recyclable Media System® (ARMS) technology is an open-blast technology which uses a soft recyclable media. The patented ARMS Engineered Blast Media consists of a fiber-reinforced polymer matrix which can be manufactured in various grades of abrasiveness. The fiber media can be remade and/or reused up to 20 times and can clean almost any surface (e.g., metal, wood, concrete, lead) and geometry, including corners and the inside of air ducts.

  8. Sign language teaching and assessment in higher education : Didactic use and effectiveness of the CEFR

    NARCIS (Netherlands)

    Eveline Boers-Visker; Annemiek Hammer; Dr. Beppie van den Bogaerde

    2015-01-01

    Introduction The CEFR offers a framework for language teaching, learning and assessment for L2 learners. Importantly, the CEFR draws on a learner’s communicative language competence rather than linguistic competence (e.g. vocabulary, grammar). As such, the implementation of the CEFR in our four

  9. Natural language processing in psychiatry. Artificial intelligence technology and psychopathology.

    Science.gov (United States)

    Garfield, D A; Rapp, C; Evens, M

    1992-04-01

    The potential benefit of artificial intelligence (AI) technology as a tool of psychiatry has not been well defined. In this essay, the technology of natural language processing and its position with regard to the two main schools of AI is clearly outlined. Past experiments utilizing AI techniques in understanding psychopathology are reviewed. Natural language processing can automate the analysis of transcripts and can be used in modeling theories of language comprehension. In these ways, it can serve as a tool in testing psychological theories of psychopathology and can be used as an effective tool in empirical research on verbal behavior in psychopathology.
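As a toy illustration of what automating the analysis of transcripts involves (this is not the system from the 1992 paper, just one classic verbal-behavior measure such tools can compute):

```python
import re
from collections import Counter

# Token frequencies and type-token ratio over a (fabricated) transcript:
# simple, automatable measures of verbal behavior.
transcript = (
    "I went to the store. The store was closed. "
    "I went home and I stayed home."
)
tokens = re.findall(r"[a-z']+", transcript.lower())
freq = Counter(tokens)
type_token_ratio = len(freq) / len(tokens)

print(freq.most_common(3))
print(f"type-token ratio: {type_token_ratio:.2f}")
```

A low type-token ratio, for instance, can flag the kind of lexical repetitiveness that clinical raters would otherwise have to count by hand.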

  10. The Future of Foreign Language Instructional Technology: BYOD MALL

    Directory of Open Access Journals (Sweden)

    Jack Burston

    2016-05-01

Full Text Available This paper describes trends in instructional technology that are influencing foreign language teaching today and that can be expected to increasingly do so in the future. Though already an integral part of foreign language instruction, digital technology is bound to play an increasing role in language teaching in the coming years. The greatest stimulus for this will undoubtedly be the accessibility of Mobile-Assisted Language Learning (MALL), made possible through the exploitation of mobile devices owned by students themselves. The ubiquitous ownership of smartphones and tablet computers among adolescents and adults now makes a Bring Your Own Device (BYOD) approach a feasible alternative to desktop computer labs. Making this work, however, especially in a financially and technologically restricted environment, presents a number of challenges which are the focus of this paper.

  11. Benefits of augmentative signs in word learning: Evidence from children who are deaf/hard of hearing and children with specific language impairment.

    Science.gov (United States)

    van Berkel-van Hoof, Lian; Hermans, Daan; Knoors, Harry; Verhoeven, Ludo

    2016-12-01

Augmentative signs may facilitate word learning in children with vocabulary difficulties, for example, children who are Deaf/Hard of Hearing (DHH) and children with Specific Language Impairment (SLI). Although augmentative signs have been claimed to aid second language learning even in populations with typical language development, empirical evidence in favor of this claim is lacking. We aim to investigate whether augmentative signs facilitate word learning for DHH children, children with SLI, and typically developing (TD) children. Whereas previous studies taught children new labels for familiar objects, the present study taught new labels for new objects. In our word learning experiment children were presented with pictures of imaginary creatures and pseudo words. Half of the words were accompanied by an augmentative pseudo sign. The children were tested for their receptive word knowledge. The DHH children benefitted significantly from augmentative signs, but the children with SLI and TD age-matched peers did not score significantly differently on words from either the sign or no-sign condition. These results suggest that using Sign-Supported Speech in classrooms of bimodal bilingual DHH children may support their spoken language development. The difference between earlier research findings and the present results may be caused by a difference in methodology. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Predicate Structures, Gesture, and Simultaneity in the Representation of Action in British Sign Language: Evidence From Deaf Children and Adults

    Science.gov (United States)

    Cormier, Kearsy

    2013-01-01

    British Sign Language (BSL) signers use a variety of structures, such as constructed action (CA), depicting constructions (DCs), or lexical verbs, to represent action and other verbal meanings. This study examines the use of these verbal predicate structures and their gestural counterparts, both separately and simultaneously, in narratives by deaf children with various levels of exposure to BSL (ages 5;1 to 7;5) and deaf adult native BSL signers. Results reveal that all groups used the same types of predicative structures, including children with minimal BSL exposure. However, adults used CA, DCs, and/or lexical signs simultaneously more frequently than children. These results suggest that simultaneous use of CA with lexical and depicting predicates is more complex than the use of these predicate structures alone and thus may take deaf children more time to master. PMID:23670881

  13. Gender Perspectives in Language | Nelson | Science, Technology ...

    African Journals Online (AJOL)

Gender is multi-faceted, always changing, and often contested. It is so embedded in our institutions, our actions, our beliefs, and our desires that it appears to us to be completely natural. Gender is, after all, a system of meaning -- a way of construing notions of male and female -- and language is the primary means through ...

  14. Evaluating the Phonology of Nicaraguan Sign Language: Preprimer and Primer Dolch Words

    Science.gov (United States)

    Delkamiller, Julie

    2013-01-01

    Over the past 30-years linguists have been witnessing the birth and evolution of a language, Idioma de Señas de Nicaragua (ISN), in Nicaragua, and have initiated and documented the syntax and grammar of this new language. Research is only beginning to emerge on the implications of ISN on the education of deaf/hard of hearing children in Nicaragua.…

  15. Creating Learning Objects to Enhance the Educational Experiences of American Sign Language Learners: An Instructional Development Report

    Directory of Open Access Journals (Sweden)

    Simone Conceição

    2002-10-01

Full Text Available Little attention has been given to involving the deaf community in distance teaching and learning or in designing courses that relate to their language and culture. This article reports on the design and development of video-based learning objects created to enhance the educational experiences of hearing participants learning American Sign Language (ASL) in a distance learning course and, following the course, the creation of several new applications for use of the learning objects. The learning objects were initially created for the web, as a course component for review and rehearsal. The value of the web application, as reported by course participants, led us to consider ways in which the learning objects could be used in a variety of delivery formats: CD-ROM, web-based knowledge repository, and handheld device. The process to create the learning objects, the new applications, and lessons learned are described.

  16. Signing of a collaboration agreement between Norwegian University of Science and Technology (NTNU) and CERN

    CERN Multimedia

    AUTHOR|(CDS)2099575

    2017-01-01

    On 19 October 2017, Pro-Rector for Innovation Toril A. Nagelhus Hernes, NTNU, and Director for Accelerators and Technology Frédérick Bordry, CERN, signed a collaboration agreement between their respective institutions. NTNU and CERN have already worked closely together for many years. With this agreement in place, the collaboration and exchange between the two institutions is expected to become even closer.

  17. Achievement, Language, and Technology Use Among College-Bound Deaf Learners.

    Science.gov (United States)

    Crowe, Kathryn; Marschark, Marc; Dammeyer, Jesper; Lehane, Christine

    2017-10-01

    Deaf learners are a highly heterogeneous group who demonstrate varied levels of academic achievement and attainment. Most prior research involving this population has focused on factors facilitating academic success in young deaf children, with less attention paid to older learners. Recent studies, however, have suggested that while factors such as early cochlear implantation and early sign language fluency are positively associated with academic achievement in younger deaf children, they no longer predict achievement once children reach high school age. This study, involving data from 980 college-bound high school students with hearing loss, examined relations between academic achievement, communication variables (audiological, language), and use of assistive technologies (e.g., cochlear implants [CIs], FM systems) and other support services (e.g., interpreting, real-time text) in the classroom. Spoken language skills were positively related to achievement in some domains, while better sign language skills were related to poorer achievement in others. Among these college-bound students, use of CIs and academic support services in high school accounted for little variability in their college entrance examination scores.

  18. The signer and the sign: cortical correlates of person identity and language processing from point-light displays.

    Science.gov (United States)

    Campbell, Ruth; Capek, Cheryl M; Gazarian, Karine; MacSweeney, Mairéad; Woll, Bencie; David, Anthony S; McGuire, Philip K; Brammer, Michael J

    2011-09-01

    In this study, the first to explore the cortical correlates of signed language (SL) processing under point-light display conditions, the observer identified either a signer or a lexical sign from a display in which different signers were seen producing a number of different individual signs. Many of the regions activated by point-light displays under these conditions replicated those previously reported for full-image displays, including regions within the inferior temporal cortex that are specialised for face and body-part identification, although such body parts were invisible in the display. Right frontal regions were also recruited, a pattern not usually seen in full-image SL processing. This activation may reflect the recruitment of information about person identity from the reduced display. A direct comparison of the identify-signer and identify-sign conditions showed that these tasks relied to different extents on posterior inferior regions. Signer identification elicited greater activation than sign identification in (bilateral) inferior temporal gyri (BA 37/19), fusiform gyri (BA 37), middle and posterior portions of the middle temporal gyri (BAs 37 and 19), and superior temporal gyri (BA 22 and 42). Right inferior frontal cortex was a further focus of differential activation (signer>sign). These findings suggest that the neural systems supporting point-light displays for the processing of SL rely on a cortical network including areas of the inferior temporal cortex specialized for face and body identification. While this might be predicted from other studies of whole-body point-light actions (Vaina, Solomon, Chowdhury, Sinha, & Belliveau, 2001), it is not predicted from the perspective of spoken language processing, where voice characteristics and speech content recruit distinct cortical regions (Stevens, 2004) in addition to a common network. In this respect, our findings contrast with studies of voice/speech recognition (Von Kriegstein, Kleinschmidt, Sterzer

  19. Technology-enhanced shared reading with deaf and hard-of-hearing children: the role of a fluent signing narrator.

    Science.gov (United States)

    Mueller, Vannesa; Hurtig, Richard

    2010-01-01

    Early shared reading experiences have been shown to benefit normally hearing children. It has been hypothesized that hearing parents of deaf or hard-of-hearing children may be uncomfortable with, or may lack adequate skills for, shared reading activities. A factor that may contribute to the widely cited reading difficulties seen in the majority of deaf children is a lack of the early linguistic and literacy exposure that comes from shared reading with an adult who is competent in the language of the child. A single-subject-design research study is described, which uses technology along with parent training in an attempt to enhance shared reading experiences in this population of children. The results indicate that the technology-enhanced shared reading led to more time spent in shared reading activities and to sign vocabulary acquisition. In addition, analysis of the shared reading identified the specific aspects of the technology and the components of the parent training that were used most often.

  20. Technology assisted speech and language therapy.

    Science.gov (United States)

    Glykas, Michael; Chytas, Panagiotis

    2004-06-30

    Speech and language therapists (SLTs) are faced daily with a diversity of speech and language disabilities, which are associated with a variety of conditions ranging from client groups with overall cognitive deficits to those with more specific difficulties. It is desirable that those working with such a range of problems, and with such a demanding workload, plan care efficiently. Therefore, the introduction of methodologies, reference models of work, and tools that significantly improve the effectiveness of therapy is particularly welcome. This paper describes the first web-based tool for diagnosis, treatment and e-learning in the field of language and speech therapy. The system allows SLTs to find the optimum treatment for each patient; it also allows any non-specialist user (SLT, patient, or helper such as a relative) to explore their creativity by designing their own communication aid interactively, using configuration and vocabulary editors. The system has been tested and piloted by potential users in Greece and the UK.

  1. Quantifying the effect of disruptions to temporal coherence on the intelligibility of compressed American Sign Language video

    Science.gov (United States)

    Ciaramello, Frank M.; Hemami, Sheila S.

    2009-02-01

    Communication of American Sign Language (ASL) over mobile phones would be very beneficial to the Deaf community. ASL video encoded at the rates provided by current cellular networks must be heavily compressed, and appropriate assessment techniques are required to analyze the intelligibility of the compressed video. As an extension to a purely spatial measure of intelligibility, this paper quantifies the effect of temporal compression artifacts on sign language intelligibility. These artifacts can be the result of motion-compensation errors that distract the observer or of frame rate reductions. They reduce the perception of smooth motion and disrupt the temporal coherence of the video. Motion-compensation errors that affect temporal coherence are identified by measuring the block-level correlation between co-located macroblocks in adjacent frames. The impact of frame rate reductions was quantified through experimental testing. A subjective study was performed in which fluent ASL participants rated the intelligibility of sequences encoded at 5 different frame rates and with 3 different levels of distortion. The subjective data is used to parameterize an objective intelligibility measure which is highly correlated with subjective ratings at multiple frame rates.
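    The block-level coherence measure described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the 16x16 macroblock size are assumptions.

```python
import numpy as np

def block_correlation(prev_frame, curr_frame, block=16):
    """Pearson correlation of each co-located macroblock in adjacent frames.

    A low score for a block flags a likely temporal-coherence disruption,
    e.g. a motion-compensation error between the two frames.
    """
    h, w = prev_frame.shape
    scores = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            a = prev_frame[y:y + block, x:x + block].ravel().astype(float)
            b = curr_frame[y:y + block, x:x + block].ravel().astype(float)
            a -= a.mean()
            b -= b.mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            # Constant blocks carry no texture; treat them as coherent.
            scores.append(a @ b / denom if denom else 1.0)
    return np.array(scores)
```

    A sharp drop in a block's correlation between adjacent frames would mark the kind of localized temporal artifact the paper associates with reduced intelligibility.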

  2. The neural correlates of highly iconic structures and topographic discourse in French Sign Language as observed in six hearing native signers.

    Science.gov (United States)

    Courtin, C; Hervé, P-Y; Petit, L; Zago, L; Vigneau, M; Beaucousin, V; Jobard, G; Mazoyer, B; Mellet, E; Tzourio-Mazoyer, N

    2010-09-01

    "Highly iconic" structures in Sign Language enable a narrator to act, switch characters, describe objects, or report actions in four-dimensions. This group of linguistic structures has no real spoken-language equivalent. Topographical descriptions are also achieved in a sign-language specific manner via the use of signing-space and spatial-classifier signs. We used functional magnetic resonance imaging (fMRI) to compare the neural correlates of topographic discourse and highly iconic structures in French Sign Language (LSF) in six hearing native signers, children of deaf adults (CODAs), and six LSF-naïve monolinguals. LSF materials consisted of videos of a lecture excerpt signed without spatially organized discourse or highly iconic structures (Lect LSF), a tale signed using highly iconic structures (Tale LSF), and a topographical description using a diagrammatic format and spatial-classifier signs (Topo LSF). We also presented texts in spoken French (Lect French, Tale French, Topo French) to all participants. With both languages, the Topo texts activated several different regions that are involved in mental navigation and spatial working memory. No specific correlate of LSF spatial discourse was evidenced. The same regions were more activated during Tale LSF than Lect LSF in CODAs, but not in monolinguals, in line with the presence of signing-space structure in both conditions. Motion processing areas and parts of the fusiform gyrus and precuneus were more active during Tale LSF in CODAs; no such effect was observed with French or in LSF-naïve monolinguals. These effects may be associated with perspective-taking and acting during personal transfers. 2010 Elsevier Inc. All rights reserved.

  3. Sign Language and the Learning of Swedish by Deaf Children (Project TSD).

    Science.gov (United States)

    Jansson, Karin, Ed.

    1982-01-01

    A project in Sweden focuses on the early linguistic development of preschool deaf children in families where the parents are also deaf. The School for the Deaf in Sweden is involved with describing the Swedish language as it appears to a deaf learner, a description to be used as a basis for teacher training and inservice in the teaching of the…

  4. Do You See the Signs? Evaluating Language, Branding, and Design in a Library Signage Audit

    Science.gov (United States)

    Stempler, Amy F.; Polger, Mark Aaron

    2013-01-01

    Signage represents more than directions or policies; it is informational, promotional, and sets the tone of the environment. To be effective, signage must be consistent, concise, and free of jargon and punitive language. An efficient assessment of signage should include a complete inventory of existing signage, including an analysis of the types…

  5. Semiotics of Art: Language of Architecture as a Complex System of Signs

    Science.gov (United States)

    Lazutina, Tatyana V.; Pupysheva, Irina N.; Shcherbinin, Mikhail N.; Baksheev, Vladimir N.; Patrakova, Galina V.

    2016-01-01

    This article examines art in its semiotic aspect. The aim of the research is to identify the specificity of the language of architecture as a special form of symbolic art, understood as the process of granting symbolic value to aesthetic phenomena shaped by the cultural and historical context, allowing the transmission of the values represented at the level of…

  6. Mobile technologies in teaching a foreign language to non-linguistic major students

    OpenAIRE

    KAPRANCHIKOVA KSENIYA

    2014-01-01

    The paper addresses the methodological potential of mobile technologies in teaching a foreign language to non-linguistic students. The author (a) defines the term "mobile education"; (b) suggests a list of mobile technologies used in foreign language teaching; and (c) develops a list of non-linguistic major students' language abilities and skills that can be developed via mobile technologies.

  7. The English-Language and Reading Achievement of a Cohort of Deaf Students Speaking and Signing Standard English: A Preliminary Study.

    Science.gov (United States)

    Nielsen, Diane Corcoran; Luetke, Barbara; McLean, Meigan; Stryker, Deborah

    2016-01-01

    Research suggests that English-language proficiency is critical if students who are deaf or hard of hearing (D/HH) are to read as their hearing peers. One explanation for the traditionally reported reading achievement plateau when students are D/HH is the inability to hear insalient English morphology. Signing Exact English can provide visual access to these features. The authors investigated the English morphological and syntactic abilities and reading achievement of elementary and middle school students at a school using simultaneously spoken and signed Standard American English facilitated by intentional listening, speech, and language strategies. A developmental trend (and no plateau) in language and reading achievement was detected; most participants demonstrated average or above-average English. Morphological awareness was prerequisite to high test scores; speech was not significantly correlated with achievement; language proficiency, measured by the Clinical Evaluation of Language Fundamentals-4 (Semel, Wiig, & Secord, 2003), predicted reading achievement.

  8. Feasibility of integrating other federal information systems into the Global Network of Environment and Technology, GNET{reg_sign}

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-05-01

    The Global Environment and Technology Enterprise (GETE) of the Global Environment and Technology Foundation (GETF) has been tasked by the US Department of Energy's (DOE) Federal Energy Technology Center (FETC) to assist in reducing DOE's cost for the Global Network of Environment and Technology (GNET{reg_sign}). As part of this task, GETE is seeking federal partners to invest in GNET{reg_sign}. The authors are also seeking FETC's commitment to serve as GNET's federal agency champion, promoting the system to potential agency partners. This report assesses the benefits of partnering with GNET{reg_sign} and provides recommendations for identifying and integrating other federally funded (non-DOE) environmental information management systems into GNET{reg_sign}.

  9. Technology-Based Literacy Instruction for English Language Learners

    Science.gov (United States)

    White, Erin L.; Gillard, Sharlett

    2011-01-01

    There is a growing need to implement an alternative and viable solution in U.S. K-12 schools that will address the ever-growing gap that the rapidly growing English language learner (ELL) population presents. This article examines various technology-based solutions, and their potential impact. The systematic implementation of these…

  10. Developing Course Materials for Technology-Mediated Chinese Language Learning

    Science.gov (United States)

    Kubler, Cornelius C.

    2018-01-01

    This article discusses principles involved in developing course materials for technology-mediated Chinese language learning, with examples from a new course designed to take into account the needs of distance and independent learners. Which learning environment is most efficient for a given learning activity needs to be carefully considered. It…

  11. Journal of Language, Technology & Entrepreneurship in Africa - Vol ...

    African Journals Online (AJOL)

    INCORPORATING DIGITAL TECHNOLOGY IN THE TEACHING AND LEARNING OF FRENCH AS A FOREIGN LANGUAGE (FFL) IN TECHNICAL UNIVERSITY OF KENYA. Teresa Atieno Otieno, 1-11 ...

  12. Rosebud SynCoal Partnership, SynCoal{reg_sign} demonstration technology update

    Energy Technology Data Exchange (ETDEWEB)

    Sheldon, R.W. [Rosebud SynCoal Partnership, Billings, MT (United States)

    1997-12-31

    An Advanced Coal Conversion Process (ACCP) technology being demonstrated in eastern Montana (USA) at the heart of one of the world's largest coal deposits is providing evidence that the molecular structure of low-rank coals can be altered successfully to produce a unique product for a variety of utility and industrial applications. The product is called SynCoal{reg_sign} and the process has been developed by the Rosebud SynCoal Partnership (RSCP) through the US Department of Energy's multi-million dollar Clean Coal Technology Program. The ACCP demonstration process uses low-pressure, superheated gases to process coal in vibrating fluidized beds. Two vibratory fluidized processing stages are used to heat and convert the coal. This is followed by a water spray quench and a vibratory fluidized stage to cool the coal. Pneumatic separators remove the solid impurities from the dried coal. There are three major steps to the SynCoal{reg_sign} process: (1) thermal treatment of the coal in an inert atmosphere, (2) inert gas cooling of the hot coal, and (3) removal of ash minerals. When operated continuously, the demonstration plant produces over 1,000 tons per day (up to 300,000 tons per year) of SynCoal{reg_sign} with a 2% moisture content, approximately 11,800 Btu/lb and less than 1.0 pound of SO{sub 2} per million Btu. This product is obtained from Rosebud Mine sub-bituminous coal which starts with 25% moisture, 8,600 Btu/lb and approximately 1.6 pounds of SO{sub 2} per million Btu.

  13. Educational technology "Anatomy and Vital Signs": Evaluation study of content, appearance and usability.

    Science.gov (United States)

    de Góes, Fernanda dos Santos Nogueira; Fonseca, Luciana Mara Monti; de Camargo, Rosangela Andrade Aukar; de Oliveira, Gustavo Faria; Felipe, Helena Reche

    2015-11-01

    The use of new technology has grown considerably in recent years as an increasing number of college students use the Internet. In nursing education, the personal computer and the Internet facilitate the teaching of theoretical and practical knowledge. The objective was to evaluate an educational technology known as "Anatomy and Vital Signs" with respect to content, appearance and usability. This was a first-stage evaluation by specialists to verify content and functioning, prior to a second validation of learning by students. In this methodological study, instructional technologists (11 participants) and nursing specialists (17 participants) used the technology in an unguided manner and completed three questionnaires. The evaluation was measured by the difference between disagreement and agreement for each statement in the questionnaires. Most of the items were positively evaluated at a level higher than 70% by most of the evaluators, except for the following usability criteria: grouping by shape, minimum actions and user control, which did not attain the 70% agreement level among instructional technologists. The evaluation was useful for improving the technology and ensuring a suitable product for nursing education. It may be a reliable educational tool for nursing education that applies technological resources.
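    The 70% agreement criterion used in this evaluation can be illustrated with a small tally. The function names and the ratings below are hypothetical, not taken from the study:

```python
def agreement_level(ratings):
    """Percentage of evaluators agreeing with a statement.

    ratings: list of booleans, True meaning the evaluator agreed.
    """
    return 100.0 * sum(ratings) / len(ratings)

def flag_criteria(results, threshold=70.0):
    """Return the criteria whose agreement falls below the threshold."""
    return [name for name, ratings in results.items()
            if agreement_level(ratings) < threshold]
```

    With illustrative ratings from the 11 instructional technologists, a criterion agreed with by only 6 of 11 evaluators (about 55%) would be flagged, mirroring how items such as "grouping by shape" fell short of the 70% level.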

  14. Sign-Supported English: is it effective at teaching vocabulary to young children with English as an Additional Language?

    Science.gov (United States)

    Marshall, Chloë R; Hobsbaum, Angela

    2015-01-01

    Children who are learning English as an Additional Language (EAL) may start school with smaller vocabularies than their monolingual peers. Given the links between vocabulary and academic achievement, it is important to evaluate interventions that are designed to support vocabulary learning in this group of children. To evaluate an intervention, namely Sign-Supported English (SSE), which uses conventionalized manual gestures alongside spoken words to support the learning of English vocabulary by children with EAL. Specifically, the paper investigates whether SSE has a positive impact on Reception class children's vocabulary development over and above English-only input, as measured over a 6-month period. A total of 104 children aged 4-5 years were recruited from two neighbouring schools in a borough of Outer London. A subset of 66 had EAL. In one school, the teachers used SSE, and in the other school they did not. Pupils in each school were tested at two time points (the beginning of terms 1 and 3) using three different assessments of vocabulary. Classroom-based observations of the teachers' and pupils' manual communication were also carried out. Results of the vocabulary assessments revealed that using SSE had no effect on how well children with EAL learnt English vocabulary: EAL pupils from the SSE school did not learn more words than EAL pupils at the comparison school. SSE was used in almost half of the teachers' observations in the SSE school, while spontaneous gestures were used with similar frequency by teachers in the comparison school. There are alternative explanations for the results. The first is that the use of signs alongside spoken English does not help EAL children of this age to learn words. 
Alternatively, SSE does have an effect, but we were unable to detect it because (1) teachers in the comparison school used very rich natural gesture and/or (2) teachers in the SSE school did not know enough BSL and this inhibited their use of spontaneous gesture

  15. Impacts of Online Technology Use in Second Language Writing: A Review of the Literature

    Science.gov (United States)

    Lin, Show Mei; Griffith, Priscilla

    2014-01-01

    This article reviews the literature on computer-supported collaborative learning in second language and foreign language writing. While research has been conducted on the effects of online technology in first language reading and writing, this article explores how online technology affects second and foreign language writing. The goal of this…

  16. Znaky pro barvy v českém znakovém jazyce a jejich etymologie / Colour Terms in the Czech Sign Language and Their Etymology

    Directory of Open Access Journals (Sweden)

    Lenka Okrouhlíková

    2016-06-01

    The text deals with the signs of the Czech Sign Language for the basic colours -- white, black, red, green, yellow, blue, brown and grey -- from a diachronic point of view. On the basis of historical written descriptions of these signs from 1834–1907, the motivation of the signs is analysed (the signs were derived from a typical object of the particular colour), along with their slow lexicalization and their form (especially the components of the signs: the place of articulation, handshape and movement). At the same time, the historical signs are compared to the current signs, and the text analyses trends in the changes of the phonological/morphological structure of the signs (changes in the place of articulation, moving down from the centre of the face to its periphery; shortening of the movement; changes in handshape; etc.). In addition, the text examines the possible relationship of these signs with the signs for colours in Austrian, German and French Sign Language (the languages that had been used in deaf education at the end of the 18th and 19th centuries), according to preserved records. For the historical signs, motivation and form were compared, along with a detailed look at the contemporary signs. This is the first look at the Czech Sign Language from the etymological point of view.

  17. A Novel Phonology- and Radical-Coded Chinese Sign Language Recognition Framework Using Accelerometer and Surface Electromyography Sensors.

    Science.gov (United States)

    Cheng, Juan; Chen, Xun; Liu, Aiping; Peng, Hu

    2015-09-15

    Sign language recognition (SLR) is an important communication tool between the deaf and the external world. It is highly necessary to develop a worldwide continuous and large-vocabulary-scale SLR system for practical usage. In this paper, we propose a novel phonology- and radical-coded Chinese SLR framework to demonstrate the feasibility of continuous SLR using accelerometer (ACC) and surface electromyography (sEMG) sensors. The continuous Chinese characters, consisting of coded sign gestures, are first segmented into active segments using EMG signals by means of a moving average algorithm. Then, features of each component are extracted from both ACC and sEMG signals of active segments (i.e., palm orientation represented by the mean and variance of ACC signals, hand movement represented by the fixed-point ACC sequence, and hand shape represented by both the mean absolute value (MAV) and autoregressive model coefficients (ARs)). Afterwards, palm orientation is first classified, distinguishing "Palm Downward" sign gestures from "Palm Inward" ones. Only the "Palm Inward" gestures are sent for further hand movement and hand shape recognition by the dynamic time warping (DTW) algorithm and hidden Markov models (HMM), respectively. Finally, component recognition results are integrated to identify one certain coded gesture. Experimental results demonstrate that the proposed SLR framework with a vocabulary scale of 223 characters can achieve an averaged recognition accuracy of 96.01% ± 0.83% for coded gesture recognition tasks and 92.73% ± 1.47% for character recognition tasks. Besides, it demonstrates that sEMG signals are rather consistent for a given hand shape independent of hand movements. Hence, the number of training samples will not be significantly increased when the vocabulary scale increases, since not only the number of the completely new proposed coded gestures is constant and limited, but also the transition movement which connects successive signs needs no
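    The hand-movement matching step names dynamic time warping (DTW). A textbook version of that algorithm, not the paper's exact implementation, might look like this; applying it to fixed-point ACC sequences is the assumed use:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping between two 1-D feature sequences.

    Fills the cumulative-cost matrix D so that D[i, j] is the cheapest
    alignment of a[:i] with b[:j]; the bottom-right cell is the distance.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Allow a step, an insertion, or a deletion in the alignment.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

    Because DTW warps the time axis, a gesture signed slightly faster or slower than its template still matches with low cost, which is why it suits the variable-speed hand-movement trajectories described above.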

  18. Self-Regulated Out-of-Class Language Learning with Technology

    Science.gov (United States)

    Lai, Chun; Gu, Mingyue

    2011-01-01

    Current computer-assisted language learning (CALL) research has identified various potentials of technology for language learning. To realize and maximize these potentials, engaging students in self-initiated use of technology for language learning is a must. This study investigated Hong Kong university students' use of technology outside the…

  19. A Comparison of Discrete Trial Teaching with and without Gestures/Signs in Teaching Receptive Language Skills to Children with Autism

    Science.gov (United States)

    Kurt, Onur

    2011-01-01

    The present study was designed to compare the effectiveness and efficiency of two discrete trial teaching procedures for teaching receptive language skills to children with autism. While verbal instructions were delivered alone during the first procedure, all verbal instructions were combined with simple gestures and/or signs during the second…

  20. History of the College of the Holy Cross American Sign Language Program and Its Collaborative Partnerships with the Worcester Deaf Community

    Science.gov (United States)

    Fisher, Jami N.

    2014-01-01

    Most postsecondary American Sign Language programs have an inherent connection to their local Deaf communities and rely on the community's events to provide authentic linguistic and cultural experiences for their students. While this type of activity benefits students, there is often little effort toward meaningful engagement or attention to…

  1. Evaluating Attributions of Delay and Confusion in Young Bilinguals: Special Insights from Infants Acquiring a Signed and a Spoken Language.

    Science.gov (United States)

    Petitto, Laura Ann; Holowka, Siobhan

    2002-01-01

    Examines whether early simultaneous bilingual language exposure causes children to be language delayed or confused. Cites research suggesting normal and parallel linguistic development occurs in each language in young children and young children's dual language developments are similar to monolingual language acquisition. Research on simultaneous…

  2. American Sign Language Interpreters Perceptions of Barriers to Healthcare Communication in Deaf and Hard of Hearing Patients.

    Science.gov (United States)

    Hommes, Rachel E; Borash, Amy I; Hartwig, Kari; DeGracia, Donna

    2018-04-25

    Communication barriers between healthcare providers and patients contribute to health disparities and reduce the effectiveness of health promotion messages. This is especially true of communication between providers and deaf and hard of hearing (HOH) patients, owing to a lack of understanding of cultural and linguistic differences, the ineffectiveness of various means of communication, and the level of health literacy within that population. This research aimed to identify American Sign Language (ASL) interpreters' perceptions of barriers to effective communication between deaf and HOH patients and healthcare providers. We conducted a survey of ASL interpreters attending the 2015 National Symposium on Healthcare Interpreting, with an overall response rate of 25%. Results indicated significant differences in communication between providers and deaf/HOH patients as perceived by interpreters. ASL interpreters observed that patients did not understand provider instructions in nearly half of appointments. Eighty-one percent of interpreters said that providers "hardly ever" use "teach-back" methods with patients to ensure understanding. A focus on improving health care and health promotion efforts in the deaf/HOH community depends on improving communication, health literacy, and patient empowerment, and involves holding health care organizations accountable for assuring adequate staffing of ASL interpreters and communication resources in order to reduce health disparities in this population.

  3. Football on television: technological evolution and entertainment language

    Directory of Open Access Journals (Sweden)

    Igor José Siquieri Savenhago

    2011-04-01

    The first broadcast of a World Cup football match on television in Brazil was in 1970, via Embratel. Before that, people followed the games of the Brazilian team on the radio. Gradually, the owners of television networks realized that football could generate good financial results by showing advertisements during broadcasts, similar to what was already done on the radio. Thus television, focused on growing its audience and number of advertisers, covered football with a language of entertainment. The narration of the matches, in which the figure of the narrator is more like that of an entertainer, and the improvement of transmission technologies that raise image quality every day, take football away from being just a sport and move it into the place of entertainment. In this context, the sport becomes an article of purchase and sale. The purpose of this study is to demonstrate how this entertainment language took shape on Brazilian television, based on sports broadcasts, especially football, and how television, which represented a technological leap in the country over the radio, took up the country's most popular sport as a commodity, interfering with the dynamics of Brazilian society. Finally, it attempts to understand how the research that enables technological development changes behaviours and vice versa, that is, how the demands of society lead to a race to develop new technologies.

  4. An Ethnographic Study of Chinese Heritage Language Education and Technological Innovations

    Directory of Open Access Journals (Sweden)

    Minjuan Wang

    2004-01-01

    Research has increasingly uncovered the cognitive, cultural, and economic advantages of bilingualism and the positive impact of heritage language on children's second language acquisition (McLaughlin, 1995). As one type of heritage language education organization, Chinese language schools have existed for decades in the U.S., but their practices have remained informal and not readily accessible to people from other cultures. To bridge this gap, this ethnographic study illustrates family and community involvement in promoting language proficiency in heritage language populations and explores language education methods practiced in Chinese community language schools in an urban Southern California area. The study examines the intricate issues affecting heritage language learning and explores the potential uses of technology in assisting young learners in acquiring their heritage language (Chinese). In addition, the study generates guidelines for adapting existing technology-assisted language programs (e.g., the Chinese Cultural Crystals) for instructional uses.

  5. "Great Technology, Football and...": Malaysian Language Learners' Stereotypes about Germany

    Directory of Open Access Journals (Sweden)

    Larisa Nikitina

    2014-12-01

    This study focuses on stereotypes about Germany, its culture, and its people held by learners of German at a large public university in Malaysia. It examines not only the stereotypical representations of the target language country but also assesses their favourability and salience, which had not been done previously. The findings revealed that the students' stereotypes about Germany were varied and diverse, and overwhelmingly positive. The three most salient categories of images about Germany related to technology, famous personalities (for the most part football players and scientists), and cars. The findings also indicated that very few references were made to German culture and to its great cultural figures. The results of the present study suggest that students could benefit from a wider and deeper exposure to German culture in the language classroom.

  6. The Advantages and Disadvantages of Computer Technology in Second Language Acquisition

    Science.gov (United States)

    Lai, Cheng-Chieh; Kritsonis, William Allan

    2006-01-01

    The purpose of this article is to discuss the advantages and disadvantages of computer technology and Computer Assisted Language Learning (CALL) programs for current second language learning. According to the National Clearinghouse for English Language Acquisition & Language Instruction Educational Programs' report (2002), more than nine million…

  7. Technology Use and Self-Perceptions of English Language Skills among Urban Adolescents

    Science.gov (United States)

    Li, Jia; Snow, Catherine; Jiang, Jingjing; Edwards, Nicholas

    2015-01-01

    Technologies, including social media and other applications enabled by a range of devices, offer second language learners many possibilities to improve their learning, if they are interested in doing so. We investigated purposes for using technology among urban adolescents, including both English language learners (ELLs) and…

  8. The Question of Sign-Language and the Utility of Signs in the Instruction of the Deaf: Two Papers by Alexander Graham Bell (1898)

    Science.gov (United States)

    Marschark, M.

    2005-01-01

    Alexander Graham Bell is often portrayed as either hero or villain of deaf individuals and the Deaf community. His writings, however, indicate that he was neither, and was not as clearly definite in his beliefs about language as is often supposed. The following two articles, reprinted from The Educator (1898), Vol. V, pp. 3–4 and pp. 38–44,…

  9. Vital Signs - Multiple Languages

    Science.gov (United States)

    MP3 audio recordings of "Thermometer Basics" (Vital Signs) from the Siloam Family Health Center, available in multiple languages, including Amharic (Amarɨñña), Arabic (العربية), Burmese (myanma bhasa), and Dari (دری).

  10. Recent Technological Advances in Natural Language Processing and Artificial Intelligence

    OpenAIRE

    Shah, Nishal Pradeepkumar

    2012-01-01

    Recent advances in computer technology have permitted scientists to implement and test algorithms that had been known for quite some time but were computationally expensive. Two such projects are IBM's DeepQA project, whose Watson system competed on Jeopardy! [1], and Wolfram's WolframAlpha [2]. Both implement natural language processing (another goal of AI scientists) and try to answer questions as asked by the user. Though the goal of the two projects is similar, both of them have a ...

  11. A framework for sign language recognition using support vector machines and active learning for skin segmentation and boosted temporal sub-units

    OpenAIRE

    Awad, George M.

    2007-01-01

    This dissertation describes new techniques that can be used in a sign language recognition (SLR) system, and more generally in human gesture systems. Any SLR system consists of three main components: a skin detector, a tracker, and a recognizer. The skin detector is responsible for segmenting skin objects, such as the face and hands, from video frames. The tracker keeps track of the hand location (more specifically, the bounding box) and detects any occlusions that might happen between skin objects. ...
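    The three-component architecture described above can be sketched minimally. This is an illustrative skeleton only, not the dissertation's implementation: it substitutes a naive HSV threshold for the SVM-based skin detector with active learning, and a plain bounding box for the occlusion-aware tracker; the function names and threshold values are hypothetical.

    ```python
    import numpy as np

    def detect_skin(frame_hsv):
        """Naive skin segmentation by HSV thresholding (illustrative stand-in
        for the SVM-based detector described in the dissertation)."""
        h, s, v = frame_hsv[..., 0], frame_hsv[..., 1], frame_hsv[..., 2]
        return (h < 25) & (s > 40) & (v > 60)

    def track_hand(mask):
        """Return the bounding box (top, left, bottom, right) of skin pixels,
        or None when no skin was detected in the frame."""
        ys, xs = np.nonzero(mask)
        if len(ys) == 0:
            return None
        return (ys.min(), xs.min(), ys.max(), xs.max())
    ```

    A recognizer would then classify the sequence of per-frame bounding boxes and hand shapes; the dissertation's boosted temporal sub-units operate at that stage.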

  12. Sign language interpreting in legal settings in Flanders : An exploratory study into the experiences of Flemish Deaf people in their contact with the justice system

    OpenAIRE

    Doggen, Carolien

    2016-01-01

    This study is part of the European project Justisigns. Its aim is to ascertain the experiences of Deaf people in their contact with the justice system in Flanders. The literature review explains the relevant United Nations (UN) and European Union (EU) conventions and the two EU directives concerning the provision of interpreters in police interviews. Furthermore, an overview of the background in relation to accessibility, sign language and Deaf people in Fl...

  13. Language Tasks Using Touch Screen and Mobile Technologies: Reconceptualizing Task-Based CALL for Young Language Learners

    Science.gov (United States)

    Pellerin, Martine

    2014-01-01

    This article examines how the use of mobile technologies (iPods and tablets) in language classrooms contributes to redesigning task-based approaches for young language learners. The article is based on a collaborative action research (CAR) project in Early French Immersion classrooms in the province of Alberta, Canada. The data collection included…

  14. Technological Language as a Common Language for Euro-Mediterranean Population

    Directory of Open Access Journals (Sweden)

    Augusto Sebastio

    2013-12-01

    The internet and social networks provide new forms of public space: virtual continents populated by people of different races, languages, and religions who communicate with a single language, in one mode and with one tool. In the era of extreme social participation, it is impossible not to consider the role of future education policies, nor to ignore the basic language in which the Euro-Mediterranean peoples recognize themselves and which allows them to interact on all sides of the Mediterranean basin. Technology provides a bridge for dialogue, as well as mutual recognition and accreditation, for the people who share the Mediterranean Sea and the world. The Internet is the true centre of membership in this union and provides a common good that generates shared recognition and a willingness to communicate; it also entails the renunciation of personal data protection and the handing over of its management to private entities. The aim of this paper is to envisage the effects of the electronic society on Mediterranean policies.

  15. A Low-Cost Open Source 3D-Printable Dexterous Anthropomorphic Robotic Hand with a Parallel Spherical Joint Wrist for Sign Languages Reproduction

    Directory of Open Access Journals (Sweden)

    Andrea Bulgarelli

    2016-06-01

    We present a novel open-source 3D-printable dexterous anthropomorphic robotic hand specifically designed to reproduce Sign Languages' hand poses for deaf and deaf-blind users. We improved the InMoov hand, enhancing dexterity by adding abduction/adduction degrees of freedom to three fingers (thumb, index, and middle) and a three-degree-of-freedom parallel spherical joint wrist. A systematic kinematic analysis is provided. The proposed robotic hand is validated in the framework of the PARLOMA project, which aims at developing a telecommunication system for deaf-blind people, enabling remote transmission of signs from tactile Sign Languages. Both hardware and software are provided online to promote further improvements from the community.
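    As a minimal illustration of the kind of kinematic analysis such a hand requires, consider the forward kinematics of a planar two-link finger. This is a generic textbook model, not the paper's analysis: the hand has many more degrees of freedom (including the added abduction/adduction joints and the spherical wrist), and the link lengths and angles here are hypothetical.

    ```python
    import math

    def fingertip_position(l1, l2, theta1, theta2):
        """Planar forward kinematics of a two-link finger.

        l1, l2: proximal and distal link lengths;
        theta1: flexion at the base joint, theta2: flexion at the middle
        joint (radians, measured relative to the previous link).
        Returns the (x, y) fingertip position in the base frame.
        """
        x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
        y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
        return x, y
    ```

    Reproducing a sign's hand pose amounts to solving the inverse of such maps, joint by joint, for the target fingertip configuration.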

  16. Language and Text-to-Speech Technologies for Highly Accessible Language & Culture Learning

    Directory of Open Access Journals (Sweden)

    Anouk Gelan

    2011-06-01

    This contribution presents the results of the "Speech technology integrated learning modules for Intercultural Dialogue" project. The project objective was to increase the availability and quality of e-learning opportunities for less widely used and less taught European languages, using a user-friendly and highly accessible learning environment. The integration of new Text-to-Speech developments into web-based authoring software for tutorial CALL had a double goal: on the one hand, to increase the accessibility of e-learning packages, including for learners who have difficulty reading (e.g., dyslexic learners) or who prefer auditory learning; on the other hand, to exploit some of the didactic possibilities of this technology.
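    One common way to drive a TTS engine from web-based courseware is to generate SSML (the W3C Speech Synthesis Markup Language) for each exercise item. The sketch below is an assumption about how such an integration could look, not the project's actual API: the function name, prosody settings, and default language are hypothetical.

    ```python
    def vocab_ssml(word, sentence, lang="fr-FR", rate="slow"):
        """Build an SSML fragment that speaks a vocabulary word slowly,
        pauses briefly, then reads the example sentence at normal rate.
        (Hypothetical helper; any SSML-capable TTS engine could render it.)"""
        return (
            f'<speak xml:lang="{lang}">'
            f'<prosody rate="{rate}">{word}</prosody>'
            f'<break time="500ms"/>'
            f'{sentence}'
            f'</speak>'
        )
    ```

    Slowing the target word and inserting a pause is one plausible accommodation for dyslexic or auditory-preference learners of the kind the project targets.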

  17. Health Information National Trends Survey in American Sign Language (HINTS-ASL): Protocol for the Cultural Adaptation and Linguistic Validation of a National Survey.

    Science.gov (United States)

    Kushalnagar, Poorna; Harris, Raychelle; Paludneviciene, Raylene; Hoglind, TraciAnn

    2017-09-13

    The Health Information National Trends Survey (HINTS) collects nationally representative data about the American public's use of health-related information. The survey is available in English and Spanish, but not in American Sign Language (ASL). The exclusion of ASL users from these national health information surveys has thus left a significant gap in knowledge of Internet usage for health information access in this underserved and understudied population. The objectives of this study are (1) to culturally adapt and linguistically translate the HINTS items into ASL (HINTS-ASL); and (2) to gather information about deaf people's health information seeking behaviors across technology-mediated platforms. We modified the standard procedures developed at the US National Center for Health Statistics Cognitive Survey Laboratory to culturally adapt and translate the HINTS items into ASL. Cognitive interviews were conducted to assess the clarity and delivery of these HINTS-ASL items. Final ASL video items were uploaded to a protected online survey website. The HINTS-ASL online survey has been administered to over 1350 deaf adults (ages 18 to 90 and up) who use ASL. Data collection is ongoing and includes deaf adult signers across the United States. Some items from the HINTS item bank required cultural adaptation for use with deaf people who use accessible services or technology. A separate item bank for deaf-related experiences was created, reflecting deaf-specific technology such as sharing health-related ASL videos through social network sites and using video remote interpreting services in health settings. After data collection is complete, we will conduct a series of analyses of deaf people's health information seeking behaviors across technology-mediated platforms. HINTS-ASL is an accessible health information national trends survey that includes a culturally appropriate set of items relevant to the experiences of deaf people who use ASL. The final HINTS

  18. British Sign Name Customs

    Science.gov (United States)

    Day, Linda; Sutton-Spence, Rachel

    2010-01-01

    Research presented here describes the sign names and the customs of name allocation within the British Deaf community. While some aspects of British Sign Language sign names and British Deaf naming customs differ from those in most Western societies, there are many similarities. There are also similarities with other societies outside the more…

  19. Research for Practice: A Look at Issues in Technology for Second Language Learning

    Science.gov (United States)

    Chapelle, Carol A.

    2010-01-01

    Over the past fourteen years, the pages of "Language Learning & Technology" have been filled with examples of research that take up the challenge of investigating second language learning through technology. It has been a period of expansion and growth in many ways. The expansion of technologies as well as their acceptance and use in language…

  20. The Lab of the Future: Using Technology to Teach Foreign Language.

    Science.gov (United States)

    Underwood, John H.

    1993-01-01

    Describes the role of technology in teaching foreign languages. Offers a brief history of language lab technologies, including computer use for drill-and-practice, text reconstruction, and simulations and games. Discusses tool programs, intelligent systems, video technology, satellite television, videodisc and interactive video, hypertext and…