Hernández, Cesar; Pulido, Jose L; Arias, Jorge E
To develop a technological tool that improves the initial learning of sign language in hearing-impaired children. This research was conducted in three phases: requirements gathering; design and development of the proposed device; and validation and evaluation of the device. Through the use of information technology and with the advice of special education professionals, we were able to develop an electronic device that facilitates the learning of sign language in deaf children. It consists mainly of a graphic touch screen, a voice synthesizer, and a voice recognition system. Validation was performed with deaf children in the Filadelfia School in the city of Bogotá. A learning methodology was established that improves learning times through a small, portable, lightweight, and educational technological prototype. Tests showed the effectiveness of this prototype, achieving a 32% reduction in the initial learning time for sign language in deaf children.
Lucas, Ceil; Mirus, Gene; Palmer, Jeffrey Levi; Roessler, Nicholas James; Frost, Adam
This paper first reviews the fairly established ways of collecting sign language data. It then discusses the new technologies available and their impact on sign language research, both in terms of how data is collected and what new kinds of data are emerging as a result of technology. New data collection methods and new kinds of data are…
This systematic review of the literature provides a synthesis of research on the use of technology to support sign language. Background research on the use of sign language with students who are deaf/hard of hearing and students with low incidence disabilities, such as autism, intellectual disability, or communication disorders is provided. The…
... from the intermixing of local sign languages and French Sign Language (LSF, or Langue des Signes Française). ... phrases with similar neural mechanisms as when we speak, new study finds (New York University).
Hiddinga, A.; Crasborn, O.
Deaf people who form part of a Deaf community communicate using a shared sign language. When meeting people from another language community, they can fall back on a flexible and highly context-dependent form of communication called international sign, in which shared elements from their own sign
Over the years attempts have been made to standardize sign languages. This form of language planning has been tackled by a variety of agents, most notably teachers of Deaf students, social workers, government agencies, and occasionally groups of Deaf people themselves. Their efforts have most often involved the development of sign language books…
Sign language test development is a relatively new field within sign linguistics, motivated by the practical need for assessment instruments to evaluate language development in different groups of learners (L1, L2). Due to the lack of research on the structure and acquisition of many sign languages, developing an assessment instrument poses…
Bakken Jepsen, Julie
A name sign is a personal sign assigned to deaf, hearing-impaired and hearing persons who enter the deaf community. The mouth action accompanying the sign reproduces all or part of the formal first name that the person has received by baptism or naming. Name signs can be compared to nicknames in spoken languages, where a person working as a blacksmith might be referred to by his friends as 'The Blacksmith' ('Here comes the Blacksmith!') instead of by his first name. Name signs are found not only in Danish Sign Language (DSL) but in most, if not all, sign languages studied to date. This article provides examples of the creativity of the users of Danish Sign Language, including some of the processes in the use of metaphors, visual motivation and influence from Danish when name signs are created.
Pfau, R.; Steinbach, M.; Woll, B.
Sign language linguists show here that all the questions relevant to the linguistic investigation of spoken languages can be asked about sign languages. Conversely, questions that sign language linguists consider - even if spoken language researchers have not asked them yet - should also be asked of
Fels, Deborah I.; Richards, Jan; Hardman, Jim; Lee, Daniel G.
The World Wide Web has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, Web accessing can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The…
Kimmelman, V.; Pfau, R.; Féry, C.; Ishihara, S.
This chapter demonstrates that the Information Structure notions Topic and Focus are relevant for sign languages, just as they are for spoken languages. Data from various sign languages reveal that, across sign languages, Information Structure is encoded by syntactic and prosodic strategies, often
de Vos, C.; Pfau, R.
Since the 1990s, the field of sign language typology has shown that sign languages exhibit typological variation at all relevant levels of linguistic description. These initial typological comparisons were heavily skewed toward the urban sign languages of developed countries, mostly in the Western
Zwitserlood, Inge; Kristoffersen, Jette Hedegaard; Troelsgård, Thomas
Sign language lexicography has thus far been a relatively obscure area in the world of lexicography. Therefore, this article will contain background information on signed languages and the communities in which they are used, on the lexicography of sign languages, and on the situation in the Netherlands, as well as a review of a sign language dictionary that has recently been published in the Netherlands.
Ten Holt, G.A.; Arendsen, J.; De Ridder, H.; Van Doorn, A.J.; Reinders, M.J.T.; Hendriks, E.A.
Current automatic sign language recognition (ASLR) seldom uses perceptual knowledge about the recognition of sign language. Using such knowledge can improve ASLR because it can give an indication which elements or phases of a sign are important for its meaning. Also, the current generation of
Schuit, J.; Baker, A.; Pfau, R.
Sign language typology is a fairly new research field and typological classifications have yet to be established. For spoken languages, these classifications are generally based on typological parameters; it would thus be desirable to establish these for sign languages. In this paper, different
In light of the absence of a codified standard variety in British Sign Language and German Sign Language ("Deutsche Gebärdensprache") there have been repeated calls for the standardization of both languages, primarily from outside the Deaf community. The paper is based on a recent grounded theory study which explored perspectives on sign…
Information and research on Mongolian Sign Language is scant. To date, only one dictionary is available in the United States (Badnaa and Boll 1995), and even that dictionary presents only a subset of the signs employed in Mongolia. The present study describes the kinship system used in Mongolian Sign Language (MSL) based on data elicited from…
This handbook provides information on some 38 sign languages, including basic facts about each of the languages, structural aspects, history and culture of the Deaf communities, and history of research. The papers are all original, and each has been specifically written for the volume by an expert or team of experts in the particular sign language, at the invitation of the editors. Thirty-eight different deaf sign languages and alternate sign languages from every continent are represented, and over seventy international deaf and hearing scholars have contributed to the volume.
Schembri, Adam; Fenlon, Jordan; Cormier, Kearsy; Johnston, Trevor
This paper examines the possible relationship between proposed social determinants of morphological 'complexity' and how this contributes to linguistic diversity, specifically via the typological nature of the sign languages of deaf communities. We sketch how the notion of morphological complexity, as defined by Trudgill (2011), applies to sign languages. Using these criteria, sign languages appear to be languages with low to moderate levels of morphological complexity. This may partly reflect the influence of key social characteristics of communities on the typological nature of languages. Although many deaf communities are relatively small and may involve dense social networks (both social characteristics that Trudgill claimed may lend themselves to morphological 'complexification'), the picture is complicated by the highly variable nature of the sign language acquisition for most deaf people, and the ongoing contact between native signers, hearing non-native signers, and those deaf individuals who only acquire sign languages in later childhood and early adulthood. These are all factors that may work against the emergence of morphological complexification. The relationship between linguistic typology and these key social factors may lead to a better understanding of the nature of sign language grammar. This perspective stands in contrast to other work where sign languages are sometimes presented as having complex morphology despite being young languages (e.g., Aronoff et al., 2005); in some descriptions, the social determinants of morphological complexity have not received much attention, nor has the notion of complexity itself been specifically explored.
Baker, A.; van den Bogaerde, B.; Pfau, R.; Schermer, T.
How different are sign languages across the world? Are individual signs and signed sentences constructed in the same way across these languages? What are the rules for having a conversation in a sign language? How do children and adults learn a sign language? How are sign languages processed in the
Kristoffersen, Jette Hedegaard; Troelsgård, Thomas
The entries of The Danish Sign Language Dictionary have four sections. Entry header: in this section the sign headword is shown as a photo and a gloss; the first occurring location and handshape of the sign are shown as icons. Video window: by default the base form of the sign headword … forms of the sign (only for classifier entries). In addition to this, frequent co-occurrences with the sign are shown in this section. The signs in The Danish Sign Language Dictionary can be looked up through: Handshape: particular handshapes for the active and the passive hand can be specified … to find signs that are not themselves lemmas in the dictionary, but appear in example sentences. Topic: topics can be chosen as search criteria from a list of 70 topics.
Kimmelman, V.; Paperno, D.; Keenan, E.L.
After presenting some basic genetic, historical and typological information about Russian Sign Language, this chapter outlines the quantification patterns it expresses. It illustrates various semantic types of quantifiers, such as generalized existential, generalized universal, proportional,
Goldin-Meadow, Susan; Brentari, Diane
How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.
Signers use their body and the space in front of them iconically. Does iconicity lead to the same mapping strategies in construing space during interaction across sign languages? The present study addressed this question by conducting an experimental study on basic static and motion event descriptions during interaction (describer input and addressee re-signing/retelling) in American Sign Language, Croatian Sign Language, and Turkish Sign Language. I found that the three sign languages are si...
This article examines the genesis and implementation of a dictionary project for sign language, the Zambian Sign Language Dictionary Project, regarded as a first step towards the development of a Zambian National Sign Language. The article highlights the specificity of sign language lexicography. Keywords: american ...
Measures of lexical frequency presuppose the existence of corpora, but true machine-readable corpora of sign languages (SLs) are only now being created. Lexical frequency ratings for SLs are needed because there has been a heavy reliance on the interpretation of results of psycholinguistic and neurolinguistic experiments in the SL research…
Mason, David G.
This article promotes the utilization of Sign Language of the Deaf as a primary and secondary research language. The article discusses English as the traditional research language, the role of sign language in bilingualism, possible uses for American Sign Language (ASL) as a research language, and the availability of ASL-based literature for…
This article explores the morphological process of numeral incorporation in Japanese Sign Language. Numeral incorporation is defined and the available research on numeral incorporation in signed language is discussed. The numeral signs in Japanese Sign Language are then introduced and followed by an explanation of the numeral morphemes which are…
De Meulder, Maartje
This article provides an analytical overview of the different types of explicit legal recognition of sign languages. Five categories are distinguished: constitutional recognition, recognition by means of general language legislation, recognition by means of a sign language law or act, recognition by means of a sign language law or act including…
This article discusses Estonian personal name signs. According to the study, there are four personal name sign categories in Estonian Sign Language: (1) arbitrary name signs; (2) descriptive name signs; (3) initialized-descriptive name signs; (4) loan/borrowed name signs. Descriptive and borrowed personal name signs are the most strongly represented among…
Cagle, Keith Martin
American Sign Language (ASL) is the natural and preferred language of the Deaf community in both the United States and Canada. Woodward (1978) estimated that approximately 60% of the ASL lexicon is derived from early 19th century French Sign Language, which is known as "langue des signes française" (LSF). The lexicon of LSF and ASL may…
Schmaling, Constanze H.
This article gives an overview of dictionaries of African sign languages that have been published to date most of which have not been widely distributed. After an introduction into the field of sign language lexicography and a discussion of some of the obstacles that authors of sign language dictionaries face in general, I will show problems…
Turner, Graham H.
This article introduces the present collection of sign language planning studies. Contextualising the analyses against the backdrop of core issues in the theory of language planning and the evolution of applied sign linguistics, it is argued that--while the sociolinguistic circumstances of signed languages worldwide can, in many respects, be…
Kaneko, Michiko; Mesch, Johanna
This article discusses the role of eye gaze in creative sign language. Because eye gaze conveys various types of linguistic and poetic information, it is an intrinsic part of sign language linguistics in general and of creative signing in particular. We discuss various functions of eye gaze in poetic signing and propose a classification of gaze…
Advancements in computing technologies have the potential to be applied in the field of SL recognition. These computer-based approaches are able to translate SL into verbal language and vice versa. This paper describes the development of a dataset for an automated SL recognition system based on the Malaysian ...
van Loon, E.; Pfau, R.; Steinbach, M.; Müller, C.; Cienki, A.; Fricke, E.; Ladewig, S.H.; McNeill, D.; Bressem, J.
Recent studies on grammaticalization in sign languages have shown that, for the most part, the grammaticalization paths identified in sign languages parallel those previously described for spoken languages. Hence, the general principles of grammaticalization do not depend on the modality of language
Italian Sign Language (LIS) is the name of the language used by the Italian Deaf community. The acronym LIS derives from Lingua italiana dei segni ("Italian language of signs"), although nowadays Italians refer to LIS as Lingua dei segni italiana, reflecting the more appropriate phrasing "Italian sign language." Historically,…
Kristoffersen, Jette Hedegaard; Troelsgård, Thomas
As we began working on the Danish Sign Language (DTS) Dictionary, we soon realised the truth in the statement that a lexicographer has to deal with problems within almost any linguistic discipline. Most of these problems come down to establishing simple rules, rules that can easily be applied every … to include in the dictionary. This depends of course on the target user group, but when you aim at fulfilling the needs of several different user groups, which is what an all-round dictionary must do, you easily risk falling between not two, but several stools. When editing the DTS Dictionary we often face …
Ten Holt, G.A.
Automatic sign language recognition is a relatively new field of research (since ca. 1990). Its objectives are to automatically analyze sign language utterances. There are several issues within the research area that merit investigation: how to capture the utterances (cameras, magnetic sensors,
Smith, Cynthia; Morgan, Robert L.
There have been increasing incidents of innocent people who use American Sign Language (ASL) or another form of sign language being victimized by gang violence due to misinterpretation of ASL hand formations. ASL is familiar to learners with a variety of disabilities, particularly those in the deaf community. The problem is that gang members have…
… record and send video messages and video conferencing 'from the street' (Figure 2). Public TV displays: to provide support for SASL information and advertising. Online training: video and computer-animated Sign Language … forward for Sign Language technologies. Figure 2: results of the research conducted for the project 'Enabling Environments' suggested upgrading to video public phones (concept art by B. Smith). Figure 3: surface computing can be used to embed new …
Tyrone, Martha E.; Mauk, Claude E.
This study examines sign lowering as a form of phonetic reduction in American Sign Language. Phonetic reduction occurs in the course of normal language production, when instead of producing a carefully articulated form of a word, the language user produces a less clearly articulated form. When signs are produced in context by native signers, they often differ from the citation forms of signs. In some cases, phonetic reduction is manifested as a sign being produced at a lower location than in the citation form. Sign lowering has been documented previously, but this is the first study to examine it in phonetic detail. The data presented here are tokens of the sign WONDER, as produced by six native signers, in two phonetic contexts and at three signing rates, which were captured by optoelectronic motion capture. The results indicate that sign lowering occurred for all signers, according to the factors we manipulated. Sign production was affected by several phonetic factors that also influence speech production, namely, production rate, phonetic context, and position within an utterance. In addition, we have discovered interesting variations in sign production, which could underlie distinctions in signing style, analogous to accent or voice quality in speech. PMID:20607146
There is a system of English mouthing during interpretation that appears to be the result of language contact between spoken language and signed language. English mouthing is a voiceless visual representation of words on a signer's lips produced concurrently with manual signs. It is a type of borrowing prevalent among English-dominant…
This dissertation explores Information Structure in two sign languages: Sign Language of the Netherlands and Russian Sign Language. Based on corpus data and elicitation tasks we show how topic and focus are expressed in these languages. In particular, we show that topics can be marked syntactically
"Grandfather Moose" rhymes, written to follow the Mother Goose tradition, are short, appealing, easy-to-memorize sign language nursery rhymes which employ visual poetic devices such as similar signs and transitional flow of movement. (CB)
Aronoff, Mark; Meir, Irit; Sandler, Wendy
Sign languages have two strikingly different kinds of morphological structure: sequential and simultaneous. The simultaneous morphology of two unrelated sign languages, American and Israeli Sign Language, is very similar and is largely inflectional, while what little sequential morphology we have found differs significantly and is derivational. We show that at least two pervasive types of inflectional morphology, verb agreement and classifier constructions, are iconically grounded in spatiotemporal cognition, while the sequential patterns can be traced to normal historical development. We attribute the paucity of sequential morphology in sign languages to their youth. This research both brings sign languages much closer to spoken languages in their morphological structure and shows how the medium of communication contributes to the structure of languages. PMID:22223926
Kristoffersen, Jette Hedegaard; Troelsgård, Thomas
Compiling sign language dictionaries has in the last 15 years changed from most often being simply collecting and presenting signs for a given gloss in the surrounding vocal language to being a complicated lexicographic task including all parts of linguistic analysis, i.e. phonology, phonetics, morphology, syntax and semantics. In this presentation we will give a short overview of the Danish Sign Language dictionary project. We will further focus on lemma selection and some of the problems connected with lemmatisation.
Brookshire, Geoffrey; Lu, Jenny; Nusbaum, Howard C; Goldin-Meadow, Susan; Casasanto, Daniel
Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (<8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language; this entrainment is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to informational peaks in time-varying signals.
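The abstract above mentions a metric for quantifying visual change over time in sign language video. The paper's actual metric is not reproduced here, so the following is only a minimal sketch of one plausible approach, assuming grayscale frames in a NumPy array: frame-to-frame absolute differencing followed by a spectral peak estimate. The function names (`visual_change`, `dominant_frequency`) and the synthetic data are illustrative, not taken from the paper.

```python
import numpy as np

def visual_change(frames):
    """Mean absolute pixel difference between consecutive frames.

    frames: array of shape (n_frames, height, width), grayscale.
    Returns a 1-D array of length n_frames - 1.
    """
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.mean(axis=(1, 2))

def dominant_frequency(signal, fps):
    """Frequency (Hz) with the largest spectral power, ignoring the DC term."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# Synthetic example: a 30 fps "video" whose brightness oscillates at 2 Hz.
fps, seconds = 30, 4
t = np.arange(fps * seconds) / fps
brightness = 128 + 50 * np.sin(2 * np.pi * 2.0 * t)
frames = np.tile(brightness[:, None, None], (1, 8, 8))

vc = visual_change(frames)
# Note: rectified differencing doubles the frequency, so a 2 Hz brightness
# modulation yields a visual-change signal peaking near 4 Hz.
f = dominant_frequency(vc, fps)
```

On real video one would apply this per-pixel differencing to the raw frames and compare the resulting spectra for signing versus speech, as the abstract describes.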
Rosen, Russell S.
There is exponential growth in the number of schools that offer American Sign Language (ASL) for foreign language credit and in the different ASL curricula being published. This study analyzes different curricula in their assumptions regarding language, learning, and teaching of second languages. It is found that curricula vary in their…
Enns, C. J.; Herman, R.
Signed languages continue to be a key element of deaf education programs that incorporate a bilingual approach to teaching and learning. In order to monitor the success of bilingual deaf education programs, and in particular to monitor the progress of children acquiring signed language, it is essential to develop an assessment tool of signed language skills. Although researchers have developed some checklists and experimental tests related to American Sign Language (ASL) assessment, at this t...
Wang, Jihong; Napier, Jemina
This study investigated the effects of hearing status and age of signed language acquisition on signed language working memory capacity. Professional Auslan (Australian sign language)/English interpreters (hearing native signers and hearing nonnative signers) and deaf Auslan signers (deaf native signers and deaf nonnative signers) completed an…
Harris, Raychelle; Holmes, Heidi M.; Mertens, Donna M.
Codes of ethics exist for most professional associations whose members do research on, for, or with sign language communities. However, these ethical codes are silent regarding the need to frame research ethics from a cultural standpoint, an issue of particular salience for sign language communities. Scholars who write from the perspective of…
Campbell, Ruth; MacSweeney, Mairead; Waters, Dafydd
How are signed languages processed by the brain? This review briefly outlines some basic principles of brain structure and function and the methodological principles and techniques that have been used to investigate this question. We then summarize a number of different studies exploring brain activity associated with sign language processing…
Sze, Felix; Lo, Connie; Lo, Lisa; Chu, Kenny
This article traces the origins of Hong Kong Sign Language (hereafter HKSL) and its subsequent development in relation to the establishment of Deaf education in Hong Kong after World War II. We begin with a detailed description of the history of Deaf education with a particular focus on the role of sign language in such development. We then…
Corina, David P.; Hafer, Sarah; Welch, Kearnan
This paper examines the concept of phonological awareness (PA) as it relates to the processing of American Sign Language (ASL). We present data from a recently developed test of PA for ASL and examine whether sign language experience impacts the use of metalinguistic routines necessary for completion of our task. Our data show that deaf signers…
McKee, David; McKee, Rachel; Major, George
Lexical variation abounds in New Zealand Sign Language (NZSL) and is commonly associated with the introduction of the Australasian Signed English lexicon into Deaf education in 1979, before NZSL was acknowledged as a language. Evidence from dictionaries of NZSL collated between 1986 and 1997 reveal many coexisting variants for the numbers from one…
Emmorey, Karen; McCullough, Stephen; Brentari, Diane
Two experiments examined whether Deaf signers or hearing nonsigners exhibit categorical perception (CP) for hand configuration or for place of articulation in American Sign Language. Findings that signers and nonsigners performed similarly suggests that these categories in American Sign Language have a perceptual as well as a linguistic basis.…
In both sign and spoken languages, locative relations tend to be encoded within constructions that display the non-basic word/sign order. In addition, in such an environment, sign languages habitually use a distinct predicate type – a classifier predicate – which may independently affect the order of constituents in the sentence. In this paper, I present Slovenian Sign Language (SZJ) locative constructions, in which (i) the argument that enables spatial anchoring ("ground") precedes both the argument that requires spatial anchoring ("figure") and the predicate. At the same time, (ii) the relative order of the figure with respect to the predicate depends on the type of predicate employed: a non-classifier predicate precedes the figure, while a classifier predicate only comes after the figure.
In early May, CERN welcomed a group of deaf children for a tour of Microcosm and a Fun with Physics demonstration. On 4 May, around ten children from the Centre pour enfants sourds de Montbrillant (Montbrillant Centre for Deaf Children), a public school funded by the Office médico-pédagogique du canton de Genève, took a guided tour of the Microcosm exhibition and were treated to a Fun with Physics demonstration. The tour guides’ explanations were interpreted into sign language in real time by a professional interpreter who accompanied the children, and the pace and content were adapted to maximise the interaction with the children. This visit demonstrates CERN’s commitment to remaining as widely accessible as possible. To this end, most of CERN’s visit sites offer reduced-mobility access. In the past few months, CERN has also welcomed children suffering from xeroderma pigmentosum (a genetic disorder causing extreme sensiti...
Gutierrez-Sigut, Eva; Costello, Brendan; Baus, Cristina; Carreiras, Manuel
The LSE-Sign database is a free online tool for selecting Spanish Sign Language stimulus materials to be used in experiments. It contains 2,400 individual signs taken from a recent standardized LSE dictionary, and a further 2,700 related nonsigns. Each entry is coded for a wide range of grammatical, phonological, and articulatory information, including handshape, location, movement, and non-manual elements. The database is accessible via a graphically based search facility which is highly flexible both in terms of the search options available and the way the results are displayed. LSE-Sign is available at the following website: http://www.bcbl.eu/databases/lse/.
Karolina Olga Nurzyńska
The aim of this study is to present an overview of a computer signed-language course with a module for automatic signed-language recognition as part of a language acquisition test. The idea of creating an interactive sign language learning system seems to be a new one. We hope that this solution helps to overcome the barrier between the silent and hearing worlds. At the same time, we concentrate our efforts on creating a system for home use that will not require any sophisticated hardware. Moreover, we emphasize the utilization of an already proposed and popular description scheme: the MPEG-7 standard, formally called the Multimedia Content Description Interface, has been chosen. This standard provides a rich set of tools for complete multimedia content description. Its most important capability for sign language is the possibility to describe both static and dynamic features of objects in image sequences. This description scheme gives the opportunity to describe a signing person at the required level of granularity. The article gives a brief description of many suggested solutions for semi-automatic or automatic sign language recognition systems. In addition, it describes some implemented learning applications whose aim was to teach sign languages. The main groups that can be distinguished are: animated avatar observation, messengers for deaf people, and testing progress in learning sign languages by using education platforms.
Williams, Joshua T.; Newman, Sharlene D.
A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…
Pfau, R.; Steinbach, M.; Pfau, R.; Steinbach, M.; Herrmann, A.
Sign language grammars, just like spoken language grammars, generally provide various means to generate different kinds of complex syntactic structures including subordination of complement clauses, adverbial clauses, or relative clauses. Studies on various sign languages have revealed that sign
Oct 5, 2017 ... (IR) sensor and image processing algorithm. ... Works on constructing databases for SL recognition: usually, a standard and complete database of sign videos, point skeletons, and depth streams accompanies sign language recognition (SLR) research. Based on the different sign languages of the world, researchers ...
Marilyn Mafra Klamt
The idea of sonority in sign languages was treated by Perlmutter (1992) as perceptibility, a property of a segment that uses movement rather than one in which the hands stay in the same position. Sandler (1993) states that the visual salience of movement in sign languages plays a role similar to sonority in spoken languages. For Brentari (1998), perceptually, a sign is visible from considerable distances, and measurement of its visual sonority is based on the joints involved in its production. This work focuses on visual sonority in literature in Brazilian Sign Language and considers the relevance of manual and non-manual elements, rhythm, symmetry, the scale of signs, and the effect of video on this concept. Two signed stories, “The King’s Parrot” and “Little Ping Pong Ball”, were analysed, highlighting specific signs in which the use of joints, non-manual features, and other resources is influenced by the size of the performance space and the distance of the audience from the signing. Three types of ‘sonority’ were observed: in the movement of the whole body on the stage, in the size of arm and trunk movements, and in the hands. In addition to the joints, non-manual features, rhythm, and symmetry play an important role in visual sonority and influence the viewer’s experience.
Weaver, Kimberly A.; Starner, Thad
The majority of deaf children in the United States are born to hearing parents with limited prior exposure to American Sign Language (ASL). Our research involves creating and validating a mobile language tool called SMARTSign. The goal is to help hearing parents learn ASL in a way that fits seamlessly into their daily routine. (Contains 3 figures.)
South Africa have been affected by the policies of apartheid, and its educational and linguistic consequences, in a .... teaching strategies, and more recently of the perception that a signed language is a manual form ... of color and on the basis of the (former) official spoken languages designated by the apartheid education ...
Thumann, Mary Agnes
This dissertation examines depiction in American Sign Language (ASL) presentations. The impetus for this study came from my work as an instructor in an interpreter education program. The majority of ASL/English interpreters are second language learners of ASL, and many of them find some features of ASL challenging to learn. These features are…
de Bruin, Ed; Brugmans, Petra
Specialized psychotherapy for deaf people in the Dutch and Western European mental health systems is still a rather young specialism. A key policy principle in Dutch mental health care for the deaf is that they should receive treatment in the language most accessible to them, which is usually Dutch Sign Language (Nederlandse Gebarentaal or NGT). Although psychotherapists for the deaf are trained to use sign language, situations will always arise in which a sign language interpreter is needed. Most psychotherapists are of the opinion that working with a sign language interpreter in therapy sessions can be a valuable alternative, but they also see it as a second-best solution because of its impact on the therapeutic process. This paper describes our years of collaboration as a therapist and a sign language interpreter. When this collaboration is optimal, it can generate a certain "therapeutic power" in the therapy sessions. Achieving this depends largely on the interplay between the therapist and the interpreter, which in our case is the result of literature research and our experiences during the last 17 years. We analyze this special collaborative relationship, which has several dimensions and recurrent themes, such as the interpreter's conception of their role, situational interpreting, organizing the interpretation setting, and managing therapeutic phenomena during therapy sessions.
Hall, Matthew L; Ferreira, Victor S; Mayberry, Rachel I
Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.
Enns, Charlotte J.; Herman, Rosalind C.
Signed languages continue to be a key element of deaf education programs that incorporate a bilingual approach to teaching and learning. In order to monitor the success of bilingual deaf education programs, and in particular to monitor the progress of children acquiring signed language, it is essential to develop an assessment tool of signed…
Notarrigo, Ingrid; Meurant, Laurence; Van Herreweghe, Mieke; Vermeerbergen, Myriam
Repetition was described in the nineties by a limited number of sign linguists: Vermeerbergen & De Vriendt (1994) looked at a small corpus of VGT data, Fisher & Janis (1990) analysed “verb sandwiches” in ASL and Pinsonneault (1994) “verb echos” in Quebec Sign Language. More recently the same phenomenon has been the focus of research in a growing number of signed languages, including American (Nunes and de Quadros 2008), Hong Kong (Sze 2008), Russian (Shamaro 2008), Polish (Flilipczak and Most...
Mann, W.; Roy, P.; Morgan, G.
This study describes the adaptation process of a vocabulary knowledge test for British Sign Language (BSL) into American Sign Language (ASL) and presents results from the first round of pilot testing with twenty deaf native ASL signers. The web-based test assesses the strength of deaf children’s vocabulary knowledge by means of different mappings of phonological form and meaning of signs. The adaptation from BSL to ASL involved nine stages, which included forming a panel of deaf/hearing experts…
Dobel, Christian; Enriquez-Geppert, Stefanie; Hummert, Marja; Zwitserlood, Pienie; Bölte, Jens
The idea that knowledge of events entails a universal spatial component, that is, conceiving agents left of patients, was put to the test by investigating native users of German sign language and native users of spoken German. Participants heard or saw event descriptions and had to illustrate the meaning of these events by means of drawing or arranging toys. Two types of verbs were tested, differing in the way they are signed. Verbs with a horizontal transient are typically signed with a left-to-right directionality, from the addressee's point of view. In contrast, verbs with sagittal transients display transitions moving toward or away from the speaker. Signers showed a direct mapping preference for verbs with horizontal transients, by putting agents at the same position in space as in the signed message (i.e., mirroring signing space). No such effect was found for verbs with sagittal transients. In all, the data fit with the idea that interpretations of signed or spoken languages are modulated by task and culture as well as language-related factors and constraints. © The Author 2011. Published by Oxford University Press. All rights reserved.
Cate, Hardie; Hussain, Zeshan
We outline a bidirectional translation system that converts sentences from American Sign Language (ASL) to English, and vice versa. To perform machine translation between ASL and English, we utilize a generative approach. Specifically, we employ an adjustment to the IBM word-alignment model 1 (IBM WAM1), where we define language models for English and ASL, as well as a translation model, and attempt to generate a translation that maximizes the posterior distribution defined by these models. T...
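The alignment approach named above can be sketched concretely. Below is an illustrative EM training loop for IBM word-alignment Model 1 on a toy English–gloss corpus; it shows the general technique only and is not the authors' implementation (the sentence pairs and glosses are invented).

```python
# Illustrative EM training for IBM word-alignment Model 1: learn the
# translation probabilities t(f|e) that maximize the likelihood of a
# parallel corpus. Toy data; a sketch, not the paper's actual system.
from collections import defaultdict

def train_ibm_model1(pairs, iterations=10):
    """pairs: list of (english_words, foreign_words) sentence pairs."""
    f_vocab = {f for _, fs in pairs for f in fs}
    t = defaultdict(lambda: 1.0 / len(f_vocab))  # uniform init of t(f|e)
    for _ in range(iterations):
        count = defaultdict(float)   # expected alignment counts
        total = defaultdict(float)
        for es, fs in pairs:
            for f in fs:
                z = sum(t[(f, e)] for e in es)   # normalization term
                for e in es:
                    c = t[(f, e)] / z            # E-step: expected count
                    count[(f, e)] += c
                    total[e] += c
        for (f, e), c in count.items():
            t[(f, e)] = c / total[e]             # M-step: re-estimate
    return t

pairs = [(["the", "book"], ["BOOK"]),
         (["the", "house"], ["HOUSE"])]
t = train_ibm_model1(pairs)
print(t[("BOOK", "book")] > t[("BOOK", "the")])  # True
```

Because "the" co-occurs with both glosses while "book" co-occurs only with BOOK, EM concentrates probability mass on the informative pairing, which is exactly the intuition behind Model 1.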
Brentari, Diane; Coppola, Marie
How do languages emerge? What are the necessary ingredients and circumstances that permit new languages to form? Various researchers within the disciplines of primatology, anthropology, psychology, and linguistics have offered different answers to this question depending on their perspective. Language acquisition, language evolution, primate communication, and the study of spoken varieties of pidgin and creoles address these issues, but in this article we describe a relatively new and important area that contributes to our understanding of language creation and emergence. Three types of communication systems that use the hands and body to communicate will be the focus of this article: gesture, homesign systems, and sign languages. The focus of this article is to explain why mapping the path from gesture to homesign to sign language has become an important research topic for understanding language emergence, not only for the field of sign languages, but also for language in general. WIREs Cogn Sci 2013, 4:201-211. doi: 10.1002/wcs.1212 For further resources related to this article, please visit the WIREs website. Copyright © 2012 John Wiley & Sons, Ltd.
Behares, Luis Ernesto; Brovetto, Claudia; Crespi, Leonardo Peluso
In the first part of this article the authors consider the policies that apply to Uruguayan Sign Language (Lengua de Senas Uruguaya; hereafter LSU) and the Uruguayan Deaf community within the general framework of language policies in Uruguay. By analyzing them succinctly and as a whole, the authors then explain twenty-first-century innovations.…
Radford, Curt L.
Advances in technology have significantly influenced educational delivery options, particularly in the area of American Sign Language (ASL) instruction. As a result, ASL online courses are currently being explored in higher education. The review of literature remains relatively unexplored regarding the effectiveness of learning ASL online. In…
In this paper the results of an investigation of word order in Russian Sign Language (RSL) are presented. A small corpus of narratives based on comic strips by nine native signers was analyzed and a picture-description experiment (based on Volterra et al. 1984) was conducted with six native signers. The results are the following: the most frequent…
This article gives a first overview of the sign language situation in Mali and its capital, Bamako, located in the West African Sahel. Mali is a highly multilingual country with a significant incidence of deafness, for which meningitis appears to be the main cause, coupled with limited access to adequate health care. In comparison to neighboring…
Nicodemus, Brenda; Emmorey, Karen
Spoken language (unimodal) interpreters often prefer to interpret from their non-dominant language (L2) into their native language (L1). Anecdotally, signed language (bimodal) interpreters express the opposite bias, preferring to interpret from L1 (spoken language) into L2 (signed language). We conducted a large survey study ("N" =…
Is the right to sign language only the right to a minority language? Holding a capability (not a disability) approach, and building on the psycholinguistic literature on sign language acquisition, I make the point that this right is of a stronger nature, since only sign languages can guarantee that each deaf child will properly develop the…
Poetry in a sign language can make use of literary devices just as poetry in a spoken language can. The study of literary expression in sign languages has increased over the last twenty years and for South African Sign Language (SASL) such literary texts have also become more available. This article gives a brief overview ...
In the communication of deaf people among themselves and with hearing people, there are three basic modes of interaction: gesture, finger signs, and writing. The gesture is a conventionally agreed manner of communication with the help of the hands, accompanied by facial and body mimicry. Gestures and movements pre-exist speech; their original purpose was to mark something and, later, to emphasize spoken expression. Stokoe was the first linguist to realize that signs are not unanalyzable wholes. He analyzed signs into meaningless parts that he called "cheremes", which many linguists today call phonemes. He created three main chereme categories: hand position, location, and movement. Sign languages, like spoken languages, have their origins in the distant past. They developed in parallel with spoken language and underwent many historical changes. Therefore, today they are not a replacement for the spoken language but languages in their own right, in the real sense of the word. Although the structure of the English language used in the USA and in Great Britain is the same, the respective sign languages, ASL and BSL, are different.
Sign language plays a great role as a communication medium for people with hearing difficulties. In developed countries, systems have been built to overcome problems in communicating with deaf people, which encouraged us to develop such a system for Bosnian sign language, for which a need exists. The work uses digital image processing methods to build a system that trains a multilayer neural network with the backpropagation algorithm. Images are processed by feature extraction methods, and the data set was created using a masking method. Training uses cross-validation for better performance; an accuracy of 84% was achieved.
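The pipeline in the abstract above (image-derived feature vectors feeding a multilayer network trained by backpropagation) can be sketched as follows. The data here is synthetic and the architecture is an assumption; the real system's features, layer sizes, and masking step are not specified in the abstract.

```python
# Minimal sketch of a multilayer network trained with backpropagation
# on feature vectors, as in the sign-classification pipeline described
# above. Synthetic data; layer sizes and features are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "feature vectors" standing in for masked-image features,
# with a binary sign-class label.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

W1 = rng.normal(scale=0.5, size=(8, 16)); b1 = np.zeros(16)   # hidden layer
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)    # output layer

lr = 1.0
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)       # backpropagated hidden delta
    W2 -= lr * (h.T @ d_out) / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_h) / len(X);   b1 -= lr * d_h.mean(axis=0)

acc = float(((out > 0.5) == (y > 0.5)).mean())
print(f"training accuracy: {acc:.2f}")
```

In the real system one would hold out folds for the cross-validation the abstract mentions; here the toy network is simply fit and scored on the same data to keep the sketch short.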
Productivity, the hallmark of linguistic competence, is typically attributed to algebraic rules that support broad generalizations. Past research on spoken language has documented such generalizations in both adults and infants. But whether algebraic rules form part of the linguistic competence of signers remains unknown. To address this question, here we gauge the generalization afforded by American Sign Language (ASL). As a case study, we examine reduplication (X→XX), a rule that, inter alia, generates ASL nouns from verbs. If signers encode this rule, then they should freely extend it to novel syllables, including ones with features that are unattested in ASL. And since reduplicated disyllables are preferred in ASL, such a rule should favor novel reduplicated signs. Novel reduplicated signs should thus be preferred to nonreduplicative controls (in rating), and consequently, such stimuli should also be harder to classify as nonsigns (in the lexical decision task). The results of four experiments support this prediction. These findings suggest that the phonological knowledge of signers includes powerful algebraic rules. The convergence between these conclusions and previous evidence for phonological rules in spoken language suggests that the architecture of the phonological mind is partly amodal.
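The rule X→XX discussed above is algebraic precisely because it copies a variable rather than listing attested forms, so it applies to any syllable at all, including novel ones. A toy sketch (the feature labels are invented for illustration):

```python
# Toy sketch of the algebraic reduplication rule X → XX: because the
# rule operates on a variable X, it generalizes to any syllable, even
# one with features unattested in the lexicon. Feature labels invented.
def reduplicate(syllable):
    """Apply X → XX: copy the syllable to form a disyllable."""
    return (syllable, syllable)

def is_reduplicated(form):
    """A disyllable is reduplicated iff both halves are identical."""
    return len(form) == 2 and form[0] == form[1]

# A novel syllable carrying a feature unattested in the lexicon still
# yields a well-formed reduplicated disyllable.
novel = frozenset({"handshape:unattested", "movement:circle"})
print(is_reduplicated(reduplicate(novel)))                      # True
print(is_reduplicated((novel, frozenset({"movement:arc"}))))    # False
```

The point of the contrast is that no stored list of forms is consulted: the rule's output is defined for every input, which is what "free extension to novel syllables" means in the abstract.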
Kocab, Annemarie; Senghas, Ann; Snedeker, Jesse
Understanding what uniquely human properties account for the creation and transmission of language has been a central goal of cognitive science. Recently, the study of emerging sign languages, such as Nicaraguan Sign Language (NSL), has offered the opportunity to better understand how languages are created and the roles of the individual learner and the community of users. Here, we examined the emergence of two types of temporal language in NSL, comparing the linguistic devices for conveying temporal information among three sequential age cohorts of signers. Experiment 1 showed that while all three cohorts of signers could communicate about linearly ordered discrete events, only the second and third generations of signers successfully communicated information about events with more complex temporal structure. Experiment 2 showed that signers could discriminate between the types of temporal events in a nonverbal task. Finally, Experiment 3 investigated the ordinal use of numbers (e.g., first, second) in NSL signers, indicating that one strategy younger signers might have for accurately describing events in time might be to use ordinal numbers to mark each event. While the capacity for representing temporal concepts appears to be present in the human mind from the onset of language creation, the linguistic devices to convey temporality do not appear immediately. Evidently, temporal language emerges over generations of language transmission, as a product of individual minds interacting within a community of users. Copyright © 2016 Elsevier B.V. All rights reserved.
People with speech disabilities communicate in sign language and therefore have trouble in mingling with the able-bodied. There is a need for an interpretation system which could act as a bridge between them and those who do not know their sign language. A functional unobtrusive Indian sign language recognition ...
Barnes, Susan Kubic
Teaching sign language--to deaf or other children with special needs or to hearing children with hard-of-hearing family members--is not new. Teaching sign language to typically developing children has become increasingly popular since the publication of "Baby Signs"[R] (Goodwyn & Acredolo, 1996), now in its third edition. Attention to signing with…
There is a current need for reliable and valid test instruments in different countries in order to monitor deaf children's sign language acquisition. However, very few tests are commercially available that offer strong evidence for their psychometric properties. A German Sign Language (DGS) test focusing on linguistic structures that are acquired in preschool- and school-aged children (4-8 years old) is urgently needed. Using the British Sign Language Receptive Skills Test, that has been standardized and has sound psychometric properties, as a template for adaptation thus provides a starting point for tests of a sign language that is less documented, such as DGS. This article makes a novel contribution to the field by examining linguistic, cultural, and methodological issues in the process of adapting a test from the source language to the target language. The adapted DGS test has sound psychometric properties and provides the basis for revision prior to standardization. © The Author 2011. Published by Oxford University Press. All rights reserved.
Pfau, R.; Steinbach, M.; Herrmann, A.
Since natural languages exist in two different modalities - the visual-gestural modality of sign languages and the auditory-oral modality of spoken languages - it is obvious that all fields of research in modern linguistics will benefit from research on sign languages. Although previous studies have
Shield, Aaron; Cooley, Frances; Meier, Richard P.
Purpose: We present the first study of echolalia in deaf, signing children with autism spectrum disorder (ASD). We investigate the nature and prevalence of sign echolalia in native-signing children with ASD, the relationship between sign echolalia and receptive language, and potential modality differences between sign and speech. Method: Seventeen…
… if and how manual question words are used. DSL uses distinct nonmanual signals to mark content and polar questions, and my findings reveal a rich system of both manual and non-manual markers. The manual question words in DSL form a large paradigm of at least six items. The syntactic position of the manual … question words can vary, though they usually appear sentence-finally. The nonmanual signals include specific facial expressions, head posture and mouthing. Some of the features are shared with other sign languages. Furthermore, although it has not been investigated in detail, it seems that the nonmanual …
Joy, Jestin; Balakrishnan, Kannan
Sign language, which is a medium of communication for deaf people, uses manual communication and body language to convey meaning, as opposed to sound. This paper presents a prototype Malayalam text-to-sign-language translation system. The proposed system takes Malayalam text as input and generates the corresponding sign language. Output animation is rendered using a computer-generated model. This system will help to disseminate information to deaf people in public utility places like ra...
Hauser, Peter C.; Paludneviciene, Raylene; Riddle, Wanda; Kurz, Kim B.; Emmorey, Karen; Contreras, Jessica
The American Sign Language Comprehension Test (ASL-CT) is a 30-item multiple-choice test that measures ASL receptive skills and is administered through a website. This article describes the development and psychometric properties of the test based on a sample of 80 college students including deaf native signers, hearing native signers, deaf…
Shaw, Emily; Delaporte, Yves
Examinations of the etymology of American Sign Language have typically involved superficial analyses of signs as they exist over a short period of time. While it is widely known that ASL is related to French Sign Language, there has yet to be a comprehensive study of this historic relationship between their lexicons. This article presents…
Ethiopian Sign Language utilizes a fingerspelling system that represents Amharic orthography. Just as each character of the Amharic abugida encodes a consonant-vowel sound pair, each sign in the Ethiopian Sign Language fingerspelling system uses handshape to encode a base consonant, as well as a combination of timing, placement, and orientation to…
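The abugida-style encoding described above, where handshape carries the base consonant and timing, placement, and orientation carry the vowel order, can be sketched as a simple lookup. The mappings below are invented placeholders for illustration, not the actual Ethiopian Sign Language forms.

```python
# Hypothetical sketch of an abugida-style fingerspelling encoding:
# a base handshape encodes the consonant, and a modifier (standing in
# for timing/placement/orientation) encodes the vowel order. All
# mappings here are invented placeholders, not the real ESL forms.
CONSONANT_HANDSHAPE = {"h": "HS-1", "l": "HS-2", "m": "HS-3"}
VOWEL_MODIFIER = {
    "ä": "neutral", "u": "twist-out", "i": "downward",
    "a": "leftward", "e": "inward", "ə": "upward", "o": "arc",
}

def fingerspell(syllable):
    """Map a consonant-vowel syllable to a (handshape, modifier) pair."""
    consonant, vowel = syllable[0], syllable[1:]
    return CONSONANT_HANDSHAPE[consonant], VOWEL_MODIFIER[vowel]

print(fingerspell("lu"))  # ('HS-2', 'twist-out')
```

The structural parallel to the Amharic script is the point: just as each abugida character fuses a consonant letter with a vowel diacritic, each fingerspelled sign fuses a handshape with a movement modifier.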
Stamp, Rose; Schembri, Adam; Evans, Bronwen G.; Cormier, Kearsy
Short-term linguistic accommodation has been observed in a number of spoken language studies. The first of its kind in sign language research, this study aims to investigate the effects of regional varieties in contact and lexical accommodation in British Sign Language (BSL). Twenty-five participants were recruited from Belfast, Glasgow,…
de Quadros, Ronice Muller
This article explains the consolidation of Brazilian Sign Language in Brazil through a linguistic plan that arose from the Brazilian Sign Language Federal Law 10.436 of April 2002 and the subsequent Federal Decree 5695 of December 2005. Two concrete facts that emerged from this existing language plan are discussed: the implementation of bilingual…
Bochner, Joseph H.; Samar, Vincent J.; Hauser, Peter C.; Garrison, Wayne M.; Searls, J. Matt; Sanders, Cynthia A.
American Sign Language (ASL) is one of the most commonly taught languages in North America. Yet, few assessment instruments for ASL proficiency have been developed, none of which have adequately demonstrated validity. We propose that the American Sign Language Discrimination Test (ASL-DT), a recently developed measure of learners' ability to…
a case for the existence of a Kiswahili sign language since KSL is a natural language with its own autonomous grammar distinct from that of any spoken language. In this paper, we shall argue that the Kiswahili mouthed KSL signs are an outcome of contact between KSL – Kiswahili bilinguals and their hearing Kiswahili ...
The Online Dictionary of New Zealand Sign Language (ODNZSL), launched in 2011, is an example of a contemporary sign language dictionary that leverages the 21st-century advantages of a digital medium and an existing body of descriptive research on the language, including a small electronic corpus of New Zealand ...
Ortega, G.; Morgan, G.
The present study implemented a sign-repetition task at two points in time to hearing adult learners of British Sign Language and explored how each phonological parameter, sign complexity, and iconicity affected sign production over an 11-week (22-hour) instructional period. The results show that
Marshall, Chloë; Mason, Kathryn; Rowley, Katherine; Herman, Rosalind; Atkinson, Joanna; Woll, Bencie; Morgan, Gary
Children with specific language impairment (SLI) perform poorly on sentence repetition tasks across different spoken languages, but until now, this methodology has not been investigated in children who have SLI in a signed language. Users of a natural sign language encode different sentence meanings through their choice of signs and by altering…
Carr, Edward G.
Three questions regarding the use of sign language as an alternative communication system for nonverbal autistic children are examined. Data on effects on speech, the upper limits of sign acquisition, and effects on adaptive function are discussed. (Author/CL)
Humphries, Tom; Kushalnagar, Poorna; Mathur, Gaurav; Napoli, Donna Jo; Padden, Carol; Rathmann, Christian; Smith, Scott
There is no evidence that learning a natural human language is cognitively harmful to children. To the contrary, multilingualism has been argued to be beneficial to all. Nevertheless, many professionals advise the parents of deaf children that their children should not learn a sign language during their early years, despite strong evidence across many research disciplines that sign languages are natural human languages. Their recommendations are based on a combination of misperceptions about (1) the difficulty of learning a sign language, (2) the effects of bilingualism, and particularly bimodalism, (3) the bona fide status of languages that lack a written form, (4) the effects of a sign language on acquiring literacy, (5) the ability of technologies to address the needs of deaf children and (6) the effects that use of a sign language will have on family cohesion. We expose these misperceptions as based in prejudice and urge institutions involved in educating professionals concerned with the healthcare, raising and educating of deaf children to include appropriate information about first language acquisition and the importance of a sign language for deaf children. We further urge such professionals to advise the parents of deaf children properly, which means to strongly advise the introduction of a sign language as soon as hearing loss is detected. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Since time immemorial, philosophers and scientists have been searching for a "machine code" of the so-called Mentalese language capable of processing information at the pre-verbal, pre-expressive level. In this paper I suggest that human languages are only secondary to the system of primitive extra-linguistic signs which are hardwired in humans and serve as tools for understanding selves and others, and for creating meanings for the multiplicity of experiences. The combinatorial semantics of Mentalese may find its unorthodox expression in the semiotic system of Tarot images, the latter serving as the "keys" to the encoded proto-mental information. The paper uses some works in systems theory by Erich Jantsch and Ervin Laszlo and relates Tarot images to the archetypes of the field of the collective unconscious posited by Carl Jung. Our subconscious beliefs, hopes, fears and desires, of which we may be unaware at the subjective level, do have an objective compositional structure that may be laid down in front of our eyes in the format of pictorial semiotics representing the universe of affects, thoughts, and actions. Constructing imaginative narratives based on the expressive "language" of Tarot images enables us to anticipate possible consequences and consider a range of future options. The thesis advanced in this paper is also supported by the concept of the informational universe in contemporary cosmology.
Johnson, William L
Sign language interpreters are at increased risk for musculoskeletal disorders. This study used content analysis to obtain detailed information about these disorders from the interpreters' point of view...
Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung
Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.
(for example, between a deaf person who can sign and an able person or a person with a different disability who cannot sign). METHODOLOGY A signing avatar is set up to work together with a chatterbot. The chatterbot is a natural language dialogue interface... recognition computational model). * Contemporary sign language dictionaries only work one way: looking up a word and finding the associated gesture. The proposed technology would allow the sketching of a gesture to find all related signs that closest...
Johnston, Trevor; van Roekel, Jane; Schembri, Adam
This study investigates the conventionalization of mouth actions in Australian Sign Language. Signed languages were once thought of as simply manual languages because the hands produce the signs which, individually and in groups, are the symbolic units most easily equated with the words, phrases and clauses of spoken languages. However, it has long been acknowledged that non-manual activity, such as movements of the body, head and face, plays a very important role. In this context, mouth actions that occur while communicating in signed languages have posed a number of questions for linguists: are the silent mouthings of spoken language words simply borrowings from the respective majority community spoken language(s)? Are those mouth actions that are not silent mouthings of spoken words conventionalized linguistic units proper to each signed language, culturally linked semi-conventional gestural units shared by signers with members of the majority speaking community, or even gestures and expressions common to all humans? We use a corpus-based approach to gather evidence of the extent of the use of mouth actions in naturalistic Australian Sign Language, making comparisons with other signed languages where data are available, and of the form/meaning pairings that these mouth actions instantiate.
The aim of this article is to describe a negative prefix, NEG-, in Polish Sign Language (PJM) which appears to be indigenous to the language. This is of interest given the relative rarity of prefixes in sign languages. Prefixed PJM signs were analyzed on the basis of both a corpus of texts signed by 15 deaf PJM users who are either native or near-native signers, and material including a specified range of prefixed signs as demonstrated by native signers in dictionary form (i.e. signs produced in isolation, not as part of phrases or sentences). In order to define the morphological rules behind prefixation on both the phonological and morphological levels, native PJM users were consulted for their expertise. The research results can enrich models for describing processes of grammaticalization in the context of the visual-gestural modality that forms the basis for sign language structure. PMID:26619066
McKee, Rachel Locker; Manning, Victoria
Status planning through legislation made New Zealand Sign Language (NZSL) an official language in 2006. But this strong symbolic action did not create resources or mechanisms to further the aims of the act. In this article we discuss the extent to which legal recognition and ensuing language-planning activities by state and community have affected…
Geers, Ann E; Mitchell, Christine M; Warner-Czyz, Andrea; Wang, Nae-Yuh; Eisenberg, Laurie S
Most children with hearing loss who receive cochlear implants (CI) learn spoken language, and parents must choose early on whether to use sign language to accompany speech at home. We address whether parents' use of sign language before and after CI positively influences auditory-only speech recognition, speech intelligibility, spoken language, and reading outcomes. Three groups of children with CIs from a nationwide database who differed in the duration of early sign language exposure provided in their homes were compared in their progress through elementary grades. The groups did not differ in demographic, auditory, or linguistic characteristics before implantation. Children without early sign language exposure achieved better speech recognition skills over the first 3 years postimplant and exhibited a statistically significant advantage in spoken language and reading near the end of elementary grades over children exposed to sign language. Over 70% of children without sign language exposure achieved age-appropriate spoken language compared with only 39% of those exposed for 3 or more years. Early speech perception predicted speech intelligibility in middle elementary grades. Children without sign language exposure produced speech that was more intelligible (mean = 70%) than those exposed to sign language (mean = 51%). This study provides the most compelling support yet available in CI literature for the benefits of spoken language input for promoting verbal development in children implanted by 3 years of age. Contrary to earlier published assertions, there was no advantage to parents' use of sign language either before or after CI. Copyright © 2017 by the American Academy of Pediatrics.
Ortega, Gerardo; Morgan, Gary
There is growing interest in learners' cognitive capacities to process a second language (L2) at first exposure to the target language. Evidence suggests that L2 learners are capable of processing novel words by exploiting phonological information from their first language (L1). Hearing adult learners of a sign language, however, cannot fall back…
Wilbur, Ronnie; Kak, Avinash C.
Development of automatic recognition systems for American Sign Language (ASL) needs a comprehensive database that provides a range of signed material under controlled and less-controlled lighting conditions. The database we created contains (a) handshapes in isolation and in single signs, (b) the American fingerspelling alphabet, (c) numbers, (d) movement in single signs, and (e) examples of short discourse narratives for testing sign recognition in connected linguistic contexts. All of th...
Moreno, Antonio; Limousin, Fanny; Dehaene, Stanislas; Pallier, Christophe
During sentence processing, areas of the left superior temporal sulcus, inferior frontal gyrus and left basal ganglia exhibit a systematic increase in brain activity as a function of constituent size, suggesting their involvement in the computation of syntactic and semantic structures. Here, we asked whether these areas play a universal role in language and therefore contribute to the processing of non-spoken sign language. Congenitally deaf adults who acquired French sign language as a first language and written French as a second language were scanned while watching sequences of signs in which the size of syntactic constituents was manipulated. An effect of constituent size was found in the basal ganglia, including the head of the caudate and the putamen. A smaller effect was also detected in temporal and frontal regions previously shown to be sensitive to constituent size in written language in hearing French subjects (Pallier et al., 2011). When the deaf participants read sentences versus word lists, the same network of language areas was observed. While reading and sign language processing yielded identical effects of linguistic structure in the basal ganglia, the effect of structure was stronger in all cortical language areas for written language relative to sign language. Furthermore, cortical activity was partially modulated by age of acquisition and reading proficiency. Our results stress the important role of the basal ganglia, within the language network, in the representation of the constituent structure of language, regardless of the input modality. Copyright © 2017 Elsevier Inc. All rights reserved.
Thompson, Robin L; Vinson, David P; Woll, Bencie; Vigliocco, Gabriella
An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.
Ferjan Ramirez, Naja; Leonard, Matthew K; Davenport, Tristan S; Torres, Christina; Halgren, Eric; Mayberry, Rachel I
One key question in neurolinguistics is the extent to which the neural processing system for language requires linguistic experience during early life to develop fully. We conducted a longitudinal anatomically constrained magnetoencephalography (aMEG) analysis of lexico-semantic processing in 2 deaf adolescents who had no sustained language input until 14 years of age, when they became fully immersed in American Sign Language. After 2 to 3 years of language, the adolescents' neural responses to signed words were highly atypical, localizing mainly to right dorsal frontoparietal regions and often responding more strongly to semantically primed words (Ferjan Ramirez N, Leonard MK, Torres C, Hatrak M, Halgren E, Mayberry RI. 2014. Neural language processing in adolescent first-language learners. Cereb Cortex. 24 (10): 2772-2783). Here, we show that after an additional 15 months of language experience, the adolescents' neural responses remained atypical in terms of polarity. While their responses to less familiar signed words still showed atypical localization patterns, the localization of responses to highly familiar signed words became more concentrated in the left perisylvian language network. Our findings suggest that the timing of language experience affects the organization of neural language processing; however, even in adolescence, language representation in the human brain continues to evolve with experience. © The Author 2014. Published by Oxford University Press. All rights reserved.
This article explores the role of the Deaf child as peer educator. In schools where sign languages were banned, Deaf children became the educators of their Deaf peers in a number of contexts worldwide. This paper analyses how this peer education of sign language worked in context by drawing on two examples from boarding schools for the deaf in…
De Clerck, Goedele A. M.
This article has been excerpted from "Introduction: Sign Language, Sustainable Development, and Equal Opportunities" (De Clerck) in "Sign Language, Sustainable Development, and Equal Opportunities: Envisioning the Future for Deaf Students" (G. A. M. De Clerck & P. V. Paul (Eds.) 2016). The idea of exploring various…
Tree, Erich Fox
This article examines sign languages that belong to a complex of indigenous sign languages in Mesoamerica that K'iche'an Maya people of Guatemala refer to collectively as Meemul Tziij. It explains the relationship between the Meemul Tziij variety of the Yukatek Maya village of Chican (state of Yucatan, Mexico) and the hitherto undescribed Meemul…
This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures.
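The final stage of the modular pipeline described in this abstract, a nearest-neighbour classifier over feature vectors produced by upstream recognition networks, can be sketched in a few lines. This is a minimal illustration of 1-nearest-neighbour classification, not SLARTI's actual implementation; the feature vectors and handshape labels below are hypothetical stand-ins.

```python
import math

def nearest_neighbour(feature_vec, exemplars):
    """Return the label of the stored exemplar closest to feature_vec
    in Euclidean distance. `exemplars` is a list of (label, vector)
    pairs, standing in for outputs of feature-recognition networks."""
    best_label, best_dist = None, float("inf")
    for label, vec in exemplars:
        dist = math.dist(feature_vec, vec)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical handshape feature vectors (not real Auslan data).
exemplars = [
    ("flat_hand", [1.0, 0.0, 0.0]),
    ("fist",      [0.0, 1.0, 0.0]),
    ("point",     [0.0, 0.0, 1.0]),
]

print(nearest_neighbour([0.9, 0.1, 0.0], exemplars))  # → flat_hand
```

A nearest-neighbour back end of this kind needs no training beyond storing exemplars, which is one reason such architectures pair it with learned feature extractors.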
This comparative study on the Sign Language of the Netherlands (NGT) and Italian Sign Language (LIS) addresses and discusses a wide range of grammatical aspects: the position of adjectives, numerals and demonstratives; the linear ordering of selected aspectual markers and modals; …
Instructors in 5 American Sign Language--English Interpreter Programs and 4 Deaf Studies Programs in Canada were interviewed and asked to discuss their experiences as educators. Within a qualitative research paradigm, their comments were grouped into a number of categories tied to the social construction of American Sign Language--English…
Kimmelman, V.; Vink, L.
Several sign languages of the world utilize a construction that consists of a question followed by an answer, both of which are produced by the same signer. For American Sign Language, this construction has been analyzed as a discourse-level rhetorical question construction (Hoza et al. 1997), as a
Rinaldi, Pasquale; Caselli, Maria Cristina; Di Renzo, Alessio; Gulli, Tiziana; Volterra, Virginia
Lexical comprehension and production is directly evaluated for the first time in deaf signing children below the age of 3 years. A Picture Naming Task was administered to 8 deaf signing toddlers (aged 2-3 years) who were exposed to Sign Language since birth. Results were compared with data of hearing speaking controls. In both deaf and hearing…
Carr, Edward G.
The acquisition of expressive sign language was studied in four autistic children (ages 10-15 years). Ss were taught expressive sign labels for common objects using a training procedure consisting of prompting, fading, and stimulus rotation. The signing of three of the Ss was found to be controlled solely by the visual cues associated with the…
Akmese, Pelin Pistav
Hearing impairment limits one's ability to communicate, as it affects all areas of development, particularly speech. One of the methods the hearing impaired use to communicate is sign language. This study, a descriptive study, intends to examine the opinions of individuals who had enrolled in a sign language certification program by using…
Tyrone, Martha E; Mauk, Claude E
Because the primary articulators for sign languages are the hands, sign phonology and phonetics have focused mainly on them and treated other articulators as passive targets. However, there is abundant research on the role of nonmanual articulators in sign language grammar and prosody. The current study examines how hand and head/body movements are coordinated to realize phonetic targets. Kinematic data were collected from 5 deaf American Sign Language (ASL) signers to allow the analysis of movements of the hands, head and body during signing. In particular, we examine how the chin, forehead and torso move during the production of ASL signs at those three phonological locations. Our findings suggest that for signs with a lexical movement toward the head, the forehead and chin move to facilitate convergence with the hand. By comparison, the torso does not move to facilitate convergence with the hand for signs located at the torso. These results imply that the nonmanual articulators serve a phonetic as well as a grammatical or prosodic role in sign languages. Future models of sign phonetics and phonology should take into consideration the movements of the nonmanual articulators in the realization of signs. © 2016 S. Karger AG, Basel.
Peng, Fred C. C., Ed.
A collection of research materials on sign language and primatology is presented here. The essays attempt to show that: sign language is a legitimate language that can be learned not only by humans but by nonhuman primates as well, and nonhuman primates have the capability to acquire a human language using a different mode. The following…
Williams, Joshua T.; Newman, Sharlene D.
The roles of visual sonority and handshape markedness in sign language acquisition and production were investigated. In Experiment 1, learners were taught sign-nonobject correspondences that varied in sign movement sonority and handshape markedness. Results from a sign-picture matching task revealed that high sonority signs were more accurately…
Many Australian Aboriginal people use a sign language ("hand talk") that mirrors their local spoken language and is used both in culturally appropriate settings when speech is taboo or counterindicated and for community communication. The characteristics of these languages are described, and early European settlers' reports of deaf…
Originally used by specialists concerned with Slavic languages, in which aspect plays a key role, the concept of aspect has been shown to have significant implications for many other languages. In this article, the use of aspect and of aspectual markers in American Sign Language (ASL) will be explored. The argument to be ...
Pfau, R.; Aboh, E.O.
In investigations of sign language grammar - phonology, morphology, and syntax - the impact of language modality on grammar is a recurrent issue. The term 'modality,' as used in this context, refers to the distinction between languages that are expressed and perceived in the oral-auditive modality
Sprenger, Kristen; Mathur, Gaurav
This article focuses on the syntactic level of the grammar of Saudi Arabian Sign Language by exploring some word orders that occur in personal narratives in the language. Word order is one of the main ways in which languages indicate the main syntactic roles of subjects, verbs, and objects; others are verbal agreement and nominal case morphology.…
Hanson-Smith, Elizabeth, Ed.; Rilling, Sarah, Ed.
While posing important questions about how learning proceeds with new technologies, this volume demonstrates how teachers captivate the imagination of learners, from schoolchildren to postgraduates, by providing real-world purposes for language. The authors are from educational institutions in many regions of the world, and describe technology use…
It is also claimed that, with modern technology, the tendency to use computer systems in education has increased and has affected the way of learning … in EFL classes. It is believed that computers and information technology were introduced in language laboratories to improve the learners' ...
Sun, Chao; Zhang, Tianzhu; Bao, Bing-Kun; Xu, Changsheng; Mei, Tao
Sign language recognition is a growing research area in the field of computer vision. A challenge within it is to model various signs, varying with time resolution, visual manual appearance, and so on. In this paper, we propose a discriminative exemplar coding (DEC) approach, as well as utilizing Kinect sensor, to model various signs. The proposed DEC method can be summarized as three steps. First, a quantity of class-specific candidate exemplars are learned from sign language videos in each sign category by considering their discrimination. Then, every video of all signs is described as a set of similarities between frames within it and the candidate exemplars. Instead of simply using a heuristic distance measure, the similarities are decided by a set of exemplar-based classifiers through the multiple instance learning, in which a positive (or negative) video is treated as a positive (or negative) bag and those frames similar to the given exemplar in Euclidean space as instances. Finally, we formulate the selection of the most discriminative exemplars into a framework and simultaneously produce a sign video classifier to recognize sign. To evaluate our method, we collect an American sign language dataset, which includes approximately 2000 phrases, while each phrase is captured by Kinect sensor with color, depth, and skeleton information. Experimental results on our dataset demonstrate the feasibility and effectiveness of the proposed approach for sign language recognition.
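The core idea of the exemplar coding step described above, representing a video as a vector of similarities between its frames and a set of candidate exemplars, can be sketched as follows. This is only an illustration under simplifying assumptions: the paper learns exemplar-based classifiers via multiple instance learning, whereas the sketch below uses a plain Gaussian similarity over Euclidean distance, and the 2-D "frames" are toy data, not Kinect features.

```python
import math

def exemplar_code(video_frames, exemplars):
    """Describe a video as a vector of similarities to candidate exemplars.

    For each exemplar, take the best (maximum) similarity over all frames,
    mirroring the multiple-instance view in which a video is a bag of
    frames and one matching frame suffices."""
    code = []
    for ex in exemplars:
        best = max(math.exp(-math.dist(frame, ex) ** 2)
                   for frame in video_frames)
        code.append(best)
    return code

# Toy 2-D frames and exemplars (hypothetical, not real sign data).
video = [[0.0, 0.0], [1.0, 1.0]]
exemplars = [[0.0, 0.0], [3.0, 3.0]]
print(exemplar_code(video, exemplars))
```

The resulting fixed-length code can then be fed to any standard classifier, which is what makes the exemplar representation convenient for videos of varying length.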
Baus, Cristina; Gutiérrez, Eva; Carreiras, Manuel
The aim of the present study was to investigate the functional role of syllables in sign language and how the different phonological combinations influence sign production. Moreover, the influence of age of acquisition was evaluated. Deaf signers (native and non-native) of Catalan Signed Language (LSC) were asked in a picture-sign interference task to sign picture names while ignoring distractor-signs with which they shared two phonological parameters (out of three of the main sign parameters: Location, Movement, and Handshape). The results revealed a different impact of the three phonological combinations. While no effect was observed for the phonological combination Handshape-Location, the combination Handshape-Movement slowed down signing latencies, but only in the non-native group. A facilitatory effect was observed for both groups when pictures and distractors shared Location-Movement. Importantly, linguistic models have considered this phonological combination to be a privileged unit in the composition of signs, as syllables are in spoken languages. Thus, our results support the functional role of syllable units during phonological articulation in sign language production.
Gameiro, João Manuel Ferreira
Sign language is the form of communication used by Deaf people, which, in most cases, has been learned since childhood. The problem arises when a non-Deaf person tries to communicate with a Deaf person; for example, when non-Deaf parents try to communicate with their Deaf child. In most cases, this situation tends to happen when the parents did not have time to properly learn sign language. This dissertation proposes the teaching of sign language through the usage of serious games. Currently, similar soluti...
Drawing upon ethnographic research conducted in urban locations in India, I consider the relationship between stigma and contagion in the context of deaf people's desires for and practices of communication in Indian Sign Language. If sign language can be considered or represented as a virus, and if it spreads between and among deaf people upon exposure, what might cure differentially look like, in a time when cochlear implantation and oral-based early intervention are increasingly becoming normalized? Considering the impact of stigma on multiple forms of relationality, I argue that sign language's viral potentiality lies in its ability to transform and create new relationships and worlds.
Despite being minority languages like many others, sign languages have traditionally remained absent from the agendas of policy makers and language planning and policies. In the past two decades, though, this situation has started to change at different paces and to different degrees in several countries. In this article, the author describes the…
This article offers the first overview of the recent emergence of Tibetan Sign Language (TibSL) in Lhasa, capital of the Tibet Autonomous Region (TAR), China. Drawing on short anthropological fieldwork, in 2007 and 2014, with people and organisations involved in the formalisation and promotion of TibSL, the author discusses her findings within the nine-fold UNESCO model for assessing linguistic vitality and endangerment. She follows the adaptation of this model to assess signed languages by the Institute of Sign Languages and Deaf Studies (iSLanDS) at the University of Central Lancashire. The appraisal shows that TibSL appears to be between "severely" and "definitely" endangered, adding to the extant studies on the widespread phenomenon of sign language endangerment. Possible future influences and developments regarding the vitality and use of TibSL in Central Tibet and across the Tibetan plateau are then discussed and certain additions, not considered within the existing assessment model, suggested. In concluding, the article places the situation of TibSL within the wider circumstances of minority (sign) languages in China, Chinese Sign Language (CSL), and the post-2008 movement to promote and use "pure Tibetan language".
van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan
The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of the Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives…
Ruben, Robert J
The development of conceptualization of a biological basis of language during the 20th century has come about, in part, through the appreciation of the central nervous system's ability to utilize varied sensory inputs, and particularly vision, to develop language. Sign language has been a part of the linguistic experience from prehistory to the present day. Data suggest that human language may have originated as a visual language and became primarily auditory with the later development of our voice/speech tract. Sign language may be categorized into two types. The first is used by individuals who have auditory/oral language and the signs are used for special situations, such as communication in a monastery in which there is a vow of silence. The second is used by those who do not have access to auditory/oral language, namely the deaf. The history of the two forms of sign language and the development of the concept of the biological basis of language are reviewed from the fourth century BC to the present day. Sign languages of the deaf have been recognized since at least the fourth century BC. The codification of a monastic sign language occurred in the seventh to eighth centuries AD. Probable synergy between the two forms of sign language occurred in the 16th century. Among other developments, the Abbé de l'Épée introduced, in the 18th century, an oral syntax, French, into a sign language based upon indigenous signs of the deaf and newly created signs. During the 19th century, the concept of a "critical" period for the acquisition of language developed; this was an important stimulus for the exploration of the biological basis of language. The introduction of techniques, e.g. evoked potentials and functional MRI, during the 20th century allowed study of the brain functions associated with language.
The article explores sign language interpreter training, testing, and accreditation in three major English-speaking countries, Australia, the United Kingdom, and the United States, by providing an overview of the training and assessment of sign language interpreters in each country. The article highlights the reasons these countries can be considered leaders in the profession and compares similarities and differences among them. Key similarities include the provision of university interpreter training, approval for training courses, license "maintenance" systems, and educational interpreting guidelines. Differences are noted in relation to training prerequisites, types and levels of accreditation, administration of the testing system, and accreditation of deaf interpreters. The article concludes with predictions about future developments related to the establishment of the World Association of Sign Language Interpreters and the development of sign language interpreting research as a research discipline.
Hollman, Liivi; Sutrop, Urmas
The article is written in the tradition of Brent Berlin and Paul Kay's theory of basic color terms. According to this theory there is a universal inventory of eleven basic color categories from which the basic color terms of any given language are always drawn. The number of basic color terms varies from 2 to 11 and in a language having a fully…
Shield, Aaron; Meier, Richard P.; Tager-Flusberg, Helen
We report the first study on pronoun use by an under-studied research population, children with autism spectrum disorder (ASD) exposed to American Sign Language from birth by their deaf parents. Personal pronouns cause difficulties for hearing children with ASD, who sometimes reverse or avoid them. Unlike speech pronouns, sign pronouns are…
Orfanidou, Eleni; McQueen, James M; Adam, Robert; Morgan, Gary
This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were preceded by nonsense signs which were produced in either the target location or another location (with a small or large transition). Half of the transitions were within the same major body area (e.g., head) and half were across body areas (e.g., chest to hand). Deaf adult BSL users (a group of natives and early learners, and a group of late learners) spotted target signs best when there was a minimal transition and worst when there was a large transition. When location changes were present, both groups performed better when transitions were to a different body area than when they were within the same area. These findings suggest that transitions do not provide explicit sign-boundary cues in a modality-specific fashion. Instead, we argue that smaller transitions help recognition in a modality-general way by limiting lexical search to signs within location neighbourhoods, and that transitions across body areas also aid segmentation in a modality-general way, by providing a phonotactic cue to a sign boundary. We propose that sign segmentation is based on modality-general procedures which are core language-processing mechanisms.
Caselli, Naomi K; Cohen-Goldberg, Ariel M
Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.
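The competition dynamics described in this record can be sketched as a toy spreading-activation network. The node names, weights, and update rule below are illustrative stand-ins, not the parameters of Chen and Mirman's (2012) model or of the paper's architecture.

```python
# Minimal spreading-activation sketch: sub-lexical feature nodes
# (handshape, location) excite lexical sign nodes; lexical nodes
# inhibit one another. All names and parameters are invented.

def step(lex_act, feat_input, connections, excite=0.1, inhibit=0.05, decay=0.02):
    """One update cycle: feature support excites each sign, competitors
    inhibit it, and activation decays toward zero, clamped to [0, 1]."""
    total = sum(lex_act.values())
    new_act = {}
    for sign, feats in connections.items():
        support = sum(feat_input.get(f, 0.0) for f in feats)
        competition = total - lex_act[sign]  # summed rival activation
        a = lex_act[sign] + excite * support - inhibit * competition - decay * lex_act[sign]
        new_act[sign] = max(0.0, min(1.0, a))
    return new_act

# Toy lexicon: one sign shares a location with the target, one a handshape.
connections = {
    "SIGN_A": ["hs:flat", "loc:chin"],
    "SIGN_B": ["hs:fist", "loc:chin"],   # location neighbor of SIGN_A
    "SIGN_C": ["hs:flat", "loc:chest"],  # handshape neighbor of SIGN_A
}
acts = {s: 0.0 for s in connections}
feat_input = {"hs:flat": 1.0, "loc:chin": 1.0}  # input fully matches SIGN_A

for _ in range(30):
    acts = step(acts, feat_input, connections)

winner = max(acts, key=acts.get)
print(winner)  # SIGN_A: it receives support from both input features
```

Neighbors receive partial support from the shared feature while being inhibited by the target, which is the kind of density-dependent competition the record describes.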
Kim, Jonghwa; Wagner, Johannes; Rehm, Matthias
In this paper, we investigate the mutually complementary functionality of an accelerometer (ACC) and electromyogram (EMG) for recognizing seven word-level sign vocabularies in German Sign Language (GSL). Results are discussed for the single channels and for feature-level fusion of the bichannel sensor...
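Feature-level fusion of two sensor channels can be illustrated roughly as follows; the window features (mean, deviation, range) and the sample values are hypothetical, not the paper's actual feature set.

```python
# Hedged sketch of feature-level fusion for two channels (ACC and EMG):
# features are extracted per channel and concatenated into a single
# vector for one downstream classifier. Features here are illustrative.
import statistics

def window_features(samples):
    """Simple per-window features: mean, population std. dev., range."""
    return [statistics.mean(samples),
            statistics.pstdev(samples),
            max(samples) - min(samples)]

acc_window = [0.1, 0.4, 0.9, 0.7, 0.2]   # accelerometer samples (toy)
emg_window = [0.02, 0.5, 0.8, 0.6, 0.1]  # EMG envelope samples (toy)

# Feature-level fusion: one concatenated vector per signing window.
fused = window_features(acc_window) + window_features(emg_window)
print(len(fused))  # 6 features would feed the sign classifier
```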
Young, Lesa; Palmer, Jeffrey Levi; Reynolds, Wanette
This combined paper will focus on the description of two selected lexical patterns in Saudi Arabian Sign Language (SASL): metaphor and metonymy in emotion-related signs (Young) and lexicalization patterns of objects and their derivational roots (Palmer and Reynolds). The over-arcing methodology used by both studies is detailed in Stephen and…
Ritchings, Tim; Khadragi, Ahmed; Saeb, Magdy
A computer-based system for sign language tutoring has been developed using a low-cost data glove and a software application that processes the movement signals for signs in real-time and uses Pattern Matching techniques to decide if a trainee has closely replicated a teacher's recorded movements. The data glove provides 17 movement signals from…
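The abstract does not name the pattern-matching technique, so as one plausible sketch: dynamic time warping (DTW) is a standard way to score how closely a trainee's movement signal replicates a teacher's recording when the two differ in length and timing. The signals and closeness threshold below are invented.

```python
# Hypothetical sketch of scoring one of the glove's movement channels
# against a teacher's recording with dynamic time warping (DTW).

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW alignment cost for 1-D signals."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

teacher = [0.0, 0.2, 0.8, 1.0, 0.5, 0.1]       # one recorded channel (toy)
trainee = [0.0, 0.1, 0.2, 0.7, 1.0, 0.6, 0.1]  # slightly slower attempt

score = dtw_distance(teacher, trainee)
print(round(score, 2))  # → 0.3; a small distance counts as a close replication
```

In a real system a score like this would be computed per channel (the glove provides 17) and thresholded to decide whether the trainee replicated the teacher's sign.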
Stokoe, William C.
"Verbal" and "nonverbal" are confused and confusing terms. Gestural phenomena in semiotic use--gSigns--are called nonverbal but work in three major ways, only the first of which is unrelated to the highly encoded (verbal) activity called language. A gSign may: (1) have a general meaning: "yes," "no," "who…
In this dissertation, I examine the nature of object marking in American Sign Language (ASL). I investigate object marking by means of directionality (the movement of the verb towards a certain location in signing space) and by means of handling classifiers (certain handshapes accompanying the verb). I propose that object marking in ASL is…
Benedicto, E.; Cvejanov, S.; Quer, J.; Quer, J.F.
This paper provides a comparative analysis of the structural properties of serial verb constructions (SVC) in three sign languages: LSA (Lengua de Señas Argentina, Argentinean Sign Language), LSC (Llengua de Signes Catalana, Catalan Sign Language) and ASL (American Sign Language). The paper presents
Malaia, Evie; Wilbur, Ronnie
Using sign language research as an example, we argue that both the cross-linguistic descriptive approach to data, advocated by Evans and Levinson (2009), as well as abstract ('formal') analyses are necessary steps towards the development of "neurolinguistic primitives" for investigating how human languages are instantiated in the brain.
Attitudes are complex and little research in the field of linguistics has focused on language attitudes. This article deals with attitudes toward sign languages and those who use them--attitudes that are influenced by ideological constructions. The article reviews five categories of such constructions and discusses examples in each one.
SASL) are more accessible than written or printed biblical texts for deaf-born South African people who use sign language as their first language. The study made use of the functionalist approach in translation to translate six parts from the Bible into ...
A South African Sign Language Dictionary for Families with Young Deaf Children (SLED 2006) was used with permission ... Figure 1: Syllable structure of a CVC syllable in the word “bed”. In spoken languages .... often than not, there is a societal emphasis on 'fixing' a child's deafness and attempting to teach deaf children to ...
Sadlier, L.; van den Bogaerde, B.; Oyserman, P.; Tsagari, D.; Csepes, I.
This chapter explores the role of the Common European Framework of Reference for Languages (CEFR) in the context of teaching, learning, and more specifically, assessing signed languages. An exploration of various approaches used in selected universities across Europe provides perspectives on how the
Morris, Carla; Schneider, Erin
Following a year of study of Saudi Arabian Sign Language (SASL), we are documenting our findings to provide a grammatical sketch of the language. This paper represents one part of that endeavor and focuses on a description of selected morphemes, both manual and non-manual, that have appeared in the course of data collection. While some of the…
Details the influence of English on British Sign Language (BSL) at the syntactic, morphological, lexical, idiomatic, and phonological levels. Shows how BSL uses loan translations, fingerspellings, and the use of mouth patterns derived from English language spoken words to include elements from English. (Author/VWL)
The Zambian Experience. Vincent M. .... Target Users. The dictionary is primarily targeted at deaf schools and deaf school units but also at the clergy and government ministries such as the Ministry of Health, whose services are .... As explained above, the sign-word search system enables the dictionary user to identify the ...
Caselli, Naomi K.; Sehyr, Zed Sevcikova; Cohen-Goldberg, Ariel M.; Emmorey, Karen
ASL-LEX is a lexical database that catalogues information about nearly 1,000 signs in American Sign Language (ASL). It includes the following information: subjective frequency ratings from 25–31 deaf signers, iconicity ratings from 21–37 hearing non-signers, videoclip duration, sign length (onset and offset), grammatical class, and whether the sign is initialized, a fingerspelled loan sign, or a compound. Information about English translations is available for a subset of signs (e.g., alternate translations, translation consistency). In addition, phonological properties (sign type, selected fingers, flexion, major and minor location, and movement) were coded and used to generate sub-lexical frequency and neighborhood density estimates. ASL-LEX is intended for use by researchers, educators, and students who are interested in the properties of the ASL lexicon. An interactive website where the database can be browsed and downloaded is available at http://asl-lex.org.
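A hedged sketch of how a neighborhood density estimate might be derived from phonological codes in records of this kind. The feature set, the toy entries, and the neighbor definition (signs differing in at most one feature) are simplifications for illustration, not ASL-LEX's actual coding scheme.

```python
# Illustrative neighborhood-density computation over made-up records
# with four phonological features per sign.

FEATURES = ("sign_type", "selected_fingers", "location", "movement")

lexicon = {
    "MOTHER": {"sign_type": "one-handed", "selected_fingers": "all",
               "location": "chin", "movement": "contact"},
    "FATHER": {"sign_type": "one-handed", "selected_fingers": "all",
               "location": "forehead", "movement": "contact"},
    "CAT":    {"sign_type": "one-handed", "selected_fingers": "thumb-index",
               "location": "cheek", "movement": "closing"},
}

def neighborhood_density(target, lexicon):
    """Count other signs differing from `target` in at most one feature."""
    t = lexicon[target]
    count = 0
    for name, rec in lexicon.items():
        if name == target:
            continue
        mismatches = sum(t[f] != rec[f] for f in FEATURES)
        if mismatches <= 1:
            count += 1
    return count

print(neighborhood_density("MOTHER", lexicon))  # 1: FATHER differs only in location
```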
Chun, Dorothy; Smith, Bryan; Kern, Richard
This article offers a capacious view of technology to suggest broad principles relating technology and language use, language teaching, and language learning. The first part of the article considers some of the ways that technological media influence contexts and forms of expression and communication. In the second part, a set of heuristic…
Singha, Joyeeta; Das, Karen
Sign language recognition has emerged as one of the important areas of research in computer vision. The difficulty faced by researchers is that instances of signs vary in both motion and appearance. Thus, in this paper a novel approach for recognizing various alphabets of Indian Sign Language is proposed in which continuous video sequences of the signs are considered. The proposed system comprises three stages: preprocessing, feature extraction, and classification. The preprocessing stage includes skin filtering and histogram matching. Eigenvalues and eigenvectors were used for feature extraction, and finally an eigenvalue-weighted Euclidean distance is used to recognize the sign. The system deals with bare hands, thus allowing the user to interact with it in a natural way. We considered 24 different alphabets in the video sequences and attained a success rate of 96.25%.
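The eigen-based stages can be sketched as follows, assuming a PCA-style decomposition of training features and an eigenvalue-weighted distance to class templates. The toy data and dimensions stand in for the paper's skin-filtered hand features; this is not the authors' implementation.

```python
# Hedged sketch: project feature vectors onto the eigenvectors of the
# training covariance, then classify with an eigenvalue-weighted
# Euclidean distance to per-class template projections.
import numpy as np

rng = np.random.default_rng(0)
# Toy training set: 2 alphabet classes, 10 samples each, 4-D features.
train_A = rng.normal(loc=0.0, scale=0.3, size=(10, 4))
train_B = rng.normal(loc=2.0, scale=0.3, size=(10, 4))
X = np.vstack([train_A, train_B])

# Eigen-decomposition of the training covariance (PCA).
mean = X.mean(axis=0)
cov = np.cov((X - mean).T)
eigvals, eigvecs = np.linalg.eigh(cov)              # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # largest first

def project(v):
    return eigvecs.T @ (v - mean)

def weighted_dist(p, q):
    # Eigenvalue-weighted Euclidean distance between projections.
    return np.sqrt(np.sum(eigvals * (p - q) ** 2))

templates = {"A": project(train_A.mean(axis=0)),
             "B": project(train_B.mean(axis=0))}

test_sign = rng.normal(loc=2.0, scale=0.3, size=4)  # drawn near class B
p = project(test_sign)
label = min(templates, key=lambda k: weighted_dist(p, templates[k]))
print(label)  # B
```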
Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David
Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity
A place name sign is a linguistic-cultural marker that includes both memory and landscape. The author regards toponymic signs in Estonian Sign Language as representations of images held by the Estonian Deaf community: they reflect the geographical place, the period, the relationships of the Deaf community with the hearing community, and the common and distinguishing features of the two cultures perceived by the community's members. Name signs represent an element of signlore, which includes various types of creative linguistic play. There are stories hidden behind the place name signs that reveal the etymological origin of place name signs and reflect the community's memory. The purpose of this article is twofold. Firstly, it aims to introduce Estonian place name signs as Deaf signlore forms, analyse their structure, and specify the main formation methods. Secondly, it interprets place-denoting signs in the light of understanding the foundations of Estonian Sign Language, Estonian Deaf education and education history, the traditions of local Deaf communities, and also the cultural and local traditions of the dominant hearing communities. Both perspectives - linguistic and folkloristic - are represented in the current article.
The article discusses word order, the syntactic arrangement of words in a sentence, clause, or phrase, as one of the most crucial aspects of the grammar of any spoken language. It aims to investigate the order of the primary constituents, which can be subject, object, or verb, of a simple
A sign in sign language, equivalent to a word, phrase, or sentence in an oral language, can be divided into linguistic units of lower levels: shape of the hand, place of articulation, type of movement, and orientation of the palm. The first description of these units that is present and applicable today in Bosnia and Herzegovina (B&H) was given by Zimmerman in 1986, who identified 27 shapes of the hand, while the other unit types were not systematically developed or described. The goal of this study was to determine the possible existence of other hand shapes present in sign language in B&H. Through content analysis of 425 signs in the sign language of B&H, we confirmed their existence and also discovered and presented 14 new shapes of the hand. This confirms the need for detailed research, standardization, and publication of the sign language of B&H, which would provide adequate conditions for its study and application, both for the deaf and for all others who come into direct contact with them.
Rudner, Mary; Andin, Josefine; Rönnberg, Jerker; Heimann, Mikael; Hermansson, Anders; Nelson, Keith; Tjus, Tomas
The literacy skills of deaf children generally lag behind those of their hearing peers. The mechanisms of reading in deaf individuals are only just beginning to be unraveled but it seems that native language skills play an important role. In this study 12 deaf pupils (six in grades 1-2 and six in grades 4-6) at a Swedish state primary school for…
Baus, Cristina; Costa, Albert
This study investigates the temporal dynamics of sign production and how particular aspects of the signed modality influence the early stages of lexical access. To that end, we explored the electrophysiological correlates associated with sign frequency and iconicity in a picture signing task in a group of bimodal bilinguals. Moreover, a subset of the same participants was tested in the same task but naming the pictures instead. Our results revealed that both frequency and iconicity influenced lexical access in sign production. At the ERP level, iconicity effects originated very early in the course of signing (while absent in the spoken modality), suggesting a stronger activation of the semantic properties of iconic signs. Moreover, frequency effects were modulated by iconicity, suggesting that lexical access in signed language is determined by the iconic properties of the signs. These results support the idea that lexical access is sensitive to the same phenomena in word and sign production, but its time course is modulated by particular aspects of the modality in which a lexical item will finally be articulated. Copyright © 2015 Elsevier B.V. All rights reserved.
Barberà, Gemma; Zwets, Martine
In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…
Parton, Becky Sue
Foreign sign language instruction is an important, but overlooked area of study. Thus the purpose of this paper was two-fold. First, the researcher sought to determine the level of knowledge and interest in foreign sign language among Deaf teenagers along with their learning preferences. Results from a survey indicated that over a third of the…
Holmer, Emil; Heimann, Mikael; Rudner, Mary
Imitation and language processing are closely connected. According to the Ease of Language Understanding (ELU) model (Rönnberg et al., 2013) pre-existing mental representation of lexical items facilitates language understanding. Thus, imitation of manual gestures is likely to be enhanced by experience of sign language. We tested this by eliciting imitation of manual gestures from deaf and hard-of-hearing (DHH) signing and hearing non-signing children at a similar level of language and cognitive development. We predicted that the DHH signing children would be better at imitating gestures lexicalized in their own sign language (Swedish Sign Language, SSL) than unfamiliar British Sign Language (BSL) signs, and that both groups would be better at imitating lexical signs (SSL and BSL) than non-signs. We also predicted that the hearing non-signing children would perform worse than DHH signing children with all types of gestures the first time (T1) we elicited imitation, but that the performance gap between groups would be reduced when imitation was elicited a second time (T2). Finally, we predicted that imitation performance on both occasions would be associated with linguistic skills, especially in the manual modality. A split-plot repeated measures ANOVA demonstrated that DHH signers imitated manual gestures with greater precision than non-signing children when imitation was elicited the second but not the first time. Manual gestures were easier to imitate for both groups when they were lexicalized than when they were not; but there was no difference in performance between familiar and unfamiliar gestures. For both groups, language skills at T1 predicted imitation at T2. Specifically, for DHH children, word reading skills, comprehension and phonological awareness of sign language predicted imitation at T2. For the hearing participants, language comprehension predicted imitation at T2, even after the effects of working memory capacity and motor skills were taken into account.
Miller, K R
Historically, the provision of sign language interpreters to deaf suspects, defendants, and offenders has been a problematic issue in the criminal justice system. Inconsistency in the provision of interpreter services results largely from the ignorance of criminal justice professionals regarding deaf people's communication needs and accommodation options. Through analysis of 22 post-Americans with Disabilities Act cases and a survey of 46 professional sign language interpreters working in criminal justice settings, the present study considered access issues concerning sign language interpreters in law enforcement, courtrooms, and correctional settings. Recommendations to increase the accessibility of interpreting services include providing ongoing awareness training to criminal justice personnel, developing training programs for deaf legal advocates, and continuing access studies.
Lillo-Martin, Diane C; Gajewski, Jon
Linguistic research has identified abstract properties that seem to be shared by all languages-such properties may be considered defining characteristics. In recent decades, the recognition that human language is found not only in the spoken modality but also in the form of sign languages has led to a reconsideration of some of these potential linguistic universals. In large part, the linguistic analysis of sign languages has led to the conclusion that universal characteristics of language can be stated at an abstract enough level to include languages in both spoken and signed modalities. For example, languages in both modalities display hierarchical structure at sub-lexical and phrasal level, and recursive rule application. However, this does not mean that modality-based differences between signed and spoken languages are trivial. In this article, we consider several candidate domains for modality effects, in light of the overarching question: are signed and spoken languages subject to the same abstract grammatical constraints, or is a substantially different conception of grammar needed for the sign language case? We look at differences between language types based on the use of space, iconicity, and the possibility for simultaneity in linguistic expression. The inclusion of sign languages does support some broadening of the conception of human language-in ways that are applicable for spoken languages as well. Still, the overall conclusion is that one grammar applies for human language, no matter the modality of expression. WIREs Cogn Sci 2014, 5:387-401. doi: 10.1002/wcs.1297 This article is categorized under: Linguistics > Linguistic Theory. © 2014 The Authors. WIREs Cognitive Science published by John Wiley & Sons, Ltd.
Marshall, Chloë R; Morgan, Gary
There has long been interest in why languages are shaped the way they are, and in the relationship between sign language and gesture. In sign languages, entity classifiers are handshapes that encode how objects move, how they are located relative to one another, and how multiple objects of the same type are distributed in space. Previous studies have shown that hearing adults who are asked to use only manual gestures to describe how objects move in space will use gestures that bear some similarities to classifiers. We investigated how accurately hearing adults, who had been learning British Sign Language (BSL) for 1-3 years, produce and comprehend classifiers in (static) locative and distributive constructions. In a production task, learners of BSL knew that they could use their hands to represent objects, but they had difficulty choosing the same, conventionalized, handshapes as native signers. They were, however, highly accurate at encoding location and orientation information. Learners therefore show the same pattern found in sign-naïve gesturers. In contrast, handshape, orientation, and location were comprehended with equal (high) accuracy, and testing a group of sign-naïve adults showed that they too were able to understand classifiers with higher than chance accuracy. We conclude that adult learners of BSL bring their visuo-spatial knowledge and gestural abilities to the tasks of understanding and producing constructions that contain entity classifiers. We speculate that investigating the time course of adult sign language acquisition might shed light on how gesture became (and, indeed, becomes) conventionalized during the genesis of sign languages. Copyright © 2014 Cognitive Science Society, Inc.
Jones, T; Cumberbatch, K
The introduction of the landmark mandatory teaching of sign language to undergraduate dental students at the University of the West Indies (UWI), Mona Campus in Kingston, Jamaica, to bridge the communication gap between dentists and their patients is reviewed. A review of over 90 Doctor of Dental Surgery and Doctor of Dental Medicine curricula in North America, the United Kingdom, parts of Europe and Australia showed no inclusion of sign language in those curricula as a mandatory component. In Jamaica, the government's training school for dental auxiliaries served as the forerunner to the UWI's introduction of formal training of sign language in 2012. Outside of the UWI, a couple of dental schools have sign language courses, but none have a mandatory programme as the one at the UWI. Dentists the world over have had to rely on interpreters to sign with their deaf patients. The deaf in Jamaica have not appreciated the fact that dentists cannot sign and they have felt insulted and only go to the dentist in emergency situations. The mandatory inclusion of sign language in the Undergraduate Dental Programme curriculum at The University of the West Indies, Mona Campus, sought to establish a direct communication channel to formally bridge this gap. The programme of two sign language courses and a direct clinical competency requirement was developed during the second year of the first cohort of the newly introduced undergraduate dental programme through a collaborating partnership between two faculties on the Mona Campus. The programme was introduced in 2012 in the third year of the 5-year undergraduate dental programme. To date, two cohorts have completed the programme, and the preliminary findings from an ongoing clinical study have shown a positive impact on dental care access and dental treatment for deaf patients at the UWI Mona Dental Polyclinic. The development of a direct communication channel between dental students and the deaf that has led to increased dental
Lin, Li Li
Current technology provides new opportunities to increase the effectiveness of language learning and teaching. Incorporating well-organized and effective technology into second language learning and teaching for improving students' language proficiency has been refined by researchers and educators for many decades. Based on the rapidly changing…
Herman, Ros; Rowley, Katherine; Mason, Kathryn; Morgan, Gary
This study details the first ever investigation of narrative skills in a group of 17 deaf signing children who have been diagnosed with disorders in their British Sign Language development compared with a control group of 17 deaf child signers matched for age, gender, education, quantity, and quality of language exposure and non-verbal intelligence. Children were asked to generate a narrative based on events in a language free video. Narratives were analysed for global structure, information content and local level grammatical devices, especially verb morphology. The language-impaired group produced shorter, less structured and grammatically simpler narratives than controls, with verb morphology particularly impaired. Despite major differences in how sign and spoken languages are articulated, narrative is shown to be a reliable marker of language impairment across the modality boundaries. © 2014 Royal College of Speech and Language Therapists.
Williams, Joshua T; Newman, Sharlene D
The roles of visual sonority and handshape markedness in sign language acquisition and production were investigated. In Experiment 1, learners were taught sign-nonobject correspondences that varied in sign movement sonority and handshape markedness. Results from a sign-picture matching task revealed that high sonority signs were more accurately matched, especially when the sign contained a marked handshape. In Experiment 2, learners produced these familiar signs in addition to novel signs, which differed based on sonority and markedness. Results from a key-release reaction time reproduction task showed that learners tended to produce high sonority signs much more quickly than low sonority signs, especially when the sign contained an unmarked handshape. This effect was only present in familiar signs. Sign production accuracy rates revealed that high sonority signs were more accurate than low sonority signs. Similarly, signs with unmarked handshapes were produced more accurately than those with marked handshapes. Together, results from Experiments 1 and 2 suggested that signs that contain high sonority movements are more easily processed, both perceptually and productively, and handshape markedness plays a differential role in perception and production. © The Author 2015. Published by Oxford University Press. All rights reserved.
Morgan, Gary; Herman, Rosalind; Barriere, Isabelle; Woll, Bencie
In the course of language development children must solve arbitrary form-to-meaning mappings, in which semantic components are encoded onto linguistic labels. Because sign languages describe motion and location of entities through iconic movements and placement of the hands in space, child signers may find spatial semantics-to-language mapping…
Sherman, Judy; Torres-Crespo, Marisel N.
Capitalizing on preschoolers' inherent enthusiasm and capacity for learning, the authors developed and implemented a dual-language program to enable young children to experience diversity and multiculturalism by learning two new languages: Spanish and American Sign Language. Details of the curriculum, findings, and strategies are shared.
Quinto-Pozos, David; Singleton, Jenny L.; Hauser, Peter C.
This article describes the case of a deaf native signer of American Sign Language (ASL) with a specific language impairment (SLI). School records documented normal cognitive development but atypical language development. Data include school records; interviews with the child, his mother, and school professionals; ASL and English evaluations; and a…
In recent years, there has been a growing debate in the United States, Europe, and Australia about the nature of the Deaf community as a cultural community, and the recognition of signed languages as “real” or “legitimate” languages comparable in all meaningful ways to spoken languages. An important element of this ...
Many school systems mandate sight word mastery by their students, and this can be challenging for certain student populations. With a Professional Development School, college interns conducted an inquiry project with struggling first graders to learn required sight vocabulary. The inquiry project explored the use of American Sign Language to…
Knapp, Heather Patterson; Corina, David P
Language is proposed to have developed atop the human analog of the macaque mirror neuron system for action perception and production [Arbib M.A. 2005. From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics (with commentaries and author's response). Behavioral and Brain Sciences, 28, 105-167; Arbib M.A. (2008). From grasp to language: Embodied concepts and the challenge of abstraction. Journal de Physiologie Paris 102, 4-20]. Signed languages of the deaf are fully-expressive, natural human languages that are perceived visually and produced manually. We suggest that if a unitary mirror neuron system mediates the observation and production of both language and non-linguistic action, three predictions can be made: (1) damage to the human mirror neuron system should non-selectively disrupt both sign language and non-linguistic action processing; (2) within the domain of sign language, a given mirror neuron locus should mediate both perception and production; and (3) the action-based tuning curves of individual mirror neurons should support the highly circumscribed set of motions that form the "vocabulary of action" for signed languages. In this review we evaluate data from the sign language and mirror neuron literatures and find that these predictions are only partially upheld. © 2009 Elsevier Inc. All rights reserved.
Van Staden, Annalene
This article argues for the importance of allowing deaf children to acquire sign language from an early age. It demonstrates firstly that the critical/sensitive period hypothesis for language acquisition can be applied to specific language aspects of spoken language as well as sign languages (i.e. phonology, grammatical processing and syntax). This makes early diagnosis and early intervention of crucial importance. Moreover, research findings presented in this article demonstrate the advantage that sign language offers in the early years of a deaf child’s life by comparing the language development milestones of deaf learners exposed to sign language from birth to those of late-signers, orally trained deaf learners and hearing learners exposed to spoken language. The controversy over the best medium of instruction for deaf learners is briefly discussed, with emphasis placed on the possible value of bilingual-bicultural programmes to facilitate the development of deaf learners’ literacy skills. Finally, this paper concludes with a discussion of the implications/recommendations of sign language teaching and Deaf education in South Africa.
Early diagnosis and intervention are now recognized as undeniable rights of deaf and hard-of-hearing children and their families. The deaf child’s family must have the opportunity to socialize with deaf children and deaf adults. The deaf child’s family must also have access to all the information on the general development of their child, and to special information on hearing impairment, communication options and the linguistic development of the deaf child. The critical period hypothesis for language acquisition proposes that the outcome of language acquisition is not uniform over the lifespan but rather is best during early childhood. Individuals who learned sign language from birth performed better on linguistic and memory tasks than individuals who did not start learning sign language until after puberty. The old prejudice that the deaf child must learn the spoken language at a very young age, and that sign language can wait because it can be easily learned by any person at any age, cannot be maintained anymore. The cultural approach to deafness emphasizes three necessary components in the development of a deaf child: (1) stimulating early communication using natural sign language within the family and interacting with the Deaf community; (2) bilingual/bicultural education; and (3) ensuring deaf persons’ right to enjoy the services of high-quality interpreters throughout their education, from kindergarten to university. This new view of the phenomenology of deafness means that the environment needs to be changed in order to meet the deaf person’s needs, not the contrary.
This dissertation investigates the expression of spatial relationships in German Sign Language (Deutsche Gebärdensprache, DGS). The analysis focuses on linguistic expression in the spatial domain in two types of discourse: static scene description (location) and event narratives (location and
This article explores three models of sustainability (environmental, economic, and social) and identifies characteristics of a sustainable community necessary to sustain the Deaf community as a whole. It is argued that sign language legislation is a valuable tool for achieving sustainability for the generations to come.
Lichtenauer, J.F.; Hendriks, E.A; Reinders, M.J.T.
To recognize speech, handwriting, or sign language, many hybrid approaches have been proposed that combine Dynamic Time Warping (DTW) or Hidden Markov Models (HMMs) with discriminative classifiers. However, all methods rely directly on the likelihood models of DTW/HMM. We hypothesize that time…
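The hybrid approaches mentioned in this abstract build on Dynamic Time Warping as a template-matching backbone. As orientation only, here is a minimal textbook DTW distance between two 1-D feature sequences (a generic sketch, not the authors' system):

```python
def dtw_distance(a, b):
    """Dynamic Time Warping: cost of the best monotonic alignment
    between sequences a and b, using |a_i - b_j| as the local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = minimal cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # step in a only
                                 D[i][j - 1],      # step in b only
                                 D[i - 1][j - 1])  # step in both
    return D[n][m]
```

A hybrid pipeline of the kind the abstract describes would typically feed such DTW distances (or HMM likelihoods) into a discriminative classifier rather than thresholding them directly.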
…technical words would lead to a wider utility of the system. Our contribution is the quantitative treatment of the problem of recognition of static gestures of Indian Sign Language. We propose a vote-based feature combination approach for recognition. In the fingerspelling category of our dataset, overall 16 distinct alphabets (A, ...
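The vote-based combination this abstract proposes can be sketched generically: each feature channel's classifier predicts a label, and the majority wins. The channel names below are hypothetical illustrations, not the paper's actual features:

```python
from collections import Counter

def vote_combine(channel_predictions):
    """Majority vote over the labels predicted by per-feature classifiers.
    Ties resolve in favor of the label encountered first."""
    return Counter(channel_predictions).most_common(1)[0][0]

# Hypothetical per-channel predictions for one static-gesture image:
predictions = {"contour": "A", "hu_moments": "A", "hog": "B"}
label = vote_combine(predictions.values())  # two of three channels agree
```

In practice each channel would be a classifier trained on one feature type; voting lets a noisy channel be outvoted by the others.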
This paper investigates how video playback can be introduced in DAISY by building on the DAISY 3 standard and on existing open-source software. The paper presents the process of creating and authoring a video-based sign language DAISY publication...
Kouremenos, Dimitris; Fotinea, Stavroula-Evita; Efthimiou, Eleni; Ntalianis, Klimis
In this article, a prototype Greek text to Greek Sign Language (GSL) conversion system is presented. The system is integrated into an educational platform that addresses the needs of teaching GSL grammar and was developed within the SYNENNOESE project (Efthimiou "et al." 2004a. Developing an e-learning platform for the Greek sign…
Stoll, Chloé; Palluel-Germain, Richard; Caldara, Roberto; Lao, Junpeng; Dye, Matthew W. G.; Aptel, Florent; Pascalis, Olivier
Previous research has suggested that early deaf signers differ in face processing. Which aspects of face processing are changed and the role that sign language may have played in that change are however unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing…
This article addresses the debate about the status of American Sign Language (ASL) as an example of ideological beliefs that impact linguistic judgments and policies. It also discusses the major challenges to the status of ASL with respect to formal legislative recognition, its utility as a medium of instruction, and its status as a legitimate…
Aura, Lillie Josephine; Venville, Grady; Marais, Ida
This paper presents results of an investigation into the relationship between Kenyan Sign Language (KSL) and English literacy skills. It is derived from research undertaken towards an MEd degree awarded by The University of Western Australia in 2011. The study employed a correlational survey strategy. Sixty upper primary deaf students from four…
Tomita, Nozomi; Kozak, Viola
This paper focuses on two selected phonological patterns that appear unique to Saudi Arabian Sign Language (SASL). For both sections of this paper, the overall methodology is the same as that discussed in Stephen and Mathur (this volume), with some additional modifications tailored to the specific studies discussed here, which will be expanded…
Past studies have identified the function of SELF as a canonical reflexive pronoun in American Sign Language (ASL). This study examines the use of SELF with fifteen hours of naturalistic ASL discourse framed by the cognitive-functionalist approach. The analysis reveals that the category of SELF is expressed in three phonological forms and exhibits…
American Sign Language (ASL) began at Seminole Middle School in August 2007 as part of the program, D.E.C.A.L (Division of Communication and Law), the brainchild of principal, Dr. Kris Black. Her goal was to offer a program that would entice advanced middle school students from around Broward County to Seminole and the hook she used to entice them…
Rogers, K. Larry
The American Sign Language construction commonly known as "role-shift" (referred to afterward as Constructed Action) superficially resembles mimic forms, however unlike mime, Constructed Action is a type of depicting construction in ASL discourse (Roy 1989). The signer may use eye gaze, head shift, facial expression, stylistic variation,…
Ortega, Gerardo; Morgan, Gary
The present study implemented a sign-repetition task at two points in time to hearing adult learners of British Sign Language and explored how each phonological parameter, sign complexity, and iconicity affected sign production over an 11-week (22-hour) instructional period. The results show that training improves articulation accuracy and that…
Perniss, Pamela; Lu, Jenny C; Morgan, Gary; Vigliocco, Gabriella
Most research on the mechanisms underlying referential mapping has assumed that learning occurs in ostensive contexts, where label and referent co-occur, and that form and meaning are linked by arbitrary convention alone. In the present study, we focus on iconicity in language, that is, resemblance relationships between form and meaning, and on non-ostensive contexts, where label and referent do not co-occur. We approach the question of language learning from the perspective of the language input. Specifically, we look at child-directed language (CDL) in British Sign Language (BSL), a language rich in iconicity due to the affordances of the visual modality. We ask whether child-directed signing exploits iconicity in the language by highlighting the similarity mapping between form and referent. We find that CDL modifications occur more often with iconic signs than with non-iconic signs. Crucially, for iconic signs, modifications are more frequent in non-ostensive contexts than in ostensive contexts. Furthermore, we find that pointing dominates in ostensive contexts, and suggest that caregivers adjust the semiotic resources recruited in CDL to context. These findings offer first evidence for a role of iconicity in the language input and suggest that iconicity may be involved in referential mapping and language learning, particularly in non-ostensive contexts. © 2017 John Wiley & Sons Ltd.
Meara, Rhian; Cameron, Audrey; Quinn, Gary; O'Neill, Rachel
The BSL Glossary Project, run by the Scottish Sensory Centre at the University of Edinburgh focuses on developing scientific terminology in British Sign Language for use in the primary, secondary and tertiary education of deaf and hard of hearing students within the UK. Thus far, the project has developed 850 new signs and definitions covering Chemistry, Physics, Biology, Astronomy and Mathematics. The project has also translated examinations into BSL for students across Scotland. The current phase of the project has focused on developing terminology for Geography and Geology subjects. More than 189 new signs have been developed in these subjects including weather, rivers, maps, natural hazards and Geographical Information Systems. The signs were developed by a focus group with expertise in Geography and Geology, Chemistry, Ecology, BSL Linguistics and Deaf Education all of whom are deaf fluent BSL users.
Bosworth, Rain G.; Emmorey, Karen
Iconicity is a property that pervades the lexicon of many sign languages, including American Sign Language (ASL). Iconic signs exhibit a motivated, nonarbitrary mapping between the form of the sign and its meaning. We investigated whether iconicity enhances semantic priming effects for ASL and whether iconic signs are recognized more quickly than…
Davidson, Kathryn; Lillo-Martin, Diane; Chen Pichler, Deborah
Bilingualism is common throughout the world, and bilingual children regularly develop into fluently bilingual adults. In contrast, children with cochlear implants (CIs) are frequently encouraged to focus on a spoken language to the exclusion of sign language. Here, we investigate the spoken English language skills of 5 children with CIs who also have deaf signing parents, and so receive exposure to a full natural sign language (American Sign Language, ASL) from birth, in addition to spoken En...
Senghas, A; Coppola, M
It has long been postulated that language is not purely learned, but arises from an interaction between environmental exposure and innate abilities. The innate component becomes more evident in rare situations in which the environment is markedly impoverished. The present study investigated the language production of a generation of deaf Nicaraguans who had not been exposed to a developed language. We examined the changing use of early linguistic structures (specifically, spatial modulations) in a sign language that has emerged since the Nicaraguan group first came together: In under two decades, sequential cohorts of learners systematized the grammar of this new sign language. We examined whether the systematicity being added to the language stems from children or adults: our results indicate that such changes originate in children aged 10 and younger. Thus, sequential cohorts of interacting young children collectively possess the capacity not only to learn, but also to create, language.
Reading, Suzanne; Padgett, Robert J
This article describes a connection between service learning and American Sign Language (ASL) instruction. The Deaf community served as communication partners for university students, enabling them to use language skills in a natural setting. The rationale and implementation of pairing ASL with service learning are presented. A review of one study provides information about student perceptions of service learning, and a second study presents evidence about the development of ASL skills through a service learning experience. Service learning proved to be a valuable teaching method for ASL instruction, facilitating an increase in cultural awareness and ASL skills. Students' anecdotal evidence about service learning experiences indicated that they gained insights beyond just the improvement in language skills. The connection between service learning and ASL instruction is advantageous because students gained cultural understanding as well as language skills. This course design could be used at other institutions where a Deaf community is accessible.
MacSweeney, Mairéad; Woll, Bencie; Campbell, Ruth; Calvert, Gemma A; McGuire, Philip K; David, Anthony S; Simmons, Andrew; Brammer, Michael J
In all signed languages used by deaf people, signs are executed in "sign space" in front of the body. Some signed sentences use this space to map detailed "real-world" spatial relationships directly. Such sentences can be considered to exploit sign space "topographically." Using functional magnetic resonance imaging, we explored the extent to which increasing the topographic processing demands of signed sentences was reflected in the differential recruitment of brain regions in deaf and hearing native signers of the British Sign Language. When BSL signers performed a sentence anomaly judgement task, the occipito-temporal junction was activated bilaterally to a greater extent for topographic than nontopographic processing. The differential role of movement in the processing of the two sentence types may account for this finding. In addition, enhanced activation was observed in the left inferior and superior parietal lobules during processing of topographic BSL sentences. We argue that the left parietal lobe is specifically involved in processing the precise configuration and location of hands in space to represent objects, agents, and actions. Importantly, no differences in these regions were observed when hearing people heard and saw English translations of these sentences. Despite the high degree of similarity in the neural systems underlying signed and spoken languages, exploring the linguistic features which are unique to each of these broadens our understanding of the systems involved in language comprehension.
Current conceptions of human language include a gestural component in the communicative event. However, determining how the linguistic and gestural signals are distinguished, how each is structured, and how they interact still poses a challenge for the construction of a comprehensive model of language. This study attempts to advance our understanding of these issues with evidence from sign language. The study adopts McNeill's criteria for distinguishing gestures from the linguistically organized signal, and provides a brief description of the linguistic organization of sign languages. Focusing on the subcategory of iconic gestures, the paper shows that signers create iconic gestures with the mouth, an articulator that acts symbiotically with the hands to complement the linguistic description of objects and events. A new distinction between the mimetic replica and the iconic symbol accounts for the nature and distribution of iconic mouth gestures and distinguishes them from mimetic uses of the mouth. Symbiotic symbolization by hand and mouth is a salient feature of human language, regardless of whether the primary linguistic modality is oral or manual. Speakers gesture with their hands, and signers gesture with their mouths.
This research project aims to ease the process of Roadway Sign asset management. The project utilized handheld computer and global positioning system (GPS) technology to capture sign location data along with a timestamp. This data collection effort w...
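The capture step described above pairs each GPS fix with a timestamp. A minimal sketch of such a record follows; the field names are illustrative assumptions, not the project's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SignAssetRecord:
    sign_id: str      # inventory identifier (hypothetical naming)
    latitude: float   # decimal degrees from the GPS receiver
    longitude: float  # decimal degrees from the GPS receiver
    captured_at: str  # ISO 8601 timestamp taken at capture time

def capture_record(sign_id, latitude, longitude, when=None):
    """Bundle one roadway-sign GPS fix with its capture timestamp."""
    when = when or datetime.now(timezone.utc)
    return SignAssetRecord(sign_id, latitude, longitude, when.isoformat())
```

A handheld workflow would call `capture_record` once per sign in the field and export the accumulated records for the asset database.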
Ирина Юрьевна Мишота
The article is devoted to the use of information technologies in the process of teaching foreign languages. Based on a retrospective review of the application of information technologies, it considers the main directions of computer use in the teaching of foreign languages.
Perniss, P.M.; Zwitserlood, I.E.P.; Özyürek, A.
The spatial affordances of the visual modality give rise to a high degree of similarity between sign languages in the spatial domain. This stands in contrast to the vast structural and semantic diversity in linguistic encoding of space found in spoken languages. However, the possibility and nature…
L. Leeson; Beppie van den Bogaerde; Tobias Haug; C. Rathmann
This resource establishes European standards for sign languages for professional purposes in line with the Common European Framework of Reference for Languages (CEFR) and provides an overview of assessment descriptors and approaches. Drawing on preliminary work undertaken in adapting the CEFR to…
Batterbury, Sarah C. E.
Sign Language Peoples (SLPs) across the world have developed their own languages and visuo-gestural-tactile cultures embodying their collective sense of Deafhood (Ladd 2003). Despite this, most nation-states treat their respective SLPs as disabled individuals, favoring disability benefits, cochlear implants, and mainstream education over language…
Hilger, Allison I.; Loucks, Torrey M. J.; Quinto-Pozos, David; Dye, Matthew W. G.
A study was conducted to examine production variability in American Sign Language (ASL) in order to gain insight into the development of motor control in a language produced in another modality. Production variability was characterized through the spatiotemporal index (STI), which represents production stability in whole utterances and is a…
…bilingualism in the natural sign language and the dominant spoken language of the society. Students would study not only the common curriculum shared with their hearing peers, but would also study the history of the Deaf culture and Deaf communities in other parts of the world. Thus, the goal of such a programme would ...
Yang, Su; Zhu, Qing
The goal of sign language recognition (SLR) is to translate sign language into text and so provide a convenient communication tool between deaf and hearing people. In this paper, we formulate an appropriate model based on a convolutional neural network (CNN) combined with a Long Short-Term Memory (LSTM) network in order to accomplish continuous recognition. With the strong representational ability of the CNN, information from frames captured from Chinese Sign Language (CSL) videos can be learned and transformed into vectors. Since a video can be regarded as an ordered sequence of frames, an LSTM model is connected to the fully-connected layer of the CNN. As a recurrent neural network (RNN), it is suited to sequence learning tasks, with the capability of recognizing patterns defined by temporal distance. Compared with a traditional RNN, LSTM performs better at storing and accessing information. We evaluate this method on our self-built dataset of 40 daily vocabulary items. The experimental results show that the CNN-LSTM recognition method can achieve a high recognition rate with small training sets, which will meet the needs of a real-time SLR system.
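The dataflow this abstract describes (per-frame features fed through an LSTM, with the final state classified over the sign vocabulary) can be illustrated with an untrained toy sketch. The patch-mean "feature extractor" below merely stands in for the paper's learned CNN, and all weights are random:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell with random (untrained) weights."""
    def __init__(self, d_in, d_hidden):
        self.W = rng.normal(0.0, 0.1, (4 * d_hidden, d_in + d_hidden))
        self.b = np.zeros(4 * d_hidden)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o, g = np.split(z, 4)            # input, forget, output, candidate
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # update cell memory
        h = sigmoid(o) * np.tanh(c)                   # expose hidden state
        return h, c

def frame_features(frame, d_out=8):
    """Stand-in for the CNN: crude patch-mean pooling of one frame.
    (The paper uses learned convolutional features instead.)"""
    flat = np.asarray(frame, dtype=float).ravel()
    usable = d_out * (flat.size // d_out)
    return flat[:usable].reshape(d_out, -1).mean(axis=1)

def recognize(video, n_classes, d_hidden=16):
    """Frames -> per-frame features -> LSTM -> softmax over sign classes."""
    cell = LSTMCell(8, d_hidden)
    W_out = rng.normal(0.0, 0.1, (n_classes, d_hidden))
    h = np.zeros(d_hidden)
    c = np.zeros(d_hidden)
    for frame in video:                  # the video as an ordered frame sequence
        h, c = cell.step(frame_features(frame), h, c)
    logits = W_out @ h
    p = np.exp(logits - logits.max())    # numerically stable softmax
    return p / p.sum()
```

With trained weights and a real convolutional front end, the same loop would yield continuous-recognition scores per sign class; this sketch only shows how the pieces connect.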
In standard logical systems, quantifiers and variables are essential to express complex relations among objects. Natural language has expressions that have an analogous function: some noun phrases play the role of quantifiers (e.g. every man), and some pronouns play the role of variables (e.g. him), as in Every man likes people who admire him. Since the 1980s, there has been a vibrant debate in linguistics about the way in which pronouns come to depend on their antecedents. According to one view, natural language is governed by a ‘dynamic’ logic which allows for dependencies that are far more flexible than those of standard (classical) logic. According to a competing view, the treatment of variables in classical logic does not have to be fundamentally revised to be applied to natural language. While the debate centers around the nature of the formal links that connect pronouns to their antecedents, these links are not overtly expressed in spoken language, and the debate has remained open. In sign language, by contrast, the connection between pronouns and their antecedents is often made explicit by pointing. We argue that data from French and American Sign Language provide crucial evidence for the dynamic approach over one of its main classical competitors; and we explore further sign language data that can help choose among competing dynamic analyses.
Beal-Alvarez, Jennifer S.; Figueroa, Daileen M.
Two key areas of language development include semantic and phonological knowledge. Semantic knowledge relates to word and concept knowledge. Phonological knowledge relates to how language parameters combine to create meaning. We investigated signing deaf adults' and children's semantic and phonological sign generation via one-minute tasks,…
Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan
The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.
Cardin, Velia; Orfanidou, Eleni; Kästner, Lena; Rönnberg, Jerker; Woll, Bencie; Capek, Cheryl M; Rudner, Mary
The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine if brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer RTs and stronger activations in an action observation network in all participants and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.
Mann, Wolfgang; Peña, Elizabeth D; Morgan, Gary
This research explored the use of dynamic assessment (DA) for language-learning abilities in signing deaf children from deaf and hearing families. Thirty-seven deaf children, aged 6 to 11 years, were identified as either stronger (n = 26) or weaker (n = 11) language learners according to teacher or speech-language pathologist report. All children received 2 scripted, mediated learning experience sessions targeting vocabulary knowledge—specifically, the use of semantic categories that were carried out in American Sign Language. Participant responses to learning were measured in terms of an index of child modifiability. This index was determined separately at the end of the 2 individual sessions. It combined ratings reflecting each child's learning abilities and responses to mediation, including social-emotional behavior, cognitive arousal, and cognitive elaboration. Group results showed that modifiability ratings were significantly better for stronger language learners than for weaker language learners. The strongest predictors of language ability were cognitive arousal and cognitive elaboration. Mediator ratings of child modifiability (i.e., combined score of social-emotional factors and cognitive factors) are highly sensitive to language-learning abilities in deaf children who use sign language as their primary mode of communication. This method can be used to design targeted interventions.
Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.
Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…
Napier, Jemina; Major, George; Ferrara, Lindsay; Johnston, Trevor
This paper reviews a sign language planning project conducted in Australia with deaf Auslan users. The Medical Signbank project utilised a cooperative language planning process to engage with the Deaf community and sign language interpreters to develop an online interactive resource of health-related signs, in order to address a gap in the health…
Schembri, Adam; Wigglesworth, Gillian; Johnston, Trevor; Leigh, Greg; Adam, Robert; Barker, Roz
This article describes the adaptation of the Test Battery for American Sign Language Morphology and Syntax for Australian Sign Language. Data collected from a group of native signers who were deaf (n=25) demonstrate the range of variability in key grammatical features of Australian Sign Language and raise methodological issues. (Contains…
Woodward, James C.
Recent research has shown that sign language varieties in India and Pakistan are related. This report examines the possible relationship of sign language varieties in India and Pakistan to those in Nepal by analyzing comparative lexical data from sign language varieties in the three countries. (10 references) (VWL)
Greller, W. (2010). Language Technologies for Lifelong Learning. In S. Trausan-Matu & P. Dessus (Eds.), Proceedings of the Natural Language Processing in Support of Learning: Metrics, Feedback and Connectivity. Second International Workshop - NLPSL 2010 (pp. 6-8). September 14, 2010, Bucharest,
Samir Abou El-Seoud
A handheld device, such as a cellular phone or a PDA, can be used in acquiring Sign Language (SL). The developed system uses graphic applications. The user uses the graphical system to view and to acquire knowledge about sign grammar and syntax based on the local vernacular particular to the country. This paper explores and exploits the possibility of developing a mobile system to help deaf and other people communicate and learn using handheld devices. The pedagogical assessment of the prototype application, which uses a recognition-based interface (e.g., images and videos), gave evidence that the mobile application is memorable and learnable. Additionally, considering primacy and recency effects in the interface design will improve memorability and learnability.
Stamp, Rose; Schembri, Adam; Fenlon, Jordan; Rentelis, Ramas; Woll, Bencie; Cormier, Kearsy
This paper presents results from a corpus-based study investigating lexical variation in BSL. An earlier study investigating variation in BSL numeral signs found that younger signers were using a decreasing variety of regionally distinct variants, suggesting that levelling may be taking place. Here, we report findings from a larger investigation looking at regional lexical variants for colours, countries, numbers and UK placenames elicited as part of the BSL Corpus Project. Age, school location and language background were significant predictors of lexical variation, with younger signers using a more levelled variety. This change appears to be happening faster in particular sub-groups of the deaf community (e.g., signers from hearing families). Also, we find that for the names of some UK cities, signers from outside the region use a different sign than those who live in the region. PMID:24759673
Canada (Québec); Lengua de Señas Argentina, Argentina; Lengua de Señas Española, Spain (except Catalonia); Língua Gestual Portuguesa, Portugal; Lingua... clause-initial, or it may appear in both positions. In example (11) from Lengua de Señas Española (Spain), the question particle is transcribed as SI/NO... In the example from Finnish Sign Language (12), the question particle is transcribed as PALM-UP (Zeshan 2004): (11) Lengua de Señas Española
Baker, Stephanie E.; Idsardi, William J.; Golinkoff, Roberta Michnick; Petitto, Laura-Ann
Despite the constantly varying stream of sensory information that surrounds us, we humans can discern the small building blocks of words that constitute language (phonetic forms) and perceive them categorically (categorical perception, CP). Decades of controversy have prevailed regarding what is at the heart of CP, with many arguing that it is due to domain-general perceptual processing and others that it is determined by the existence of domain-specific linguistic processing. What is most key: perceptual or linguistic patterns? Here, we study whether CP occurs with soundless handshapes that are nonetheless phonetic in American Sign Language (ASL), in signers and nonsigners. Using innovative methods and analyses of identification and, crucially, discrimination tasks, we found that both groups separated the soundless handshapes into two classes perceptually but that only the ASL signers exhibited linguistic CP. These findings suggest that CP of linguistic stimuli is based on linguistic categorization, rather than on purely perceptual categorization. PMID:16383176
This paper shows a method of teaching written language to deaf people using sign language as the language of instruction. Written texts in the target language are combined with sign language videos which provide the users with various modes of translation (words/phrases/sentences). As examples, two EU projects for English for the Deaf are presented which feature English texts and translations into the national sign languages of all the partner countries, plus signed grammar explanations and interactive exercises. Both courses are web-based; the programs may be accessed free of charge via the respective homepages (without any download or log-in).
Schneider, Erin; Kozak, L. Viola; Santiago, Roberto; Stephen, Anika
Technological and language innovation often flow in concert with one another. Casual observation by researchers has shown that electronic communication memes, in the form of abbreviations, have found their way into spoken English. This study focuses on the current use of electronic modes of communication, such as cell phones, smartphones, and e-mail, and…
An experiment was carried out to investigate the role of phonological short-term memory in vocabulary learning in Sign Language as a second language. The subjects, who had no experience of Sign Language learning, were required to encode 24 new Sign Language words by associating them with their meanings (presented as Japanese words). A 2 X 2 X 3 factorial design was used: the first variable was the presence or absence of a phonological concurrent task, the second was high or low imagery of the Sign Language words (fro...
Despite the current need for reliable and valid test instruments in different countries in order to monitor the sign language acquisition of deaf children, very few tests are commercially available that offer strong evidence for their psychometric properties. This mirrors the current state of affairs for many sign languages, where very little…
McKee, Michael M.; Paasche-Orlow, Michael; Winters, Paul C.; Fiscella, Kevin; Zazove, Philip; Sen, Ananda; Pearson, Thomas
Communication and language barriers isolate Deaf American Sign Language (ASL) users from mass media, healthcare messages, and health care communication, which when coupled with social marginalization, places them at a high risk for inadequate health literacy. Our objectives were to translate, adapt, and develop an accessible health literacy instrument in ASL and to assess the prevalence and correlates of inadequate health literacy among Deaf ASL users and hearing English speakers using a cross-sectional design. A total of 405 participants (166 Deaf and 239 hearing) were enrolled in the study. The Newest Vital Sign was adapted, translated, and developed into an ASL version of the NVS (ASL-NVS). Forty-eight percent of Deaf participants had inadequate health literacy, and Deaf individuals were 6.9 times more likely than hearing participants to have inadequate health literacy. The new ASL-NVS, available on a self-administered computer platform, demonstrated good correlation with reading literacy. The prevalence of Deaf ASL users with inadequate health literacy is substantial, warranting further interventions and research. PMID:26513036
McKee, Michael M; Paasche-Orlow, Michael K; Winters, Paul C; Fiscella, Kevin; Zazove, Philip; Sen, Ananda; Pearson, Thomas
Communication and language barriers isolate Deaf American Sign Language (ASL) users from mass media, health care messages, and health care communication, which, when coupled with social marginalization, places them at a high risk for inadequate health literacy. Our objectives were to translate, adapt, and develop an accessible health literacy instrument in ASL and to assess the prevalence and correlates of inadequate health literacy among Deaf ASL users and hearing English speakers using a cross-sectional design. A total of 405 participants (166 Deaf and 239 hearing) were enrolled in the study. The Newest Vital Sign was adapted, translated, and developed into an ASL version (ASL-NVS). We found that 48% of Deaf participants had inadequate health literacy, and Deaf individuals were 6.9 times more likely than hearing participants to have inadequate health literacy. The new ASL-NVS, available on a self-administered computer platform, demonstrated good correlation with reading literacy. The prevalence of Deaf ASL users with inadequate health literacy is substantial, warranting further interventions and research.
Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of the Arabic Sign Language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
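The abstract does not give implementation details, but the non-iterative training property it highlights can be illustrated with a minimal sketch: a polynomial feature expansion followed by a closed-form least-squares fit against one-hot class targets. All function names and the degree-2 expansion below are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def poly_expand(X):
    # Degree-2 expansion for simplicity: bias, linear, and squared terms.
    return np.hstack([np.ones((X.shape[0], 1)), X, X ** 2])

def fit_polynomial_classifier(X, y, n_classes, lam=1e-3):
    # One-shot ridge-regularized least-squares fit against one-hot targets:
    # no iterative training, and the solve cost grows only linearly with
    # the number of classes (one extra target column per class).
    P = poly_expand(X)
    T = np.eye(n_classes)[y]  # one-hot targets, shape (n_samples, n_classes)
    W = np.linalg.solve(P.T @ P + lam * np.eye(P.shape[1]), P.T @ T)
    return W

def predict(W, X):
    # Class with the largest polynomial discriminant score wins.
    return np.argmax(poly_expand(X) @ W, axis=1)
```

On well-separated synthetic feature vectors this recovers the labels almost perfectly, which is the scalability/training-cost trade-off the abstract argues for.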
McKee, Michael M; Winters, Paul C; Sen, Ananda; Zazove, Philip; Fiscella, Kevin
Deaf American Sign Language (ASL) users comprise a linguistic minority population with poor health care access due to communication barriers and low health literacy. Potentially, these health care barriers could increase Emergency Department (ED) use. To compare ED use between deaf and non-deaf patients. A retrospective cohort from medical records. The sample was derived from 400 randomly selected charts (200 deaf ASL users and 200 hearing English speakers) from an outpatient primary care health center with a high volume of deaf patients. Abstracted data included patient demographics, insurance, health behavior, and ED use in the past 36 months. Deaf patients were more likely to be never smokers and be insured through Medicaid. In an adjusted analysis, deaf individuals were significantly more likely to use the ED (odds ratio [OR], 1.97; 95% confidence interval [CI], 1.11-3.51) over the prior 36 months. Deaf American Sign Language users appear to be at greater odds for elevated ED utilization when compared to the general hearing population. Efforts to further understand the drivers for increased ED utilization among deaf ASL users are much needed. Copyright © 2015 Elsevier Inc. All rights reserved.
Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT) wants English language education to be more communicative. Japanese teachers of English (JTEs) need to adapt their instructional practices to meet this goal; however, they may not feel confident enough to teach speaking themselves. Using technology, JTEs have the ability…
Andrei, Stefan; Osborne, Lawrence; Smith, Zanthia
The current learning process of Deaf or Hard of Hearing (D/HH) students taking Science, Technology, Engineering, and Mathematics (STEM) courses needs, in general, a sign interpreter for the translation of English text into American Sign Language (ASL) signs. This method is at best impractical due to the lack of availability of a specialized sign…
Roush, Daniel R.
This article proposes an answer to the primary question of how the American Sign Language (ASL) community in the United States conceptualizes (im)politeness and its related notions. It begins with a review of evolving theoretical issues in research on (im)politeness and related methodological problems with studying (im)politeness in natural…
Rusher, Melissa Ausbrooks
This study provides a contemporary definition of American Sign Language/English bilingual education (AEBE) and outlines an essential theoretical framework. Included is a history and evolution of the methodology. The author also summarizes the general findings of twenty-six (26) empirical studies conducted in the United States that directly or…
Visser-Bochane, Margot I.; Gerrits, Ellen; van der Schans, Cees P.; Reijneveld, Sijmen A.; Luinge, Margreet R.
Background: Atypical speech and language development is one of the most common developmental difficulties in young children. However, which clinical signs characterize atypical speech-language development at what age is not clear. Aim: To achieve a national and valid consensus on clinical signs and red flags (i.e. most urgent clinical signs) for…
Thompson, Robin L.; Vinson, David P.; Vigliocco, Gabriella
Signed languages exploit iconicity (the transparent relationship between meaning and form) to a greater extent than spoken languages, where it is largely limited to onomatopoeia. In a picture-sign matching experiment measuring reaction times, the authors examined the potential advantage of iconicity both for 1st- and 2nd-language learners of…
Fox, Ashley Leann Chance
American Sign Language (ASL) is ranked fourth among heritage languages taken by students in the United States. The number of ASL classes offered at the K-12 and Institutions of Higher Education are on the rise, yet the number of certified ASL teachers remains stagnant. This study examines the reasons why American Sign Language teachers choose to…
Von Pein, Margreta; Altarriba, Jeanette
The present study was designed to investigate the ways in which notions of semantics and phonology are acquired by adult naive learners of American Sign Language (ASL) when they are first exposed to a set of simple signs. First, a set of ASL signs was tested for nontransparency and a set of signs was selected for subsequent use. Next, a set of…
Although in many respects sign languages have a similar structure to that of spoken languages, the different modalities in which the two types of languages are expressed cause differences in structure as well. One of the most striking differences between spoken and sign languages is the influence of the interface between grammar and PF on the surface form of utterances. Spoken language words and phrases are in general characterized by sequential strings of sounds, morphemes and words, while in sign languages many phonemes, morphemes, and even words are expressed simultaneously. A linguistic model should be able to account for the structures that occur in both spoken and sign languages. In this paper, I will discuss the morphological/morphosyntactic structure of signs in Nederlandse Gebarentaal (Sign Language of the Netherlands, henceforth NGT), with special focus on the components ‘place of articulation’ and ‘handshape’. I will focus on their multiple functions in the grammar of NGT and argue that the framework of Distributed Morphology (DM), which accounts for word formation in spoken languages, is also suited to account for the formation of structures in sign languages. First I will introduce the phonological and morphological structure of NGT signs. Then, I will briefly outline the major characteristics of the DM framework. Finally, I will account for signs that have the same surface form but a different morphological structure by means of that framework.
Debevc, Matjaž; Milošević, Danijela; Kožuh, Ines
One important theme in captioning is whether the implementation of captions in individual sign language interpreter videos can positively affect viewers' comprehension when compared with sign language interpreter videos without captions. In our study, an experiment was conducted using four video clips with information about everyday events. Fifty-one deaf and hard of hearing sign language users alternately watched the sign language interpreter videos with, and without, captions. Afterwards, they answered ten questions. The results showed that the presence of captions positively affected their rates of comprehension, which increased by 24% among deaf viewers and 42% among hard of hearing viewers. The most obvious differences in comprehension between watching sign language interpreter videos with and without captions were found for the subjects of hiking and culture, where comprehension was higher when captions were used. The results led to suggestions for the consistent use of captions in sign language interpreter videos in various media.
Schembri, Adam; Wigglesworth, Gillian; Johnston, Trevor; Leigh, Greg; Adam, Robert; Barker, Roz
In this article, we outline the initial stages in development of an assessment instrument for Australian Sign Language and explore issues involved in the development of such a test. We first briefly describe the instruments currently available for assessing grammatical skills in Australian Sign Language and discuss the need for a more objective measure. We then describe our adaptation of an existing American Sign Language test, the Test Battery for American Sign Language Morphology and Syntax. Finally, this article presents some of the data collected from a group of deaf native signers. These data are used to demonstrate the range of variability in key grammatical features of Australian Sign Language and to raise methodological issues associated with signed language test design.
Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D
The aim of the present study was to characterize effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 ASL learners performed this task in the fMRI scanner. Results indicated that L2 American Sign Language (ASL) learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learner's rating on co-sign speech use and lipreading ability was correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. Copyright © 2015 Elsevier B.V. All rights reserved.
Caselli, Naomi K; Pyers, Jennie E
Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children's acquisition of new words, spoken or signed. We asked whether iconicity's prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children's productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure between signed and spoken language, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.
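The reanalysis above fits mixed-effects logistic regressions predicting whether a sign is produced from iconicity, neighborhood density, and frequency. As a simplified illustration of how such predictor effects on acquisition odds are estimated, here is a fixed-effects-only logistic regression on simulated data; the random effects for children and signs that the study's models include are deliberately omitted, and all names and the simulated predictors are assumptions:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    # Plain logistic regression fit by gradient ascent on the
    # log-likelihood; returns [intercept, coef_1, ..., coef_k].
    X = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend intercept column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))               # predicted probabilities
        w += lr * X.T @ (y - p) / len(y)           # average gradient step
    return w
```

Fitting this to data simulated with positive iconicity and frequency effects recovers positive coefficients for both, mirroring the direction of the study's findings (though not its mixed-effects machinery).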
Padden, Carol; Hwang, So-One; Lepic, Ryan; Seegers, Sharon
When naming certain hand-held, man-made tools, American Sign Language (ASL) signers exhibit either of two iconic strategies: a handling strategy, where the hands show holding or grasping an imagined object in action, or an instrument strategy, where the hands represent the shape or a dimension of the object in a typical action. The same strategies are also observed in the gestures of hearing nonsigners identifying pictures of the same set of tools. In this paper, we compare spontaneously created gestures from hearing nonsigning participants to commonly used lexical signs in ASL. Signers and gesturers were asked to respond to pictures of tools and to video vignettes of actions involving the same tools. Nonsigning gesturers overwhelmingly prefer the handling strategy for both the Picture and Video conditions. Nevertheless, they use more instrument forms when identifying tools in pictures, and more handling forms when identifying actions with tools. We found that ASL signers generally favor the instrument strategy when naming tools, but when describing tools being used by an actor, they are significantly more likely to use more handling forms. The finding that both gesturers and signers are more likely to alternate strategies when the stimuli are pictures or video suggests a common cognitive basis for differentiating objects from actions. Furthermore, the presence of a systematic handling/instrument iconic pattern in a sign language demonstrates that a conventionalized sign language exploits the distinction for grammatical purpose, to distinguish nouns and verbs related to tool use. Copyright © 2014 Cognitive Science Society, Inc.
Kristoffersen, Jette Hedegaard; Boye Niemela, Janne
The Danish Sign Language dictionary project aims at creating an electronic dictionary of the basic vocabulary of Danish Sign Language. One of many issues in compiling the dictionary has been to analyse the status of mouth patterns in Danish Sign Language and, consequently, to decide at which level mouth patterns should be described in the dictionary: that is, either at the entry level or at the meaning level.
Witko, Joanne; Boyles, Pauline; Smiler, Kirsten; McKee, Rachel
The research described was undertaken as part of a Sub-Regional Disability Strategy 2017-2022 across the Wairarapa, Hutt Valley and Capital and Coast District Health Boards (DHBs). The aim was to investigate deaf New Zealand Sign Language (NZSL) users' quality of access to health services. Findings have formed the basis for developing a 'NZSL plan' for DHBs in the Wellington sub-region. Qualitative data was collected from 56 deaf participants and family members about their experiences of healthcare services via focus group, individual interviews and online survey, which were thematically analysed. Contextual perspective was gained from 57 healthcare professionals at five meetings. Two professionals were interviewed, and 65 staff responded to an online survey. A deaf steering group co-designed the framework and methods, and validated findings. Key issues reported across the health system include: inconsistent interpreter provision; lack of informed consent for treatment via communication in NZSL; limited access to general health information in NZSL and the reduced ability of deaf patients to understand and comply with treatment options. This problematic communication with NZSL users echoes international evidence and other documented local evidence for patients with limited English proficiency. Deaf NZSL users face multiple barriers to equitable healthcare, stemming from linguistic and educational factors and inaccessible service delivery. These need to be addressed through policy and training for healthcare personnel that enable effective systemic responses to NZSL users. Deaf participants emphasise that recognition of their identity as members of a language community is central to improving their healthcare experiences.
Vesel, J.; Hurdich, J.
TERC and Vcom3D used the SigningAvatar® accessibility software to research and develop a Signing Earth Science Dictionary (SESD) of approximately 750 standards-based Earth science terms for high school students who are deaf and hard of hearing and whose first language is sign. The partners also evaluated the extent to which use of the SESD furthers understanding of Earth science content, command of the language of Earth science, and the ability to study Earth science independently. Disseminated as a Web-based version and App, the SESD is intended to serve the ~36,000 grade 9-12 students who are deaf or hard of hearing and whose first language is sign, the majority of whom leave high school reading at the fifth grade or below. It is also intended for teachers and interpreters who interact with members of this population and professionals working with Earth science education programs during field trips, internships etc. The signed SESD terms have been incorporated into a Mobile Communication App (MCA). This App for Androids is intended to facilitate communication between English speakers and persons who communicate in American Sign Language (ASL) or Signed English. It can translate words, phrases, or whole sentences from written or spoken English to animated signing. It can also fingerspell proper names and other words for which there are no signs. For our presentation, we will demonstrate the interactive features of the SigningAvatar® accessibility software that support the three principles of Universal Design for Learning (UDL) and have been incorporated into the SESD and MCA. Results from national field-tests will provide insight into the SESD's and MCA's potential applicability beyond grade 12 as accommodations that can be used for accessing the vocabulary deaf and hard of hearing students need for study of the geosciences and for facilitating communication about content. This work was funded in part by grants from NSF and the U.S. Department of Education.
Yusa, Noriaki; Kim, Jungho; Koizumi, Masatoshi; Sugiura, Motoaki; Kawashima, Ryuta
Children naturally acquire a language in social contexts where they interact with their caregivers. Indeed, research shows that social interaction facilitates lexical and phonological development at the early stages of child language acquisition. It is not clear, however, whether the relationship between social interaction and learning applies to adult second language acquisition of syntactic rules. Does learning second language syntactic rules with or without social interaction with a native speaker have an impact on behavior and the brain? The current study aims to answer this question. Adult Japanese participants learned a new foreign language, Japanese sign language (JSL), either through a native deaf signer or via DVDs. Neural correlates of acquiring new linguistic knowledge were investigated using functional magnetic resonance imaging (fMRI). The participants in each group were indistinguishable in terms of their behavioral data after the instruction. The fMRI data, however, revealed significant differences in the neural activities between the two groups. Significant activations in the left inferior frontal gyrus (IFG) were found for the participants who learned JSL through interactions with the native signer. In contrast, no cortical activation change in the left IFG was found for the group who experienced the same visual input for the same duration via the DVD presentation. Given that the left IFG is involved in the syntactic processing of language, spoken or signed, learning through social interactions resulted in an fMRI signature typical of native speakers: activation of the left IFG. Thus, broadly speaking, availability of communicative interaction is necessary for second language acquisition and this results in observed changes in the brain.
Pfau, R.; Steinbach, M.
Studies on sign language grammaticalization have demonstrated that most of the attested diachronic changes from lexical to functional element parallel those previously described for spoken languages. To date, most of these studies are either descriptive in nature or embedded within
Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.
To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H₂¹⁵O PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.
Fenlon, Jordan; Schembri, Adam; Rentelis, Ramas; Cormier, Kearsy
This paper investigates phonological variation in British Sign Language (BSL) signs produced with a '1' hand configuration in citation form. Multivariate analyses of 2084 tokens reveal that handshape variation in these signs is constrained by linguistic factors (e.g., the preceding and following phonological environment, grammatical category, indexicality, lexical frequency). The only significant social factor was region. For the subset of signs where orientation was also investigated, only grammatical function was important (the surrounding phonological environment and social factors were not significant). The implications for an understanding of pointing signs in signed languages are discussed.
Elakkiya, R.; Selvamani, K.
Subunit segmentation and modelling in medical sign language is an important topic in linguistics-oriented and vision-based Sign Language Recognition (SLR). Many previous efforts approached functional subunits from the perspective of linguistic syllables, but subunit extraction based on syllables is not feasible with real-world computer vision techniques. Moreover, present recognition systems are designed to detect signer-dependent actions only under restricted laboratory conditions. This research paper aims to solve these two important issues in visual sign language recognition: (1) subunit extraction and (2) signer-independent recognition. Subunit extraction involves the sequential and parallel decomposition of sign gestures without any prior knowledge of syllables or of the number of subunits. A novel Bayesian Parallel Hidden Markov Model (BPaHMM) is introduced for subunit extraction, combining the features of manual and non-manual parameters to yield better classification and recognition of signs. Signer independence is addressed by using a single web camera across different signer behaviour patterns and by cross-signer validation. Experimental results show that the proposed signer-independent subunit-level modelling for sign language classification and recognition improves on other existing work.
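The BPaHMM above builds on standard hidden Markov machinery. As background only (the Bayesian parallel coupling of manual and non-manual channels is not specified in the abstract, so the function name, parameters, and discrete-observation setting below are illustrative), a minimal sketch of the HMM forward algorithm used to score a feature sequence against a per-subunit model:

```python
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm and per-step rescaling.

    obs:   sequence of symbol indices (e.g. quantised motion features)
    start: (S,)   initial state probabilities
    trans: (S, S) state transition matrix
    emit:  (S, V) emission probabilities over V symbols
    """
    alpha = start * emit[:, obs[0]]          # initialise with first emission
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()              # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o] # propagate states, then emit
        c = alpha.sum()
        log_p += np.log(c)
        alpha = alpha / c
    return log_p
```

In a subunit-level recogniser, each candidate subunit would get its own `(start, trans, emit)` triple, and segmentation/classification would pick the model sequence maximising total log-likelihood.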
Maarif, H. A.; Akmeliawati, R.; Gunawan, T. S.; Shafie, A. A.
A sign language synthesizer visualizes sign language movements from spoken language input. Sign language (SL) is one of the means used by hearing- and speech-impaired (HSI) people to communicate with hearing people. Unfortunately, the number of people, including HSI people themselves, who are familiar with sign language is very limited, which causes difficulties in communication between hearing and HSI people. Sign language consists not only of hand movement but also of facial expression, and the two elements complement each other: the hand movement conveys the meaning of each sign, while the facial expression conveys the signer's emotion. Generally, a sign language synthesizer recognizes the spoken language using speech recognition, the grammatical processing involves a context-free grammar, and a 3D synthesizer renders a recorded avatar. This paper analyzes and compares existing techniques for developing a sign language synthesizer, leading to the IIUM Sign Language Synthesizer.
Wolbers, Kimberly A.; Bowers, Lisa M.; Dostal, Hannah M.; Graham, Shannon C.
Language transfer theory elucidates how first language (L1) knowledge and grammatical features are applied in second language (L2) writing. Deaf and hard of hearing (d/hh) students who use or are developing American Sign Language (ASL) as their L1 may demonstrate the use of ASL linguistic features in their writing of English. In this study, we…
Weaver, Kimberly A.; Starner, Thad
Language immersion from birth is crucial to a child's language development. However, language immersion can be particularly challenging for hearing parents of deaf children to provide as they may have to overcome many difficulties while learning American Sign Language (ASL). We are in the process of creating a mobile application to help hearing…
Sign language is a visual language used by deaf people. One difficulty of sign language recognition is that sign instances vary in both motion and shape in three-dimensional (3D) space. In this research, we use 3D depth information from hand motions, generated from Microsoft’s Kinect sensor, and apply a hierarchical conditional random field (CRF) that recognizes hand signs from the hand motions. The proposed method uses a hierarchical CRF to detect candidate segments of signs using hand motions, and then a BoostMap embedding method to verify the hand shapes of the segmented signs. Experiments demonstrated that the proposed method could recognize signs from signed sentence data at a rate of 90.4%.
Two experiments were designed to investigate the role of visuo-spatial short-term memory in vocabulary learning of Sign Language as a second language. In both experiments, subjects with no prior experience of Sign Language learning were required to encode new Sign Language words by associating them with their meanings (presented as Japanese words). A 2×2×3 factorial design was used in experiment 1: the first variable was the presence or absence of a spatial concurrent task, the second was high- or low-imagery of t...
The American Sign Language Sentence Reproduction Test (ASL-SRT) requires the precise reproduction of a series of ASL sentences increasing in complexity and length. Error analyses of such tasks provide insight into working memory and scaffolding processes. Data were collected from three groups expected to differ in fluency: deaf children, deaf adults and hearing adults, all users of ASL. Quantitative (correct/incorrect recall) and qualitative error analyses were performed. Percent correct on the reproduction task supports its sensitivity to fluency, as test performance clearly differed across the three groups studied. A linguistic analysis of errors further documented differing strategies and biases across groups. Subjects’ recall projected the affordances and constraints of deep linguistic representations to differing degrees, with subjects resorting to alternate processing strategies in the absence of linguistic knowledge. A qualitative error analysis allows us to capture generalizations about the relationship between error patterns and the cognitive scaffolding which governs the sentence reproduction process. Highly fluent signers and less fluent signers share common chokepoints on particular words in sentences. However, they diverge in heuristic strategy. Fluent signers, when they make an error, tend to preserve semantic details while altering morpho-syntactic domains. They produce syntactically correct sentences with meaning equivalent to the to-be-reproduced one, but these are not verbatim reproductions of the original sentence. In contrast, less fluent signers tend to use a more linear strategy, preserving lexical status and word ordering while omitting local inflections, and occasionally resorting to visuo-motoric imitation. Thus, whereas fluent signers readily use top-down scaffolding in their working memory, less fluent signers fail to do so. Implications for current models of working memory across spoken and signed modalities are…
Shojaei, Abouzar; Motallebzadeh, Khalil
This very helpful book provides information and knowledge about the use of technology in language learning and teaching. It gives detailed consideration to articulatory and auditory language learning, as well as to the practicalities of English language learning, and discusses the relationship between English language learning and technology.
Newman, Aaron J.; Supalla, Ted; Hauser, Peter; Newport, Elissa; Bavelier, Daphne
Signed languages such as American Sign Language (ASL) are natural human languages that share all of the core properties of spoken human languages, but differ in the modality through which they are communicated. Neuroimaging and patient studies have suggested similar left hemisphere (LH)-dominant patterns of brain organization for signed and spoken languages, suggesting that the linguistic nature of the information, rather than modality, drives brain organization for language. However, the role of the right hemisphere (RH) in sign language has been less explored. In spoken languages, the RH supports the processing of numerous types of narrative-level information, including prosody, affect, facial expression, and discourse structure. In the present fMRI study, we contrasted the processing of ASL sentences that contained these types of narrative information with similar sentences without marked narrative cues. For all sentences, Deaf native signers showed robust bilateral activation of perisylvian language cortices, as well as the basal ganglia, medial frontal and medial temporal regions. However, RH activation in the inferior frontal gyrus and superior temporal sulcus was greater for sentences containing narrative devices, including areas involved in processing narrative content in spoken languages. These results provide additional support for the claim that all natural human languages rely on a core set of LH brain regions, and extend our knowledge to show that narrative linguistic functions typically associated with the RH in spoken languages are similarly organized in signed languages.
Lu, Aitao; Yu, Yanping; Niu, Jiaxin; Zhang, John X
The present study was carried out to investigate whether sign language structure plays a role in the processing of complex words (i.e., derivational and compound words), in particular, the delay of complex word reading in deaf adolescents. Chinese deaf adolescents were found to respond faster to derivational words than to compound words for one-sign-structure words, but showed comparable performance for two-sign-structure words. For both derivational and compound words, response latencies to one-sign-structure words were shorter than to two-sign-structure words. These results provide strong evidence that the structure of sign language affects written word processing in Chinese. Additionally, differences between derivational and compound words in the one-sign-structure condition indicate that Chinese deaf adolescents acquire print morphological awareness. The results also showed that delayed word reading was found in derivational words with two signs (DW-2), compound words with one sign (CW-1), and compound words with two signs (CW-2), but not in derivational words with one sign (DW-1), with the delay being maximum in DW-2, medium in CW-2, and minimum in CW-1, suggesting that the structure of sign language has an impact on the delayed processing of Chinese written words in deaf adolescents. These results provide insight into the mechanisms about how sign language structure affects written word processing and its delayed processing relative to their hearing peers of the same age.
Karovska Ristovska, Aleksandra
Aleksandra Karovska Ristovska, M.A. in special education and rehabilitation sciences, defended her doctoral thesis on 9 March 2014 at the Institute of Special Education and Rehabilitation, Faculty of Philosophy, University “Ss. Cyril and Methodius”, Skopje, before a commission composed of: Prof. Zora Jachova, PhD; Prof. Jasmina Kovachevikj, PhD; Prof. Ljudmil Spasov, PhD; Prof. Goran Ajdinski, PhD; Prof. Daniela Dimitrova Radojicikj, PhD. Macedonian Sign Language is a natural language used by the Deaf community in the Republic of Macedonia. This doctoral thesis analyses the characteristics of Macedonian Sign Language: its phonology, morphology and syntax, and compares Macedonian Sign Language with American Sign Language. William Stokoe was the first to research American Sign Language, in the 1960s; he laid the foundation for linguistic research on sign languages. The analysis of signs in Macedonian Sign Language was made according to Stokoe’s parameters: location, hand shape and movement. Lexicostatistics showed that MSL and ASL belong to different language families. Despite this, they share some iconic signs, whose presence can be attributed to lexical borrowing. Phonologically, in both ASL and MSL, changing one of Stokoe’s categories changes the meaning of the word. Non-manual signs, which are grammatical markers in sign languages, are identical in ASL and MSL. The production of compounds and of plural forms is identical in both sign languages, as is the inflection of verbs. The research showed that the most common word order in ASL and MSL is SVO (subject-verb-object), while SOV and OVS orders are seldom met. Questions and negative sentences are produced identically in ASL and MSL.
Werngren-Elgström, Monica; Brandt, Ase; Iwarsson, Susanne
The purpose of this study was to describe the everyday activities and social contacts among older deaf sign language users, and to investigate relationships between these phenomena and the health and well-being within this group. The study population comprised deaf sign language users, 65 years o...
… and services. One such mechanism is embedding animated Sign Language in Web pages. This paper analyses the effectiveness and appropriateness of this approach by embedding South African Sign Language in the South African National Accessibility Portal...
Costello, B.; Fernández, J.; Landa, A.; Quadros, R. Müller de
This paper examines the concept of a native language user and looks at the different definitions of native signer within the field of sign language research. A description of the deaf signing population in the Basque Country shows that the figure of 5-10% typically cited for deaf individuals born…
Visser-Bochane, Margot I.; Gerrits, Ellen; van der Schans, Cees P.; Reijneveld, Sijmen A.; Luinge, Margreet R.
Background: Atypical speech and language development is one of the most common developmental difficulties in young children. However, which clinical signs characterize atypical speech-language development at what age is not clear. Aim: To achieve a national and valid consensus on clinical signs and…
Lutalo-Kiingi, Sam; De Clerck, Goedele A. M.
This article has been excerpted from "Perspectives on the Sign Language Factor in Sub-Saharan Africa: Challenges of Sustainability" (Lutalo-Kiingi and De Clerck) in "Sign Language, Sustainable Development, and Equal Opportunities: Envisioning the Future for Deaf Students" (G. A. M. De Clerck and P. V. Paul (Eds.) 2016). In this…
Shaw, Emily P.
This dissertation is an examination of gesture in two game nights: one in spoken English between four hearing friends and another in American Sign Language between four Deaf friends. Analyses of gesture have shown there exists a complex integration of manual gestures with speech. Analyses of sign language have implicated the body as a medium…
Vinson, David; Perniss, Pamela; Fox, Neil; Vigliocco, Gabriella
Previous studies show that reading sentences about actions leads to specific motor activity associated with actually performing those actions. We investigate how sign language input may modulate motor activation, using British Sign Language (BSL) sentences, some of which explicitly encode direction of motion, versus written English, where motion…
Haug, Tobias; Herman, Rosalind; Woll, Bencie
This paper presents the features of an online test framework for a receptive skills test that has been adapted, based on a British template, into different sign languages. The online test includes features that meet the needs of the different sign language versions. Features such as usability of the test, automatic saving of scores, and score…
Beal-Alvarez, Jennifer S.
This article presents receptive and expressive American Sign Language skills of 85 students, 6 through 22 years of age at a residential school for the deaf using the American Sign Language Receptive Skills Test and the Ozcaliskan Motion Stimuli. Results are presented by ages and indicate that students' receptive skills increased with age and…
Casey, Shannon; Emmorey, Karen; Larrabee, Heather
Given that the linguistic articulators for sign language are also used to produce co-speech gesture, we examined whether one year of academic instruction in American Sign Language (ASL) impacts the rate and nature of gestures produced when speaking English. A survey study revealed that 75% of ASL learners (N = 95), but only 14% of Romance language…
Slovene Sign Language (SZJ) has as yet received little attention from linguists. This article presents some basic facts about SZJ, its history, current status, and a description of the Slovene Sign Language Corpus and Pilot Grammar (SIGNOR) project, which compiled and annotated a representative corpus of SZJ. Finally, selected quantitative data…
Kushalnagar, Poorna; Naturale, Joan; Paludneviciene, Raylene; Smith, Scott R.; Werfel, Emily; Doolittle, Richard; Jacobs, Stephen; DeCaro, James
To date, there have been efforts towards creating better health information access for Deaf American Sign Language (ASL) users. However, the usability of websites with access to health information in ASL has not been evaluated. Our paper focuses on the usability of four health websites that include ASL videos. We seek to obtain ASL users’ perspectives on the navigation of these ASL-accessible websites, finding the health information that they needed, and perceived ease of understanding ASL video content. ASL users (N=32) were instructed to find specific information on four ASL-accessible websites, and answered questions related to: 1) navigation to find the task, 2) website usability, and 3) ease of understanding ASL video content for each of the four websites. Participants also gave feedback on what they would like to see in an ASL health library website, including the benefit of added captioning and/or signer model to medical illustration of health videos. Participants who had lower health literacy had greater difficulty in finding information on ASL-accessible health websites. This paper also describes the participants’ preferences for an ideal ASL-accessible health website, and concludes with a discussion on the role of accessible websites in promoting health literacy in ASL users.
Ben Jmaa, Ahmed; Mahdi, Walid; Ben Jemaa, Yousra; Ben Hamadou, Abdelmajid
We present in this paper a new approach for Arabic sign language (ArSL) alphabet recognition using hand gesture analysis. This analysis consists of extracting histogram of oriented gradients (HOG) features from a hand image and then using them to train an SVM model, which is used to recognize the ArSL alphabet in real time from hand gestures captured with a Microsoft Kinect camera. Our approach involves three steps: (i) hand detection and localization using a Microsoft Kinect camera, (ii) hand segmentation, and (iii) feature extraction for Arabic alphabet recognition. For each input image obtained from the depth sensor, we apply a method based on hand anatomy to segment the hand and eliminate erroneous pixels. This approach is invariant to scale, rotation, and translation of the hand. Experimental results show the effectiveness of our new approach: the proposed ArSL system is able to recognize the ArSL alphabet with an accuracy of 90.12%.
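The HOG feature step described above can be sketched at the cell level as follows. This is a minimal illustration only: the paper's actual cell size, bin count, and block-normalisation scheme are not given in the abstract, so the parameters below are assumptions, and a real system would feed these descriptors to an SVM classifier.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Cell-level histogram-of-oriented-gradients descriptor for a
    grayscale image (2D float array). Cell size and bin count are
    illustrative defaults, not the paper's parameters."""
    # Centred finite-difference gradients
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180      # unsigned orientation

    h, w = img.shape
    ny, nx = h // cell, w // cell
    hist = np.zeros((ny, nx, bins))
    bin_w = 180 / bins
    for i in range(ny):
        for j in range(nx):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            idx = np.minimum((a // bin_w).astype(int), bins - 1)
            np.add.at(hist[i, j], idx, m)           # magnitude-weighted vote
    # L2-normalise each cell histogram
    hist /= (np.linalg.norm(hist, axis=2, keepdims=True) + 1e-6)
    return hist.ravel()
```

A vertical edge, for example, produces purely horizontal gradients, so its votes concentrate in the 0-degree orientation bin.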
Luinge, Margreet R.; Visser-Bochane, Margot I.; van der Schans, C. P.; Reijneveld, Sijmen A.; Krijnen, W. P.
Speech-language disorders, which include speech sound disorders and language disorders, are common in early childhood. These problems, and in particular language problems, frequently go underdiagnosed, because current screening instruments lack satisfactory psychometric properties. Recent research…
Motoyasu, Kyoko; Sato, Rie
The purpose of this investigation was to clarify how sign language is used as a way of communication among people with hearing impairments as compared to other ways such as the oral method, writing and phonetic sign. A questionnaire concerning frequency of using sign language and the degree of understanding, transmission, and satisfaction with its use was sent to 36 students of a junior and senior high school for the deaf-mute and 43 hearing impaired adults belonging to a communication suppor...
Strickland, Brent; Geraci, Carlo; Chemla, Emmanuel; Schlenker, Philippe; Kelepir, Meltem; Pfau, Roland
According to a theoretical tradition dating back to Aristotle, verbs can be classified into two broad categories. Telic verbs (e.g., "decide," "sell," "die") encode a logical endpoint, whereas atelic verbs (e.g., "think," "negotiate," "run") do not, and the denoted event could therefore logically continue indefinitely. Here we show that sign languages encode telicity in a seemingly universal way and moreover that even nonsigners lacking any prior experience with sign language understand these encodings. In experiments 1-5, nonsigning English speakers accurately distinguished between telic (e.g., "decide") and atelic (e.g., "think") signs from (the historically unrelated) Italian Sign Language, Sign Language of the Netherlands, and Turkish Sign Language. These results were not due to participants' inferring that the sign merely imitated the action in question. In experiment 6, we used pseudosigns to show that the presence of a salient visual boundary at the end of a gesture was sufficient to elicit telic interpretations, whereas repeated movement without salient boundaries elicited atelic interpretations. Experiments 7-10 confirmed that these visual cues were used by all of the sign languages studied here. Together, these results suggest that signers and nonsigners share universally accessible notions of telicity as well as universally accessible "mapping biases" between telicity and visual form.
In this paper we compare five Finno-Ugric languages – Estonian, Finnish, Hungarian, Udmurt and Komi-Zyrian – and Estonian Sign Language (unclassified) in different aspects: established basic colour terms, the proportion of basic colour terms and different colour terms in the collected word-corpora, the cognitive salience index values in the list task, and the number of dominant colour tiles in the colour naming task. The data was collected, using the field method of Davies and Corbett, from all languages under consideration, providing a distinctive foundation for linguistic comparison. We argue that Finno-Ugric languages seem to possess relatively large colour vocabularies, especially due to their rich variety of word-formation types, e.g. the composition of compound words. All of the languages under consideration have developed to Stage VI or VII, possessing 7 to 11 lexicalised basic colour terms. The cognitive salience index helps to distinguish primary and secondary basic colour terms, showing certain comprehensive patterns which are similar to Russian and English.
Quinto-Pozos, David; Singleton, Jenny L; Hauser, Peter C
This article describes the case of a deaf native signer of American Sign Language (ASL) with a specific language impairment (SLI). School records documented normal cognitive development but atypical language development. Data include school records; interviews with the child, his mother, and school professionals; ASL and English evaluations; and a comprehensive neuropsychological and psychoeducational evaluation, and they span an approximate period of 7.5 years (11;10-19;6) including scores from school records (11;10-16;5) and a 3.5-year period (15;10-19;6) during which we collected linguistic and neuropsychological data. Results revealed that this student has average intelligence, intact visual perceptual skills, visuospatial skills, and motor skills but demonstrates challenges with some memory and sequential processing tasks. Scores from ASL testing signaled language impairment and marked difficulty with fingerspelling. The student also had significant deficits in English vocabulary, spelling, reading comprehension, reading fluency, and writing. Accepted SLI diagnostic criteria exclude deaf individuals from an SLI diagnosis, but the authors propose modified criteria in this work. The results of this study have practical implications for professionals including school psychologists, speech language pathologists, and ASL specialists. The results also support the theoretical argument that SLI can be evident regardless of the modality in which it is communicated.
Hänel-Faulhaber, Barbara; Skotara, Nils; Kügow, Monique; Salden, Uta; Bottari, Davide; Röder, Brigitte
The present study investigated the neural correlates of sign language processing of Deaf people who had learned German Sign Language (Deutsche Gebärdensprache, DGS) from their Deaf parents as their first language. Correct and incorrect signed sentences were presented sign by sign on a computer screen. At the end of each sentence the participants had to judge whether or not the sentence was an appropriate DGS sentence. Two types of violations were introduced: (1) semantically incorrect sentences containing a selectional restriction violation (implausible object); (2) morphosyntactically incorrect sentences containing a verb that was incorrectly inflected (i.e., incorrect direction of movement). Event-related brain potentials (ERPs) were recorded from 74 scalp electrodes. Semantic violations (implausible signs) elicited an N400 effect followed by a positivity. Sentences with a morphosyntactic violation (verb agreement violation) elicited a negativity followed by a broad centro-parietal positivity. ERP correlates of semantic and morphosyntactic aspects of DGS clearly differed from each other and showed a number of similarities with those observed in other signed and oral languages. These data suggest a similar functional organization of signed and oral languages despite the visual-spatial modality of sign language.
Halim, Zahid; Abbas, Ghulam
Sign language provides hearing- and speech-impaired individuals with an interface to communicate with other members of society. Unfortunately, sign language is not understood by most people. A gadget based on image processing and pattern recognition can therefore provide a vital aid for detecting and translating sign language into a vocal language. This work presents a system for detecting and understanding sign language gestures with a custom-built software tool and then translating each gesture into a vocal language. To recognize a particular gesture, the system employs a Dynamic Time Warping (DTW) algorithm, and an off-the-shelf software tool is employed for vocal language generation. Microsoft Kinect is the primary tool used to capture the video stream of a user. The proposed method is capable of successfully detecting gestures stored in the dictionary with an accuracy of 91%. The proposed system has the ability to define and add custom-made gestures. Based on an experiment in which 10 individuals with impairments used the system to communicate with 5 people with no disability, 87% agreed that the system was useful.
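The DTW matching at the heart of such a system can be sketched as follows. This is a minimal, generic DTW nearest-template classifier, not the paper's implementation: the feature representation (here, per-frame vectors such as Kinect joint coordinates) and the dictionary structure are assumptions.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two feature sequences.

    a, b: arrays of shape (length, dims), e.g. per-frame joint
    coordinates from a Kinect skeleton stream.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # skip a query frame
                                 cost[i, j - 1],      # skip a template frame
                                 cost[i - 1, j - 1])  # match frames
    return cost[n, m]

def classify(query, dictionary):
    """Return the name of the dictionary gesture nearest under DTW."""
    return min(dictionary, key=lambda name: dtw_distance(query, dictionary[name]))
```

Because DTW warps the time axis, a gesture performed slower or faster than its stored template still matches; adding a custom gesture is just adding another template to the dictionary.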
The IAEA and the International Science and Technology Center (ISTC) today signed an agreement that calls for an increase in cooperation between the two organizations. The memorandum of understanding seeks to amplify their collaboration in the research and development of applications and technology that could contribute to the IAEA's activities in the fields of verification and nuclear security, including training and capacity building. IAEA Safeguards Director of Technical Support Nikolay Khlebnikov and ISTC Executive Director Adriaan van der Meer signed the Agreement at IAEA headquarters in Vienna on 22 October 2008. (IAEA)
Beal-Alvarez, Jennifer S; Figueroa, Daileen M
Two key areas of language development include semantic and phonological knowledge. Semantic knowledge relates to word and concept knowledge. Phonological knowledge relates to how language parameters combine to create meaning. We investigated signing deaf adults' and children's semantic and phonological sign generation via one-minute tasks, including animals, foods, and specific handshapes. We investigated the effects of chronological age, age of sign language acquisition/years at school site, gender, presence of a disability, and geographical location (i.e., USA and Puerto Rico) on participants' performance and relations among tasks. In general, the phonological task appeared more difficult than the semantic tasks; students generated more animals than foods; age and semantic performance correlated for the larger sample of U.S. students; and geographical variation included use of fingerspelling and specific signs. Compared to their peers, deaf students with disabilities generated fewer semantic items. These results provide an initial snapshot of students' semantic and phonological sign generation.
Visser-Bochane, Margot I; Gerrits, Ellen; van der Schans, Cees P; Reijneveld, Sijmen A; Luinge, Margreet R
Atypical speech and language development is one of the most common developmental difficulties in young children. However, which clinical signs characterize atypical speech-language development at what age is not clear. To achieve a national and valid consensus on clinical signs and red flags (i.e. most urgent clinical signs) for atypical speech-language development in children from 1 to 6 years of age. A two-round Delphi study in the Netherlands with a national expert panel (n = 24) of speech and language therapists was conducted. The panel members responded to web-based questionnaires addressing clinical signs. Consensus was defined as ≥ 70% of the experts agreeing on an issue. The first round resulted in a list of 161 characteristics of atypical speech and language development. The second round led to agreement on 124 clinical signs and 34 red flags. Dutch national consensus concerns 17-23 clinical signs per age year for the description of an atypical speech-language development in young children and three to 10 characteristics per age year being red flags for atypical speech-language development. This consensus contributes to early identification and diagnosis of children with atypical speech-language development, awareness and research.
Heiman, Erica; Haynes, Sharon; McKee, Michael
Background Little is known about the sexual health behaviors of Deaf American Sign Language (ASL) users. Objective We sought to characterize the self-reported sexual behaviors of Deaf individuals. Methods Responses from 282 Deaf participants aged 18-64 from the greater Rochester, NY area who participated in the 2008 Deaf Health Survey were analyzed. These data were compared with weighted data from a general population comparison group (N=1890). We looked at four sexual health-related outcomes: abstinence within the past year; number of sexual partners within the last year; condom use at last intercourse; and ever having been tested for HIV. We performed descriptive analyses, including stratification by gender, age, income, marital status, and educational level. Results Deaf respondents were more likely than general population respondents to self-report two or more sexual partners in the past year (30.9% vs 10.1%) but also self-reported higher condom use at last intercourse (28.0% vs 19.8%). HIV testing rates were similar between groups (47.5% vs 49.4%) but lower for certain Deaf groups: Deaf women (46.0% vs. 58.1%), lower-income Deaf respondents (44.4% vs. 69.7%), and less educated Deaf respondents (31.3% vs. 57.7%), compared with the corresponding general population groups. Conclusion Deaf respondents self-reported higher numbers of sexual partners over the past year than the general population. Condom use was higher among Deaf participants. HIV testing rates were similar between groups overall, though significantly lower among lower-income, less well-educated, and female Deaf respondents. Deaf individuals have a sexual health risk profile that is distinct from that of the general population. PMID:26242551
Mounty, Judith L.; Pucci, Concetta T.; Harmon, Kristen C.
A primary tenet underlying American Sign Language/English bilingual education for deaf students is that early access to a visual language, developed in conjunction with language planning principles, provides a foundation for literacy in English. The goal of this study is to obtain an emic perspective on bilingual deaf readers transitioning from…
Martino, Juan; Velasquez, Carlos; Vázquez-Bourgon, Javier; de Lucas, Enrique Marco; Gomez, Elsa
Modern sign languages used by deaf people are fully expressive, natural human languages that are perceived visually and produced manually. The literature contains little data concerning human brain organization in conditions of deficient sensory information such as deafness. A deaf-mute patient underwent surgery of a left temporoinsular low-grade glioma. The patient underwent awake surgery with intraoperative electrical stimulation mapping, allowing direct study of the cortical and subcortical organization of sign language. We found a similar distribution of language sites to what has been reported in mapping studies of patients with oral language, including 1) speech perception areas inducing anomias and alexias close to the auditory cortex (at the posterior portion of the superior temporal gyrus and supramarginal gyrus); 2) speech production areas inducing speech arrest (anarthria) at the ventral premotor cortex, close to the lip motor area and away from the hand motor area; and 3) subcortical stimulation-induced semantic paraphasias at the inferior fronto-occipital fasciculus at the temporal isthmus. The intraoperative setup for sign language mapping with intraoperative electrical stimulation in deaf-mute patients is similar to the setup described in patients with oral language. To elucidate the type of language errors, a sign language interpreter in close interaction with the neuropsychologist is necessary. Sign language is perceived visually and produced manually; however, this case revealed a cross-modal recruitment of auditory and orofacial motor areas. Copyright © 2017 Elsevier Inc. All rights reserved.
Memiş, Abbas; Albayrak, Songül
This paper presents a sign language recognition system that uses spatio-temporal features on RGB video images and depth maps for dynamic gestures of Turkish Sign Language (TSL). The proposed system uses a motion-difference-and-accumulation approach for temporal gesture analysis. The motion accumulation method, an effective method for temporal-domain analysis of gestures, produces an accumulated motion image by combining the differences of successive video frames. The 2D Discrete Cosine Transform (DCT) is then applied to the accumulated motion images, transforming the temporal-domain features into the spatial domain. These processes are performed separately on the RGB images and the depth maps. The DCT coefficients that represent sign gestures are selected via zigzag scanning, and feature vectors are generated. Sign gestures are recognized with a K-Nearest Neighbor classifier using the Manhattan distance. The performance of the proposed system is evaluated on a sign database containing 1002 isolated dynamic signs belonging to 111 words of Turkish Sign Language in three different categories. The proposed system achieves promising success rates.
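The pipeline the abstract describes (frame differencing, motion accumulation, 2D DCT, zigzag coefficient selection, K-NN with Manhattan distance) can be sketched roughly as follows. This is a minimal illustration of the general technique, not the authors' implementation: function names, the exact zigzag variant, and all parameter values are assumptions.

```python
import numpy as np
from scipy.fftpack import dct

def accumulated_motion_image(frames):
    # Sum the absolute differences of successive frames into one motion image.
    frames = np.asarray(frames, dtype=np.float64)
    return np.abs(np.diff(frames, axis=0)).sum(axis=0)

def zigzag_indices(n):
    # Zigzag-style ordering over an n x n grid: low-frequency (small i+j)
    # coefficients first, alternating direction along anti-diagonals.
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[1] if (p[0] + p[1]) % 2 else p[0]))

def dct_features(image, k=16):
    # 2D DCT of the accumulated motion image, keeping the first k
    # coefficients in zigzag order as the feature vector.
    coeffs = dct(dct(image, axis=0, norm='ortho'), axis=1, norm='ortho')
    n = min(image.shape)
    return np.array([coeffs[i, j] for i, j in zigzag_indices(n)[:k]])

def knn_manhattan(train_X, train_y, x, k=1):
    # k-nearest-neighbor vote under the Manhattan (L1) distance.
    d = np.abs(train_X - x).sum(axis=1)
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

In the paper this is done twice, once on the RGB stream and once on the depth maps; a full system would concatenate the two feature vectors before classification.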
Crume, Peter K
The National Reading Panel emphasizes that spoken language phonological awareness (PA) developed at home and school can lead to improvements in reading performance in young children. However, research indicates that many deaf children are good readers even though they have limited spoken language PA. Is it possible that some deaf students benefit from teachers who promote sign language PA instead? The purpose of this qualitative study is to examine teachers' beliefs and instructional practices related to sign language PA. A thematic analysis is conducted on 10 participant interviews at an ASL/English bilingual school for the deaf to understand their views and instructional practices. The findings reveal that the participants had strong beliefs in developing students' structural knowledge of signs and used a variety of instructional strategies to build students' knowledge of sign structures in order to promote their language and literacy skills.
Caporali, Sueli Aparecida; de Lacerda, Cristina Broglia Feitosa; Marques, Penélope Leme
According to the bilingual education approach, only through sign language will deaf children attain the linguistic and cognitive development that enables them to learn a second language, spoken or written. However, it is also necessary for families to learn sign language in order to communicate more efficiently. The aim was to analyze methodological aspects of the teaching-learning process of sign language for family groups. Video recordings were transcribed and analyzed. The deaf teacher's teaching practice changed over the research period, and his attitude influenced the way in which parents participated. The teaching methodology used by the deaf teacher significantly affected the motivation and participation of parents, followed by the acceptance of deafness and of sign language.
This article describes the technologies in use for second language learning, in relation to the major language areas and skills. In order, these are grammar, vocabulary, reading, writing, pronunciation, listening, speaking, and culture. With each language area or skill, the relevant technologies are discussed with examples that illustrate how…
Newman, Aaron J; Supalla, Ted; Fernandez, Nina; Newport, Elissa L; Bavelier, Daphne
Sign languages used by deaf communities around the world possess the same structural and organizational properties as spoken languages: In particular, they are richly expressive and also tightly grammatically constrained. They therefore offer the opportunity to investigate the extent to which the neural organization for language is modality independent, as well as to identify ways in which modality influences this organization. The fact that sign languages share the visual-manual modality with a nonlinguistic symbolic communicative system, gesture, further allows us to investigate where the boundaries lie between language and symbolic communication more generally. In the present study, we had three goals: to investigate the neural processing of linguistic structure in American Sign Language (using verbs of motion classifier constructions, which may lie at the boundary between language and gesture); to determine whether we could dissociate the brain systems involved in deriving meaning from symbolic communication (including both language and gesture) from those specifically engaged by linguistically structured content (sign language); and to assess whether sign language experience influences the neural systems used for understanding nonlinguistic gesture. The results demonstrated that even sign language constructions that appear on the surface to be similar to gesture are processed within the left-lateralized frontal-temporal network used for spoken languages, supporting claims that these constructions are linguistically structured. Moreover, although nonsigners engage regions involved in human action perception to process communicative, symbolic gestures, signers instead engage parts of the language-processing network, demonstrating an influence of experience on the perception of nonlinguistic stimuli.
Leonard, Matthew K; Ferjan Ramirez, Naja; Torres, Christina; Hatrak, Marla; Mayberry, Rachel I; Halgren, Eric
We combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine how sensory modality, language type, and language proficiency interact during two fundamental stages of word processing: (1) an early word encoding stage, and (2) a later supramodal lexico-semantic stage. Adult native English speakers who were learning American Sign Language (ASL) performed a semantic task for spoken and written English words, and ASL signs. During the early time window, written words evoked responses in left ventral occipitotemporal cortex, and spoken words in left superior temporal cortex. Signed words evoked activity in right intraparietal sulcus that was marginally greater than for written words. During the later time window, all three types of words showed significant activity in the classical left fronto-temporal language network, the first demonstration of such activity in individuals with so little second language (L2) instruction in sign. In addition, a dissociation between semantic congruity effects and overall MEG response magnitude for ASL responses suggested shallower and more effortful processing, presumably reflecting novice L2 learning. Consistent with previous research on non-dominant language processing in spoken languages, the L2 ASL learners also showed recruitment of right hemisphere and lateral occipital cortex. These results demonstrate that late lexico-semantic processing utilizes a common substrate, independent of modality, and that proficiency effects in sign language are comparable to those in spoken language.
Dores, Paul A.; Carr, Edward G.
Six nonverbal, autistic boys (ages 6 to 11) were studied to assess what was learned when signs and spoken words were presented simultaneously. The boys were taught to discriminate among several available objects when given commands consisting of simultaneously signed and spoken object labels. Each of the six children mastered all of the…
This article presents a language experience and self-assessment of proficiency questionnaire for hearing teachers who use Brazilian Sign Language and Portuguese in their teaching practice. By focusing on hearing teachers who work in Deaf education contexts, the questionnaire is presented as a tool that may complement the assessment of the linguistic skills of hearing teachers. The proposal takes into account factors important in bilingualism studies, such as knowing the participant's family, professional, and social background (KAUFMANN, 2010). This work uses the following questionnaires as models: the LEAP-Q (MARIAN; BLUMENFELD; KAUSHANSKAYA, 2007), the SLSCO - Sign Language Skills Classroom Observation (REEVES et al., 2000), and the Language Attitude Questionnaire (KAUFMANN, 2010), taking into consideration the different kinds of exposure to Brazilian Sign Language. The questionnaire is designed for bilingual bimodal hearing teachers who work in bilingual schools for the Deaf or in specialized educational departments that assist deaf students.
Hosemann, Jana; Herrmann, Annika; Steinbach, Markus; Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias
Models of language processing in the human brain often emphasize the prediction of upcoming input, for example in order to explain the rapidity of language understanding. However, the precise mechanisms of prediction are still poorly understood. Forward models, which draw upon the language production system to set up expectations during comprehension, provide a promising approach in this regard. Here, we present an event-related potential (ERP) study on German Sign Language (DGS) which tested the hypotheses of a forward model perspective on prediction. Sign languages involve relatively long transition phases between one sign and the next, which should be anticipated as part of a forward model-based prediction even though they are semantically empty. Native speakers of DGS watched videos of naturally signed DGS sentences which ended with either an expected or a (semantically) unexpected sign. Unexpected signs engendered a biphasic N400-late positivity pattern. Crucially, N400 onset preceded critical sign onset and was thus clearly elicited by properties of the transition phase. The comprehension system thereby clearly anticipated modality-specific information about the realization of the predicted semantic item. These results provide strong converging support for the application of forward models in language comprehension. © 2013 Elsevier Ltd. All rights reserved.
Kim, Kyung-Won; Lee, Mi-So; Soon, Bo-Ram; Ryu, Mun-Ho; Kim, Je-Nam
Communication between people with normal hearing and people with hearing impairment is difficult. Recently, a variety of studies on sign language recognition have benefited from developments in information technology. This study presents a sign language recognition system using a data glove composed of 3-axis accelerometers, magnetometers, and gyroscopes. The data obtained by the glove are transmitted to a host application (implemented as a Windows program on a PC). The data are then converted into angle data, and the angle information is displayed in the host application and verified by rendering three-dimensional models on the display. An experiment was performed with five subjects, three female and two male, in which a performance set comprising the numbers one to nine was repeated five times. The system achieves a 99.26% movement detection rate and approximately a 98% recognition rate for each finger's state. The proposed system is expected to become even more portable and useful when the algorithm is applied in smartphone applications, for use in situations such as emergencies.
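The conversion from raw sensor data to angle data mentioned above is commonly done, for static poses, by deriving pitch and roll from the gravity vector measured by the 3-axis accelerometer. The sketch below shows that standard computation; it is a generic illustration under the assumption of a stationary sensor, not the authors' actual fusion algorithm (which would also use the gyroscope and magnetometer).

```python
import math

def tilt_angles(ax, ay, az):
    # Pitch and roll in degrees from a static 3-axis accelerometer
    # reading (ax, ay, az) expressed in units of g.
    # Pitch: rotation about the y-axis; roll: rotation about the x-axis.
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

With the sensor flat and at rest (ax=0, ay=0, az=1 g) both angles are zero; tilting the x-axis straight down gives a pitch of 90 degrees.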
Zatloukal, Petr; Bernas, Martin; Dvořák, Lukáš
Sign language on television provides deaf viewers with information that they cannot get from the audio content. If the sign language interpreter is transmitted over an independent data stream, the aim is to ensure sufficient intelligibility and subjective image quality of the interpreter at minimum bit rate. This work deals with ROI-based video compression of a Czech Sign Language interpreter, implemented in the x264 open-source library. The results of this approach are verified in subjective tests with deaf viewers, which examine the intelligibility of sign language expressions containing minimal pairs at different levels of compression and various image resolutions, and evaluate the subjective quality of the final image for a good viewing experience.
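Conceptually, ROI-based encoding spends more bits on the region containing the interpreter by assigning it a finer quantizer (lower QP) than the background. The sketch below illustrates the idea with a per-macroblock QP map; the function name, the rectangle-based ROI, and the QP values are all illustrative assumptions, not the authors' implementation (x264 exposes this kind of control via per-macroblock quantizer offsets).

```python
import numpy as np

def roi_qp_map(height_mb, width_mb, roi, base_qp=30, roi_qp=22):
    # Per-macroblock quantizer map for one frame: lower QP (finer
    # quantization, more bits) inside the region of interest covering
    # the interpreter, higher QP (coarser) elsewhere.
    # roi = (top, left, bottom, right) in macroblock coordinates.
    qp = np.full((height_mb, width_mb), base_qp, dtype=np.int32)
    top, left, bottom, right = roi
    qp[top:bottom, left:right] = roi_qp
    return qp
```

Holding the total bit rate fixed, lowering QP inside the ROI trades background quality for interpreter quality, which is exactly the trade-off the subjective intelligibility tests evaluate.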
Carr, Edward G.; And Others
Four nonverbal autistic boys (ages 11-16) were successfully taught sign language action-object phrases following an intervention composed of prompting, fading, stimulus rotation, and differential reinforcement. The skill generalized to new situations. (Author/DB)
, particularly sign language users, in HIV-prevention programmes. Keywords: communication, disability, disability studies, hearing impairment, qualitative research, scoping study. African Journal of AIDS Research 2010, 9(3): 307–313 ...
The language-based analogical reasoning abilities of Deaf children are a controversial topic. Researchers lack agreement about whether Deaf children possess the ability to reason using language-based analogies, or whether this ability is limited by a lack of access to vocabulary, both written and signed. This dissertation examines factors that…
No formal Canadian curriculum presently exists for teaching American Sign Language (ASL) as a second language to parents of deaf and hard of hearing children. However, this group of ASL learners is in need of more comprehensive, research-based support, given the rapid expansion in Canada of universal neonatal hearing screening and the…
Johnson, William L
...; interpreting style, such as poor body posture, tensing muscles, signing too forcefully; job control, including the emotional and physical stress of the job, being overworked, and disliking the job...
Marschark, Marc; And Others
Examines the effects of age on hearing children's oral rather than written story production and whether there are age-related changes in the signed productions of deaf children comparable to those observed in hearing age-mates. (HOD)
Carr, Edward G.; And Others
Four nonverbal autistic children (10-15 years old) were taught expressive sign labels for common objects, using a training procedure that consisted of prompting, fading, and stimulus rotation. (Author/BD)
This complexity may result from the fact that "...translators build bridges not only between languages but between differences of two cultures.... Each language is a way of seeing and reflecting the delicate nuances of cultural perceptions, and it is the translator who not only reconstructs the equivalences of the words across ...
The article presents an introductory analysis of a research topic relevant to the Latvian deaf community: the development of a Latvian Sign Language recognition system. More specifically, the paper discusses data preprocessing methods and presents several approaches, with a focus on systems based on artificial neural networks, which are among the most successful solutions for the sign language recognition task.
Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł
In several countries, natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system, called SJM (system językowo-migowy), preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we compare, using fMRI, the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs compared to either SJM or PJM without CCs recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.
Lederer, Susan Hendler
Teaching young children with language delays to say or sign the word "more" has had strong support from the literature since the 1970s (Bloom & Lahey, 1978; Holland, 1975; Lahey & Bloom, 1977; Lederer, 2002). Semantically, teaching children the word/sign "more" is supported by research on early vocabulary development…
Colwell, Cynthia; Memmott, Jenny; Meeker-Miller, Anne
The purpose of this study was to determine the efficacy of using music and/or sign language to promote early communication in infants and toddlers (6-20 months) and to enhance parent-child interactions. Three groups used for this study were pairs of participants (care-giver(s) and child) assigned to each group: 1) Music Alone 2) Sign Language…
Tomasuolo, Elena; Valeri, Giovanni; Di Renzo, Alessio; Pasqualetti, Patrizio; Volterra, Virginia
The present study examined whether full access to sign language as a medium for instruction could influence performance in Theory of Mind (ToM) tasks. Three groups of Italian participants (age range: 6-14 years) participated in the study: Two groups of deaf signing children and one group of hearing-speaking children. The two groups of deaf…
Atkinson, J.; Marshall, J.; Woll, B.; Thacker, A.
Recent imaging (e.g., MacSweeney et al., 2002) and lesion (Hickok, Love-Geffen, & Klima, 2002) studies suggest that sign language comprehension depends primarily on left hemisphere structures. However, this may not be true of all aspects of comprehension. For example, there is evidence that the processing of topographic space in sign may be…
Parton, Becky Sue
In recent years, research has progressed steadily in regard to the use of computers to recognize and render sign language. This paper reviews significant projects in the field beginning with finger-spelling hands such as "Ralph" (robotics), CyberGloves (virtual reality sensors to capture isolated and continuous signs), camera-based…
Guimaraes, Cayley; Antunes, Diego R.; de F. Guilhermino Trindade, Daniela; da Silva, Rafaella A. Lopes; Garcia, Laura Sanchez
This work presents a computational model (in XML) of Brazilian Sign Language (Libras), based on its phonology. The model was used to create a sample of representative signs to guide the recording of a video corpus intended to support the development of tools for the genuine social inclusion of the deaf.
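As a rough illustration of what a phonology-based, machine-readable sign entry might look like, the snippet below builds one hypothetical entry from standard sign-language phonological parameters (handshape, location, movement, orientation). All element and attribute names are invented for illustration; they are not the authors' actual Libras schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical phonological entry for a single sign. The parameters
# mirror the classic sign phonology decomposition, not the paper's model.
sign = ET.Element("sign", gloss="HOUSE")
ET.SubElement(sign, "handshape", dominant="B", nondominant="B")
ET.SubElement(sign, "location", value="neutral-space")
ET.SubElement(sign, "movement", type="contact", direction="downward")
ET.SubElement(sign, "orientation", palm="inward")

xml_text = ET.tostring(sign, encoding="unicode")
```

An encoding like this makes it possible to sample signs systematically across parameter values, which is what the authors needed in order to choose a representative set of signs for video recording.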
Perniss, Pamela; Özyürek, Asli; Morgan, Gary
For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems. Copyright © 2015 Cognitive Science Society, Inc.
... a clearer picture on which approach, or combination of approaches, would lead to optimal results. What seems optimal for English and other Indo-European languages is not necessarily suitable for Kiswahili and other African languages. This paper presents current trends in text-based language technology for Kiswahili and ...
van den Bogaerde, B.; de Lange, R.; Nicodemus, B.; Metzger, M.
In healthcare, the accuracy of interpretation is the most critical component of safe and effective communication between providers and patients in medical settings characterized by language and cultural barriers. Although medical education should prepare healthcare providers for common issues they
LANGUAGE. Introduction. Debra Aarons, University of Stellenbosch, South Africa. Ruth Morgan, University of the Witwatersrand, South Africa. In this paper we examine two aspects of ... the use of classifier predicates in South African Sign Language (henceforth SASL). ... In this case, DOG is the topic (and the subject).
De Meulder, Maartje
This article describes and analyses the pathway to the British Sign Language (Scotland) Bill and the strategies used to reach it. Data collection has been done by means of interviews with key players, analysis of official documents, and participant observation. The article discusses the bill in relation to the Gaelic Language (Scotland) Act 2005…
Khokhlova A. Yu.
The article provides an overview of foreign psychological publications concerning sign language as a means of communication for deaf people. The article addresses the question of sign language's impact on cognitive development, on efficient and positive interaction with parents, and on academic achievement in deaf children.
Almeida, Diogo; Poeppel, David; Corina, David
The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied with a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.
Grosvald, Michael; Gutierrez, Eva; Hafer, Sarah; Corina, David
A fundamental advance in our understanding of human language would come from a detailed account of how non-linguistic and linguistic manual actions are differentiated in real time by language users. To explore this issue, we targeted the N400, an ERP component known to be sensitive to semantic context. Deaf signers saw 120 American Sign Language…
Fuks, Orit; Tobin, Yishai
The purpose of the present research is to examine which of two factors, (1) the iconic-semiotic factor or (2) the human-phonetic factor, is more relevant in explaining the appearance and distribution of the hand shape B-bent in Israeli Sign Language (ISL). The B-bent shape has received much attention in sign language research revolving around the question of its status as a phoneme. The arguments supporting the phonemic status of the B-bent hand shape have been based primarily on the semiotic opposition between the hand shape B and the hand shape B-bent. It has been claimed that in Italian Sign Language the hand shape B is perceptually distinct from the hand shape B-bent, i.e., in opposition to the general, neutral, unmarked meaning of the hand shape B, the iconic hand shape B-bent has a narrower, more specific and marked meaning: DELIMIT. The B-bent hand shape appears in spatial-temporal signs such as "a little before", "ahead", "postpone", or "behind". In these signs the iconic structure of the hand shape B-bent is utilized to mark borders in space and time. The arguments opposing the perceptual/phonemic distinction between these hand shapes are based on the human-phonetic factor, i.e., the need to reduce the effort of the wrist joints in specific phonetic environments. We performed a quantitative and qualitative content analysis of the distribution of the basic units of 560 lexical signs taken from a stratified random sample from the ISL dictionary. The results were analyzed in the framework of the sign-oriented linguistic theory of the Columbia School, including the theory of Phonology as Human Behavior. Our data revealed that the B-bent hand shape, like all the "building blocks" of ISL, is a morpho-phonemic unit. We found that there is not only a phonemic distinction between hand shape B and hand shape B-bent in ISL (based on minimal pairs), but also a perceptual distinction between them. The qualitative analysis shows that the
Mann, Wolfgang; Peña, Elizabeth D; Morgan, Gary
We describe a model for assessment of lexical-semantic organization skills in American Sign Language (ASL) within the framework of dynamic vocabulary assessment and discuss the applicability and validity of the use of mediated learning experiences (MLE) with deaf signing children. Two elementary students (ages 7;6 and 8;4) completed a set of four vocabulary tasks and received two 30-minute mediations in ASL. Each session consisted of several scripted activities focusing on the use of categorization. Both had experienced difficulties in providing categorically related responses in one of the vocabulary tasks used previously. Results showed that the two students exhibited notable differences with regard to their learning pace, information uptake, and effort required by the mediator. Furthermore, we observed signs of a shift in strategic behavior by the lower performing student during the second mediation. Results suggest that the use of dynamic assessment procedures in a vocabulary context was helpful in understanding children's strategies as related to learning potential. These results are discussed in terms of deaf children's cognitive modifiability, with implications for planning instruction and how MLE can be used with a population that uses ASL. The reader will (1) recognize the challenges in appropriate language assessment of deaf signing children; (2) recall the three areas explored to investigate whether a dynamic assessment approach is sensitive to differences in deaf signing children's language learning profiles; and (3) discuss how dynamic assessment procedures can make deaf signing children's individual language learning differences visible. Copyright © 2014 Elsevier Inc. All rights reserved.
Corina, David P; Lawyer, Laurel A; Cates, Deborah
Studies of deaf individuals who are users of signed languages have provided profound insight into the neural representation of human language. Case studies of deaf signers who have incurred left- and right-hemisphere damage have shown that left-hemisphere resources are a necessary component of sign language processing. These data suggest that, despite frank differences in the input and output modality of language, core left perisylvian regions universally serve linguistic function. Neuroimaging studies of deaf signers have generally provided support for this claim. However, more fine-tuned studies of linguistic processing in deaf signers are beginning to show evidence of important differences in the representation of signed and spoken languages. In this paper, we provide a critical review of this literature and present compelling evidence for language-specific cortical representations in deaf signers. These data lend support to the claim that the neural representation of language may show substantive cross-linguistic differences. We discuss the theoretical implications of these findings with respect to an emerging understanding of the neurobiology of language.
Solís, José F.; Toxqui, Carina; Padilla, Alfonso; Santiago, César
A framework for static sign language recognition using descriptors that represent 2D images as 1D data, together with artificial neural networks, is presented in this work. The 1D descriptors were computed by two methods: the first consists of a rotational correlation operator, and the second is based on contour analysis of the hand shape. One of the main problems in sign language recognition is segmentation; most papers rely on specially colored gloves or backgrounds for hand-shape analysis. In order to avoid the use of gloves or special clothing, a thermal imaging camera was used to capture images. Static signs for the digits 1 to 9 of American Sign Language were used; a multilayer perceptron reached 100% recognition with cross-validation.
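The contour-analysis route to a 1D descriptor can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the boundary extraction, angular ordering, resampling length, and toy silhouette are all assumptions made here for demonstration.

```python
import math

def contour_descriptor(mask, n_points=64):
    """Reduce a binary hand silhouette (list of 0/1 rows) to a 1D
    centroid-distance signature: boundary pixels are ordered by angle,
    and their distances to the centroid are resampled to a fixed
    length and scale-normalized."""
    h, w = len(mask), len(mask[0])
    # Boundary pixels: foreground pixels touching the background.
    pts = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and (y == 0 or x == 0 or y == h - 1 or x == w - 1
                               or not (mask[y - 1][x] and mask[y + 1][x]
                                       and mask[y][x - 1] and mask[y][x + 1])):
                pts.append((y, x))
    cy = sum(p[0] for p in pts) / len(pts)
    cx = sum(p[1] for p in pts) / len(pts)
    pts.sort(key=lambda p: math.atan2(p[0] - cy, p[1] - cx))
    dists = [math.hypot(p[0] - cy, p[1] - cx) for p in pts]
    # Resample to n_points; divide by the maximum for scale invariance.
    sig = [dists[int(i * (len(dists) - 1) / (n_points - 1))]
           for i in range(n_points)]
    m = max(sig)
    return [d / m for d in sig]

# Toy silhouette: a filled 16x16 square inside a 32x32 frame.
square = [[1 if 8 <= y < 24 and 8 <= x < 24 else 0 for x in range(32)]
          for y in range(32)]
sig = contour_descriptor(square)
```

A fixed-length signature like this can then be fed to a multilayer perceptron classifier, one output unit per digit sign.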
Kazzemi, Akram; Narafshan, Mehry Haddad
This paper investigates the attitudes of English language university teachers in Kerman (Iran) toward computer technology and seeks to identify the hidden factors that lead university teachers to avoid using technology in English language teaching. 30 university teachers participated in this study. A questionnaire and semi-structured interview were…
Languages are composed of a conventionalized system of parts which allow speakers and signers to compose an infinite number of form-meaning mappings through phonological and morphological combinations. This level of linguistic organization distinguishes language from other communicative acts such as gestures. In contrast to signs, gestures are made up of meaning units that are mostly holistic. Children exposed to signed and spoken languages from early in life develop grammatical structure following similar rates and patterns. This is interesting, because signed languages are perceived and articulated in very different ways to their spoken counterparts, with many signs displaying surface resemblances to gestures. The acquisition of forms and meanings in child signers and talkers might thus have been a different process. Yet in one sense both groups are faced with a similar problem: "how do I make a language with combinatorial structure?" In this paper I argue that first language development itself enables this to happen, by broadly similar mechanisms across modalities. Combinatorial structure is the outcome of phonological simplifications and productivity in using verb morphology by children in sign and speech.
Grove, Nicola; Woll, Bencie
Manual signing is one of the most widely used approaches to support the communication and language skills of children and adults who have intellectual or developmental disabilities, and problems with communication in spoken language. A recent series of papers reporting findings from this population raises critical issues for professionals in the assessment of multimodal language skills of key word signers. Approaches to assessment will differ depending on whether key word signing (KWS) is viewed as discrete from, or related to, natural sign languages. Two available assessments from these different perspectives are compared. Procedures appropriate to the assessment of sign language production are recommended as a valuable addition to the clinician's toolkit. Sign and speech need to be viewed as multimodal, complementary communicative endeavours, rather than as polarities. Whilst narrative has been shown to be a fruitful context for eliciting language samples, assessments for adult users should be designed to suit the strengths, needs and values of adult signers with intellectual disabilities, using materials that are compatible with their life course stage rather than those designed for young children. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hermes, Mary; King, Kendall A.
Although Indigenous language loss and revitalization are not new topics of academic work nor new areas of community activism (e.g., King, 2001; Grenoble & Whaley, 2006), increased attention has been paid in recent years to the ways that new technology can support efforts to teach and renew endangered languages such as Ojibwe. However, much of…
Cooper, Sheryl B; Reisman, Joel I; Watson, Douglas
Surveys of sign language programs in institutions of higher education in the United States, conducted in 1994 and 2004, are compared to reveal changes over time. Data are presented concerning the institutional environment of programs, program administrators, and instructors. Institutions examined in 2004 were on average 5 years older than those examined in 1994. More institutions accepted sign language for general education and foreign language requirements. Program administrators in 2004 were more likely to have primary duties as teachers rather than administrators, and to have greater understanding of the subject matter. Faculty in 2004 had more education and teaching experience. Full-time faculty showed increases in the proportion who were Deaf and the proportion who were in tenure-track positions. Program staff size increased. Overall, evidence indicates that sign language has become more accepted as an academic discipline and that programs are more entrenched at their institutions.
We describe here the characteristics of a very frequently occurring ASL indefinite focus particle, which has not previously been recognized as such. We show here that, despite its similarity to the question sign "WHAT", the particle is distinct from that sign in terms of articulation, function, and distribution. The particle serves to express "uncertainty" in various ways, which can be formalized semantically in terms of a domain-widening effect of the same sort as that proposed for English "any" by Kadmon & Landman (1993). Its function is to widen the domain of possibilities under consideration from the typical to include the non-typical as well, along a dimension appropriate in the context.
Kocab, Annemarie; Pyers, Jennie; Senghas, Ann
Even the simplest narratives combine multiple strands of information, integrating different characters and their actions by expressing multiple perspectives of events. We examined the emergence of referential shift devices, which indicate changes among these perspectives, in Nicaraguan Sign Language (NSL). Sign languages, like spoken languages, mark referential shift grammatically with a shift in deictic perspective. In addition, sign languages can mark the shift with a point or a movement of the body to a specified spatial location in the three-dimensional space in front of the signer, capitalizing on the spatial affordances of the manual modality. We asked whether the use of space to mark referential shift emerges early in a new sign language by comparing the first two age cohorts of deaf signers of NSL. Eight first-cohort signers and 10 second-cohort signers watched video vignettes and described them in NSL. Narratives were coded for lexical (use of words) and spatial (use of signing space) devices. Although the cohorts did not differ significantly in the number of perspectives represented, second-cohort signers used referential shift devices to explicitly mark a shift in perspective in more of their narratives. Furthermore, while there was no significant difference between cohorts in the use of non-spatial, lexical devices, there was a difference in spatial devices, with second-cohort signers using them in significantly more of their narratives. This suggests that spatial devices have only recently increased as systematic markers of referential shift. Spatial referential shift devices may have emerged more slowly because they depend on the establishment of fundamental spatial conventions in the language. While the modality of sign languages can ultimately engender the syntactic use of three-dimensional space, we propose that a language must first develop systematic spatial distinctions before harnessing space for grammatical functions.
Janke, Vikki; Marshall, Chloë R
An ongoing issue of interest in second language research concerns what transfers from a speaker's first language to their second. For learners of a sign language, gesture is a potential substrate for transfer. Our study provides a novel test of gestural production by eliciting silent gesture from novices in a controlled environment. We focus on spatial relationships, which in sign languages are represented in a very iconic way using the hands, and which one might therefore predict to be easy for adult learners to acquire. However, a previous study by Marshall and Morgan (2015) revealed that this was only partly the case: in a task that required them to express the relative locations of objects, hearing adult learners of British Sign Language (BSL) could represent objects' locations and orientations correctly, but had difficulty selecting the correct handshapes to represent the objects themselves. If hearing adults are indeed drawing upon their gestural resources when learning sign languages, then their difficulties may have stemmed from their having in manual gesture only a limited repertoire of handshapes to draw upon, or, alternatively, from having too broad a repertoire. If the first hypothesis is correct, the challenge for learners is to extend their handshape repertoire, but if the second is correct, the challenge is instead to narrow down to the handshapes appropriate for that particular sign language. 30 sign-naïve hearing adults were tested on Marshall and Morgan's task. All used some handshapes that were different from those used by native BSL signers and learners, and the set of handshapes used by the group as a whole was larger than that employed by native signers and learners. Our findings suggest that a key challenge when learning to express locative relations might be reducing from a very large set of gestural resources, rather than supplementing a restricted one, in order to converge on the conventionalized classifier system that forms part of the…
Clark, M. Diane; Hauser, Peter C.; Miller, Paul; Kargin, Tevhide; Rathmann, Christian; Guldenoglu, Birkan; Kubus, Okan; Spurgeon, Erin; Israel, Erica
Researchers have used various theories to explain deaf individuals' reading skills, including the dual route reading theory, the orthographic depth theory, and the early language access theory. This study tested 4 groups of children--hearing with dyslexia, hearing without dyslexia, deaf early signers, and deaf late signers (N = 857)--from 4…
Ciaramello, Francis M.; Hemami, Sheila S.
For members of the Deaf Community in the United States, current communication tools include TTY/TTD services, video relay services, and text-based communication. With the growth of cellular technology, mobile sign language conversations are becoming a possibility. Proper coding techniques must be employed to compress American Sign Language (ASL) video for low-rate transmission while maintaining the quality of the conversation. In order to evaluate these techniques, an appropriate quality metric is needed. This paper demonstrates that traditional video quality metrics, such as PSNR, fail to predict subjective intelligibility scores. By considering the unique structure of ASL video, an appropriate objective metric is developed. Face and hand segmentation is performed using skin-color detection techniques. The distortions in the face and hand regions are optimally weighted and pooled across all frames to create an objective intelligibility score for a distorted sequence. The objective intelligibility metric performs significantly better than PSNR in terms of correlation with subjective responses.
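The core idea of the metric above, weighting distortion in the face and hand regions more heavily than the background and pooling per-frame scores across the sequence, can be sketched roughly as follows. The weights, the PSNR-style dB mapping, and all names here are illustrative placeholders; the paper derives its optimal weights from subjective intelligibility data.

```python
import math

def weighted_intelligibility(ref_frames, dist_frames, roi_masks,
                             roi_weight=0.8, bg_weight=0.2):
    """Illustrative region-weighted quality score: squared error inside
    face/hand regions (roi_masks) counts more than background error,
    and per-frame scores are averaged (pooled) over the sequence.
    The weights are arbitrary placeholders, not the paper's values."""
    scores = []
    for ref, dist, roi in zip(ref_frames, dist_frames, roi_masks):
        se_roi = se_bg = n_roi = n_bg = 0
        for r_row, d_row, m_row in zip(ref, dist, roi):
            for r, d, m in zip(r_row, d_row, m_row):
                if m:
                    se_roi += (r - d) ** 2; n_roi += 1
                else:
                    se_bg += (r - d) ** 2; n_bg += 1
        mse = (roi_weight * se_roi / max(n_roi, 1)
               + bg_weight * se_bg / max(n_bg, 1))
        # Map weighted MSE to a PSNR-like dB score (8-bit pixels).
        scores.append(10 * math.log10(255 ** 2 / max(mse, 1e-9)))
    return sum(scores) / len(scores)

# Toy 4x4 frame: the left half is the ROI (face/hand region).
ref = [[[100] * 4 for _ in range(4)]]
roi = [[[1 if x < 2 else 0 for x in range(4)] for _ in range(4)]]
bg_noise = [[[100 if x < 2 else 110 for x in range(4)] for _ in range(4)]]
roi_noise = [[[110 if x < 2 else 100 for x in range(4)] for _ in range(4)]]
```

With equal-magnitude distortion, errors placed in the ROI lower the score more than errors placed in the background, which is exactly what a uniform metric like PSNR cannot capture.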
Hearing native signers often learn sign language as their first language and acquire features that are characteristic of sign languages but are not present in equivalent ways in English (e.g., grammatical facial expressions and the structured use of space for setting up tokens and surrogates). Previous research has indicated that bimodal…
Henner, Jon; Caldwell-Harris, Catherine L; Novogrodsky, Rama; Hoffmeister, Robert
Failing to acquire language in early childhood because of language deprivation is a rare and exceptional event, except in one population. Deaf children who grow up without access to indirect language through listening, speech-reading, or sign language experience language deprivation. Studies of Deaf adults have revealed that late acquisition of sign language is associated with lasting deficits. However, much remains unknown about language deprivation in Deaf children, allowing myths and misunderstandings regarding sign language to flourish. To fill this gap, we examined signing ability in a large naturalistic sample of Deaf children attending schools for the Deaf where American Sign Language (ASL) is used by peers and teachers. Ability in ASL was measured using a syntactic judgment test and a language-based analogical reasoning test, which are two sub-tests of the ASL Assessment Inventory. The influence of two age-related variables was examined: whether or not ASL was acquired from birth in the home from one or more Deaf parents, and the age of entry to the school for the Deaf. Note that for non-native signers, this latter variable is often the age of first systematic exposure to ASL. Both of these types of age-dependent language experiences influenced subsequent signing ability. Scores on the two tasks declined with increasing age of school entry. The influence of age of starting school was not linear. Test scores were generally lower for Deaf children who entered the school of assessment after the age of 12. The positive influence of signing from birth was found for students at all ages tested (7;6-18;5 years old) and for children of all age-of-entry groupings. Our results reflect a continuum of outcomes which show that experience with language is a continuous variable that is sensitive to maturational age.
This work, situated in the research line of Translation and Terminology, takes as its object of study the basic terms used in the political and educational discourses that permeate national conference events. Under Law 10436/2002 and Decree 5626/2005, the Deaf have the right to access information in Brazilian Sign Language - Libras. One way to ensure this right is the presence of translators and interpreters who, in order to act in areas with specialized subject matter, must command the specific terminology used in different contexts. The current study is based on Faulstich's (1995) methodology for the preparation of dictionaries and glossaries. The research follows the approach of Socioterminology, proceeding as follows: (i) recognition and identification of the target audience; (ii) delimitation of the surveyed area; (iii) collection and organization of data; (iv) organization of the glossary and a validity test. The result of the research is a proposal for organizing the entries of a bilingual terminology glossary for the conference domain, which can serve as a reference source for the training of translators and interpreters who work in national conference events.
In standard logical systems, quantifiers and variables are essential to express complex relations among objects. Natural language has expressions that have an analogous function: some noun phrases play the role of quantifiers (e.g. every man), and some pronouns play the role of variables (e.g. him, as in Every man likes people who admire him). Since the 1980s, there has been a vibrant debate in linguistics about the way in which pronouns come to depend on their antecedents. According to one ...
Are these students talking about their classmates? No, they are describing the Signing Avatar characters--3-D figures who appear on the EnViSci Network Web site and sign the resources and activities in American Sign Language (ASL) or Signed English (SE). During the 2003-04 school year, students in schools for the deaf and hard of hearing…
Beal-Alvarez, Jennifer S
This article presents receptive and expressive American Sign Language skills of 85 students, 6 through 22 years of age at a residential school for the deaf using the American Sign Language Receptive Skills Test and the Ozcaliskan Motion Stimuli. Results are presented by ages and indicate that students' receptive skills increased with age and were still developing across this age range. Students' expressive skills, specifically classifier production, increased with age but did not approach adult-like performance. On both measures, deaf children with deaf parents scored higher than their peers with hearing parents and many components of the measures significantly correlated. These results suggest that these two measures provide a well-rounded snapshot of individual students' American Sign Language skills. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: email@example.com.
Casey, Shannon; Emmorey, Karen; Larrabee, Heather
Given that the linguistic articulators for sign language are also used to produce co-speech gesture, we examined whether one year of academic instruction in American Sign Language (ASL) impacts the rate and nature of gestures produced when speaking English. A survey study revealed that 75% of ASL learners (N = 95), but only 14% of Romance language learners (N = 203), felt that they gestured more after one year of language instruction. A longitudinal study confirmed this perception. Twenty-one ASL learners and 20 Romance language learners (French, Italian, Spanish) were filmed re-telling a cartoon story before and after one academic year of language instruction. Only the ASL learners exhibited an increase in gesture rate, an increase in the production of iconic gestures, and an increase in the number of handshape types exploited in co-speech gesture. Five ASL students also produced at least one ASL sign when re-telling the cartoon. We suggest that learning ASL may (i) lower the neural threshold for co-speech gesture production, (ii) pose a unique challenge for language control, and (iii) have the potential to improve cognitive processes that are linked to gesture.
Yuizono, Takaya; Hara, Kousuke; Nakayama, Shigeru
A web-based distributed cooperative development environment for a sign-language animation system has been developed. We have extended the previous animation system, which was constructed as a three-tiered system consisting of a sign-language animation interface layer, a sign-language data processing layer, and a sign-language animation database. Two components, a web client using a VRML plug-in and a web servlet, have been added to the previous system. The system supports a humanoid-model avatar for interoperability and can use the stored sign-language animation data shared in the database. Evaluation of the system showed that the web client's inverse kinematics function improves sign-language animation authoring.
A. A. Karpov
We present a conceptual model, architecture and software of a multimodal system for audio-visual speech and sign language synthesis from input text. The main components of the developed multimodal synthesis system (signing avatar) are: an automatic text processor for input text analysis; a simulated 3D model of a human head; a computer text-to-speech synthesizer; a system for audio-visual speech synthesis; a simulated 3D model of human hands and upper body; and a multimodal user interface integrating all the components for generation of audio, visual and signed speech. The proposed system performs automatic translation of input textual information into speech (audio information) and gestures (video information), fuses the information, and outputs it in the form of multimedia information. A user can input any grammatically correct text in Russian or Czech to the system; it is analyzed by the text processor to detect sentences, words and characters. Then this textual information is converted into symbols of the sign language notation. We apply the international Hamburg Notation System (HamNoSys), which describes the main differential features of each manual sign: hand shape, hand orientation, place and type of movement. On this basis the 3D signing avatar displays the elements of the sign language. The virtual 3D model of the human head and upper body has been created using the VRML virtual reality modeling language, and it is controlled by software based on the OpenGL graphics library. The developed multimodal synthesis system is universal, since it is oriented toward both regular users and disabled people (in particular, the hard of hearing and visually impaired), and it serves for multimedia output (via audio and visual modalities) of input textual information.
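The text-to-notation stage of such a pipeline can be caricatured as below. The lexicon entries and feature labels are invented placeholders, not real HamNoSys symbols, and the real system involves a full text processor and 3D avatar rendering; this sketch only shows the shape of the mapping from tokens to the differential sign features HamNoSys encodes.

```python
# Toy text-to-sign pipeline: text -> tokens -> notation records listing
# the differential features HamNoSys describes (handshape, orientation,
# location, movement). Entries below are invented placeholders.
LEXICON = {
    "hello": {"handshape": "flat", "orientation": "palm-out",
              "location": "forehead", "movement": "arc-out"},
    "thanks": {"handshape": "flat", "orientation": "palm-up",
               "location": "chin", "movement": "forward"},
}

def text_to_sign_features(text):
    """Look up each token's sign features; unknown words are skipped
    here (a real system would fall back to fingerspelling)."""
    feats = []
    for token in text.lower().split():
        entry = LEXICON.get(token)
        if entry:
            feats.append((token, entry))
    return feats

feats = text_to_sign_features("Hello thanks please")
```

Each feature record would then drive the avatar's hand and body animation for that sign.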
Beal-Alvarez, Jennifer S; Scheetz, Nanci A
In deaf education, the sign language skills of teacher and interpreter candidates are infrequently assessed; when they are, formal measures are commonly used upon preparation program completion, as opposed to informal measures related to instructional tasks. Using an informal picture storybook task, the authors investigated the receptive and expressive narrative sign language skills of 10 teacher and interpreter candidates in a university preparation program. The candidates evaluated signed renditions of two signing children, as well as their own expressive renditions, using the Signed Reading Fluency Rubric (Easterbrooks & Huston, 2008) at the completion of their fifth sign language course. Candidates' evaluations were compared overall and across 12 sign language indicators to ratings of two university program professors. Some variation existed across ratings for individual indicators, but generally the candidates were aware of and could accurately rate their own abilities and those of two signing children.
Liu, Lanfang; Yan, Xin; Liu, Jin; Xia, Mingrui; Lu, Chunming; Emmorey, Karen; Chu, Mingyuan; Ding, Guosheng
Signed languages are natural human languages that use the visual-motor modality. Previous neuroimaging studies based on univariate activation analysis show that a widely overlapping cortical network is recruited regardless of whether the sign language is comprehended (for signers) or not (for non-signers). Here we move beyond previous studies by examining whether the functional connectivity profiles and the underlying organizational structure of the overlapping neural network differ between signers and non-signers when watching sign language. Using graph theoretical analysis (GTA) and fMRI, we compared the large-scale functional network organization of hearing signers with that of non-signers during the observation of sentences in Chinese Sign Language. We found that signed sentences elicited highly similar cortical activations in the two groups of participants, with slightly larger responses within the left frontal and left temporal gyrus in signers than in non-signers. Crucially, further GTA revealed substantial group differences in the topologies of this activation network. Globally, the network engaged by signers showed higher local efficiency (t(24) = 2.379, p = 0.026), small-worldness (t(24) = 2.604, p = 0.016) and modularity (t(24) = 3.513, p = 0.002), and exhibited different modular structures, compared to the network engaged by non-signers. Locally, the left ventral pars opercularis served as a network hub in the signer group but not in the non-signer group. These findings suggest that, despite overlap in cortical activation, the neural substrates underlying sign language comprehension are distinguishable at the network level from those for the processing of gestural action.
Hall, Wyatte C
A long-standing belief is that sign language interferes with spoken language development in deaf children, despite a chronic lack of evidence supporting this belief. This deserves discussion, as poor life outcomes continue to be seen in the deaf population. This commentary synthesizes research outcomes with signing and non-signing children and highlights fully accessible language as a protective factor for healthy development. Brain changes associated with language deprivation may be misrepresented as sign language interfering with the spoken language outcomes of cochlear implants. This may lead to professionals and organizations advocating for preventing sign language exposure before implantation and spreading misinformation. The existence of a time-sensitive language-acquisition window means a strong possibility of permanent brain changes when spoken language is not fully accessible to the deaf child and sign language exposure is delayed, as is often standard practice. There is no empirical evidence for the harm of sign language exposure, but there is some evidence for its benefits, and there is growing evidence that lack of language access has negative implications, including cognitive delays, mental health difficulties, lower quality of life, higher trauma, and limited health literacy. Claims that cochlear implant- and spoken language-only approaches are more effective than sign language-inclusive approaches are not empirically supported. Cochlear implants are an unreliable standalone first-language intervention for deaf children. Priorities of deaf child development should focus on healthy growth of all developmental domains through a fully accessible first-language foundation such as sign language, rather than on auditory deprivation and speech skills.
Luiz Daniel Rodrigues Dinarte
This article aims, drawing on sign language translation research and engaging contemporary theories built around the concept of "deconstruction" (DERRIDA, 2004; DERRIDA & ROUDINESCO, 2004; ARROJO, 1993), to reflect on aspects of the definition of the role and duties of translators and interpreters. We conceive of deconstruction not as a method to be applied to linguistic and social phenomena, but as a set of political strategies coming from a speech community that translates texts and thus, in taking on the translational task, performs an act of reading that inserts sign language into the academic linguistic multiplicity.
Vargas, Lorena P; Barba, Leiner; Torres, C O; Mattos, L
This work presents an image pattern recognition system using a neural network for the identification of sign language for deaf people. The system stores several images showing specific symbols of the language, which are used to train a multilayer neural network with the backpropagation algorithm. The images are first preprocessed to adapt them and improve the network's discrimination performance; this stage includes filtering, size reduction, noise elimination, and edge detection. The system is evaluated on signs that do not include movement in their representation.
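The training loop behind such a system can be sketched with a one-hidden-layer network trained by backpropagation. This is a minimal stand-in, not the authors' implementation: the toy 16-pixel "images", the labels, the layer sizes and the learning rate are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for preprocessed sign images: 16-pixel binary patterns.
X = rng.integers(0, 2, size=(20, 16)).astype(float)
y = (X.sum(axis=1) > 8).astype(float).reshape(-1, 1)  # invented labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units, randomly initialized.
W1 = rng.normal(0.0, 0.5, (16, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1));  b2 = np.zeros(1)

lr, n = 2.0, len(X)
for _ in range(3000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (sigmoid + cross-entropy gradient simplification)
    d_out = (out - y) / n
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

train_acc = float(((out > 0.5) == y).mean())
```

A real system would feed in the filtered, noise-reduced, edge-detected images described in the abstract rather than random bit patterns; only the multilayer-backpropagation structure is the point of the sketch.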
Grosvald, Michael; Gutierrez, Eva; Hafer, Sarah; Corina, David
A fundamental advance in our understanding of human language would come from a detailed account of how non-linguistic and linguistic manual actions are differentiated in real time by language users. To explore this issue, we targeted the N400, an ERP component known to be sensitive to semantic context. Deaf signers saw 120 American Sign Language sentences, each consisting of a "frame" (a sentence without the last word; e.g. BOY SLEEP IN HIS) followed by a "last item" belonging to one of four categories: a high-close-probability sign (a "semantically reasonable" completion to the sentence; e.g. BED), a low-close-probability sign (a real sign that is nonetheless a "semantically odd" completion to the sentence; e.g. LEMON), a pseudo-sign (phonologically legal but non-lexical form), or a non-linguistic grooming gesture (e.g. the performer scratching her face). We found significant N400-like responses in the incongruent and pseudo-sign contexts, while the gestures elicited a large positivity.
Cannon, Joanna E.; Fredrick, Laura D.; Easterbrooks, Susan R.
Reading to children improves vocabulary acquisition through incidental exposure, and it is a best practice for parents and teachers of children who can hear. Children who are deaf or hard of hearing are at risk for not learning vocabulary as such. This article describes a procedure for using books read on DVD in American Sign Language with…
We are living in a time with unprecedented opportunities to communicate with others in authentic and compelling linguistically and culturally contextualized domains. In fact, language teachers today are faced with so many fascinating options for using technology to enhance language learning that it can be overwhelming. Even for those who are…
Hilmarsson-Dunn, Amanda M.; Kristinsson, Ari P.
Iceland's language policies are purist and protectionist, aiming to maintain the grammatical system and basic vocabulary of Icelandic as it has been for a thousand years and to keep the language free of foreign (English) borrowings. In order to use Icelandic in the domain of information technology, there has been a major investment in language…
Øhre, Beate; Saltnes, Hege; von Tetzchner, Stephen; Falkum, Erik
There is a need for psychiatric assessment instruments that enable reliable diagnoses in persons with hearing loss who have sign language as their primary language. The objective of this study was to assess the validity of the Norwegian Sign Language (NSL) version of the Mini International Neuropsychiatric Interview (MINI). The MINI was translated into NSL. Forty-one signing patients consecutively referred to two specialised psychiatric units were assessed with a diagnostic interview by clinical experts and with the MINI. Inter-rater reliability was assessed with Cohen's kappa and "observed agreement". There was 65% agreement between MINI diagnoses and clinical expert diagnoses. Kappa values indicated fair to moderate agreement, and observed agreement was above 76% for all diagnoses. The MINI diagnosed more co-morbid conditions than did the clinical expert interview (mean diagnoses: 1.9 versus 1.2). Kappa values indicated moderate to substantial agreement, and "observed agreement" was above 88%. The NSL version performs similarly to other MINI versions and demonstrates adequate reliability and validity as a diagnostic instrument for assessing mental disorders in persons who have sign language as their primary and preferred language.
The paper examines one of the possible approaches to exploring the conceptual space represented by language signs and texts. The notion of the cognitheme as a unit of knowledge in the form of a proposition, functional for modelling the conceptual space, is defined, and some principles of cognitheme analysis are discussed. The cognitheme is considered as a unit for modelling mental entities reflected in the language, such as the concept or the conceptual space connected with a text, and at the same time as a unit of conceptualization significant in its own right, revealing elements of knowledge important for a language community and thus fixed in language signs and texts. A feasible classification of cognithemes is described, and examples illustrating this classification are given.
Journal of Language, Technology & Entrepreneurship in Africa, Vol 2, No 2 (2010).
The journal covers a range of topics including language, technology, entrepreneurship, finance and communication. It is meant to promote dialogue across disciplines by emphasizing the interconnectedness of knowledge. It is ideal for scholars eager to venture into ...
Solís-V., J.-Francisco; Toxqui-Quitl, Carina; Martínez-Martínez, David; H.-G., Margarita
This work presents a framework designed for Mexican Sign Language (MSL) recognition. A data set of 24 static MSL signs was recorded, with 5 different versions of each sign, captured with a digital camera under incoherent lighting conditions. Digital image processing was used to segment the hand gestures; a uniform background was chosen so that gloved hands or special markers were not needed. Feature extraction was performed by calculating normalized geometric moments of the gray-scaled signs, and an artificial neural network then performed the recognition, evaluated with 10-fold cross-validation in Weka. The best result achieved a 95.83% recognition rate.
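The normalized geometric moments used here as features can be computed directly from image intensities. A minimal sketch follows, using a synthetic blob in place of a segmented hand image; the moment orders shown are illustrative, not necessarily the ones used in the paper.

```python
import numpy as np

def normalized_moment(img, p, q):
    """Scale-normalized central moment eta_pq of a 2D intensity array."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()                      # total mass
    cx = (xs * img).sum() / m00          # centroid x
    cy = (ys * img).sum() / m00          # centroid y
    mu = ((xs - cx) ** p * (ys - cy) ** q * img).sum()  # central moment
    return mu / m00 ** ((p + q) / 2 + 1)  # normalization for scale invariance

# A tiny synthetic blob standing in for a segmented gray-scale hand image.
img = np.zeros((8, 8))
img[2:6, 3:6] = 1.0

orders = [(2, 0), (0, 2), (1, 1), (3, 0)]
features = [normalized_moment(img, p, q) for p, q in orders]
```

Centering on the centroid makes the features translation-invariant and the `m00` power makes them scale-invariant, which is why such moments are a common choice for hand-shape descriptors.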
Silvia Teresinha Frizzarini
There is little research offering deeper reflection on the study of algebra with deaf students. In order to validate and disseminate educational activities in that context, this article aims at highlighting the prior knowledge, regarding the algebraic language used in high school, of deaf students fluent in Brazilian Sign Language. The theoretical framework used was Duval's theory, with analysis of the changes, by treatment and conversion, of different registers of semiotic representation, in particular inequalities. The methodology was a diagnostic evaluation administered to deaf students, all fluent in Brazilian Sign Language, at a special school in the north of Paraná State. We emphasize the need to work in both directions of conversion, in different languages, especially when the starting register is the graph. The conclusion reached was that algebraic representation should not be separated from other registers, because sign language must perform not only the communication function but also the functions of objectification and treatment, which are fundamental in cognitive development.
Jovanov, Jane; Kirova, Snezana
Do modern technologies allow us to advance the teaching process in studying foreign languages? We can already say with assurance that these technologies allow a twice-as-fast pace of teaching thematic units. The application of modern software solutions in our teaching guarantees this, given compatible hardware support for those same software packages. Modeling and imitating original situations additionally enable us to recapture the authenticity of a language environment, cul...
Fenlon, Jordan; Schembri, Adam; Rentelis, Ramas; Cormier, Kearsy
This paper investigates phonological variation in British Sign Language (BSL) signs produced with a ‘1’ hand configuration in citation form. Multivariate analyses of 2084 tokens reveal that handshape variation in these signs is constrained by linguistic factors (e.g., the preceding and following phonological environment, grammatical category, indexicality, lexical frequency). The only significant social factor was region. For the subset of signs where orientation was also investigated, only grammatical function was important (the surrounding phonological environment and social factors were not significant). The implications for an understanding of pointing signs in signed languages are discussed. PMID:23805018
Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D
The present study tracked activation pattern differences in response to sign language processing by late hearing second language learners of American Sign Language. Learners were scanned before the start of their language courses. They were scanned again after their first semester of instruction and their second, for a total of 10 months of instruction. The study aimed to characterize the shift from modality-specific to modality-general processing throughout the acquisition of sign language. Results indicated that before the acquisition of sign language, neural substrates related to modality-specific processing were present. After approximately 45 h of instruction, the learners transitioned into processing signs on a phonological basis (e.g., supramarginal gyrus, putamen). After one more semester of input, learners transitioned once more to a lexico-semantic processing stage (e.g., left inferior frontal gyrus) at which language control mechanisms (e.g., left caudate, cingulate gyrus) were activated. During these transitional steps right-hemispheric recruitment was observed, with increasing left-lateralization, similar to that seen in native signers and L2 learners of spoken language; however, specialization for sign language processing with activation in the inferior parietal lobule (i.e., angular gyrus) was observed even for late learners. As such, the present study is the first to track L2 acquisition of sign language learners in order to characterize modality-independent and modality-specific mechanisms for bilingual language processing.
This article looks into the concept of learner autonomy in the context of the use of new technologies in foreign language learning and teaching; outlines the possibilities new technologies offer for language learning and language use; discusses the concepts of learner autonomy, mature language learner and productive language learning; highlights the challenges of developing learner autonomy in language education.
Tiago Hermano Breunig
When inquiring into the sign "?", Flusser postulates that meaning is "one of the main problems of present-day thought." From this sign, Flusser differentiates meaning from sense, which he defines as "what it means". Thus the problem of meaning converges with the problem of thought itself, since, according to Flusser, all thought comes from a tautology, i.e., that which "means nothing". If the understanding of meaning implies the musical aspects of language, as with the sign "?", then according to Flusser music falls "into the same abyss of tautology" as it overcomes the limit of language. Flusser believes that the discussion of the limits of language contributes to the problem of the meaning of music, and confesses that among all existential signs the "?" is the one that best articulates the situation in which we find ourselves. It is in this sense, in this "Stimmung", as Flusser says of the meaning of the sign "?", that this paper aims to reflect, from the problem of meaning, on the relationship between music and the poetry contemporary to Flusser.
Garcia-Bautista, G.; Trujillo-Romero, F.; Diaz-Gonzalez, G.
Sign Language (SL) is the basic alternative communication method for deaf people. However, most hearing people have trouble understanding SL, making communication with deaf people almost impossible and excluding them from daily activities. In this work we present an automatic basic real-time sign language translator capable of recognizing a basic list of Mexican Sign Language (MSL) signs (10 meaningful words, the letters A-Z, and the numbers 1-10) and translating them into speech and text. The signs were collected from a group of 35 MSL signers performing in front of a Microsoft Kinect™ sensor. The hand gesture recognition system uses the RGB-D camera to build and store point clouds, color, and skeleton-tracking information. We propose a method to obtain representative hand-trajectory pattern information, using Euclidean segmentation to obtain the hand shape and the hierarchical centroid as the feature-extraction method for images of numbers and letters. A pattern recognition method based on a backpropagation artificial neural network (ANN) interprets the hand gestures, and K-fold cross-validation is used for the training and testing stages. Our results achieve an accuracy of 95.71% on words, 98.57% on numbers, and 79.71% on letters. In addition, an interactive user interface was designed to present the results in voice and text format.
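The K-fold evaluation scheme this abstract mentions can be sketched independently of the Kinect pipeline. In the sketch below, a nearest-centroid classifier stands in for the backpropagation ANN and the "gesture" features are synthetic; only the cross-validation logic mirrors the described setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic, well-separated "gesture" classes in a 5-D feature space.
X = np.vstack([rng.normal(0.0, 1.0, (30, 5)), rng.normal(4.0, 1.0, (30, 5))])
y = np.array([0] * 30 + [1] * 30)

def nearest_centroid_accuracy(Xtr, ytr, Xte, yte):
    """Train = compute class centroids; test = assign to nearest centroid."""
    centroids = np.stack([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
    d = np.linalg.norm(Xte[:, None, :] - centroids[None, :, :], axis=2)
    return float((d.argmin(axis=1) == yte).mean())

def k_fold_accuracy(X, y, k=10):
    """Average held-out accuracy over k disjoint folds."""
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        te = folds[i]
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        scores.append(nearest_centroid_accuracy(X[tr], y[tr], X[te], y[te]))
    return float(np.mean(scores))

acc = k_fold_accuracy(X, y, k=10)
```

Because every sample is held out exactly once, the averaged score estimates generalization rather than training fit, which is why the paper reports its word/number/letter accuracies this way.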
Wijkamp, I.; Gerritsen, B.; Bonder, F.; Haisma, H.H.; van der Schans, C.P
In the Netherlands, many educators and care providers working at special schools for children with severe speech and language impairments (SSLI) use sign-supported Dutch (SSD) to facilitate communication. Anecdotal experiences suggest positive results, but empirical evidence is lacking. In this…
This article draws on data from the 2006 Australian census to explore the education and employment outcomes of sign languages users living in Victoria, Australia, and to compare them with outcomes reported in the general population. Census data have the advantage of sampling the entire population on the one night, avoiding problems of population…
People with hearing impairment may have difficulty accessing information about HIV/AIDS, especially those who use sign language. Because adolescence is characterised by sexual maturation, it is important to gauge levels of HIV/AIDS awareness and knowledge in this age group. For this scoping study, we interviewed ...
MacKinnon, Gregory; Soutar, Iris
The Jamaican Association for the Deaf, in their responsibilities to oversee education for individuals who are deaf in Jamaica, has demonstrated an urgent need for a dictionary that assists students, educators, and parents with the practical use of "Jamaican Sign Language." While paper versions of a preliminary resource have been explored…
Golos, Debbie B.
Over time children's educational television has successfully modified programming to incorporate research-based strategies to facilitate learning and engagement during viewing. However, research has been limited on whether these same strategies would work with preschool deaf children viewing videos in American Sign Language. In a descriptive…
Fang, Yuxing; Chen, Quanjing; Lingnau, Angelika; Han, Zaizhu; Bi, Yanchao
The observation of other people's actions recruits a network of areas including the inferior frontal gyrus (IFG), the inferior parietal lobule (IPL), and posterior middle temporal gyrus (pMTG). These regions have been shown to be activated through both visual and auditory inputs. Intriguingly, previous studies found no engagement of IFG and IPL for deaf participants during non-linguistic action observation, leading to the proposal that auditory experience or sign language usage might shape the functionality of these areas. To understand which variables induce plastic changes in areas recruited during the processing of other people's actions, we examined the effects of tasks (action understanding and passive viewing) and effectors (arm actions vs. leg actions), as well as sign language experience in a group of 12 congenitally deaf signers and 13 hearing participants. In Experiment 1, we found a stronger activation during an action recognition task in comparison to a low-level visual control task in IFG, IPL and pMTG in both deaf signers and hearing individuals, but no effect of auditory or sign language experience. In Experiment 2, we replicated the results of the first experiment using a passive viewing task. Together, our results provide robust evidence demonstrating that the response obtained in IFG, IPL, and pMTG during action recognition and passive viewing is not affected by auditory or sign language experience, adding further support for the supra-modal nature of these regions.
Mpuang, Kerileng D.; Mukhopadhyay, Sourav; Malatsi, Nelly
This descriptive phenomenological study investigates teachers' experiences of using sign language for learners who are deaf in the primary schools in Botswana. Eight in-service teachers who have had more than ten years of teaching deaf or hard of hearing (DHH) learners were purposively selected for this study. Data were collected using multiple…
Fajardo, Inmaculada; Parra, Elena; Canas, Jose J.
The efficacy of video-based sign language (SL) navigation aids to improve Web search for Deaf Signers was tested by two experiments. Experiment 1 compared 2 navigation aids based on text hyperlinks linked to embedded SL videos, which differed in the spatial contiguity between the text hyperlink and SL video (contiguous vs. distant). Deaf Signers'…
Beal-Alvarez, Jennifer S.
This article presents results of a longitudinal study of receptive American Sign Language (ASL) skills for a large portion of the student body at a residential school for the deaf across four consecutive years. Scores were analyzed by age, gender, parental hearing status, years attending the residential school, and presence of a disability (i.e.,…
Hardin, Belinda J.; Blanchard, Sheresa Boone; Kemmery, Megan A.; Appenzeller, Margo; Parker, Samuel D.
Families with children who are deaf face many important decisions, especially the mode(s) of communication their children will use. The purpose of this focus group study was to better understand the experiences and recommendations of families who chose American Sign Language (ASL) as their primary mode of communication and to identify strategies…
Bochner, Joseph H.; Christie, Karen; Hauser, Peter C.; Searls, J. Matt
Learners' ability to recognize linguistic contrasts in American Sign Language (ASL) was investigated using a paired-comparison discrimination task. Minimal pairs containing contrasts in five linguistic categories (i.e., the formational parameters of movement, handshape, orientation, and location in ASL phonology, and a category comprised of…
Novogrodsky, Rama; Henner, Jon; Caldwell-Harris, Catherine; Hoffmeister, Robert
Factors influencing native and nonnative signers' syntactic judgment ability in American Sign Language (ASL) were explored for 421 deaf students aged 7;6-18;5. Predictors for syntactic knowledge were chronological age, age of entering a school for the deaf, gender, and additional learning disabilities. Mixed-effects linear modeling analysis…
Brentari, Diane; Coppola, Marie; Jung, Ashley; Goldin-Meadow, Susan
Handshape works differently in nouns versus a class of verbs in American Sign Language (ASL) and thus can serve as a cue to distinguish between these two word classes. Handshapes representing characteristics of the object itself ("object" handshapes) and handshapes representing how the object is handled ("handling" handshapes)…
Lapinsky, Jessica; Colonna, Caitlin; Sexton, Patricia; Richard, Mariah
The study examined the effectiveness of a workshop on Deaf culture and basic medical American Sign Language for increasing osteopathic student physicians' confidence and knowledge when interacting with ASL-using patients. Students completed a pretest in which they provided basic demographic information, rated their confidence levels, took a video…
This article briefly traces the historical conceptualization of linguistic and cultural immersion through technological applications, from the early days of locally networked computers to the cutting-edge technologies known as virtual reality and augmented reality. Next, the article explores the challenges of immersive technologies for the field…
Wright, Courtney A; Kaiser, Ann P; Reikowsky, Dawn I; Roberts, Megan Y
In this study, the authors evaluated the effects of Enhanced Milieu Teaching (EMT; Hancock & Kaiser, 2006) blended with Joint Attention, Symbolic Play, and Emotional Regulation (JASPER; Kasari, Freeman, & Paparella, 2006) to teach spoken words and manual signs (Words + Signs) to young children with Down syndrome (DS). Four toddlers (ages 23-29 months) with DS were enrolled in a study with a multiple-baseline, across-participants design. Following baseline, 20 play-based treatment sessions (20-30 min each) occurred twice weekly. Spoken words and manual signs were modeled and prompted by a therapist who used EMT/JASPER teaching strategies. The authors assessed generalization to interactions with parents at home. There was a functional relation between the therapist's implementation of EMT/JASPER Words + Signs and all 4 children's use of signs during the intervention. Gradual increases in children's use of spoken words occurred, but there was not a clear functional relation. All children generalized their use of signs to their parents at home. The infusion of manual signs with verbal models within a framework of play, joint attention, and naturalistic language teaching appears to facilitate development of expressive sign and word communication in young children with DS.
Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.
Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…
Woodward, James; Hoa, Nguyen Thi
This paper discusses how the Nippon Foundation-funded project "Opening University Education to Deaf People in Viet Nam through Sign Language Analysis, Teaching, and Interpretation," also known as the Dong Nai Deaf Education Project, has been implemented through sign language studies from 2000 through 2012. This project has provided deaf…
Linguistic ideologies that are left unquestioned and unexplored, especially as reflected and produced in marginalized language communities, can contribute to inequality made real in decisions about languages and the people who use them. One of the primary bodies of knowledge guiding international language policy is the International Organization…
Oroo’ is a language of the nomadic Penans in the rainforests of Borneo and the only means of asynchronous communication between nomadic groups during forest journeys. Like many other indigenous languages, Oroo’ faces imminent extinction. In this paper, we present the research process and reflections of a multidisciplinary community-based research project on digitizing and preserving the Oroo’ sign language. As a methodology for project activities, we employ Participatory Action Research in Software Development Methodology Augmentation (PRISMA). Preliminary results show a general interest in the digital contents and a positive impact of the project activities. We present the scenario of a research project retooled to fit the needs of communities, informing language revitalization efforts and assisting the evolution of community-based research design.
Human language technologies (HLT) can play a vital role in bridging the digital divide, and thus the HLT field has been recognised as a priority area by the South African government. The authors present their work on conducting a technology audit...
Provides an overview of the theoretical arguments and problems encountered in the implementation of information technology in Chinese language teaching. States there is a belief that teaching and learning can be enhanced with the introduction of information technology, explaining that it may increase students' motivation to learn. (CMK)
Rempel, David; Camilleri, Matt J; Lee, David L
The design and selection of 3D modeled hand gestures for human-computer interaction should follow principles of natural language combined with the need to optimize gesture contrast and recognition. The selection should also consider the discomfort and fatigue associated with distinct hand postures and motions, especially for common commands. Sign language interpreters have extensive and unique experience forming hand gestures and many suffer from hand pain while gesturing. Professional sign language interpreters (N=24) rated discomfort for hand gestures associated with 47 characters and words and 33 hand postures. Clear associations of discomfort with hand postures were identified. In a nominal logistic regression model, high discomfort was associated with gestures requiring a flexed wrist, discordant adjacent fingers, or extended fingers. These and other findings should be considered in the design of hand gestures to optimize the relationship between human cognitive and physical processes and computer gesture recognition systems for human-computer input.
Ramesh Mahadev Kagalkar
Sign language recognition has emerged as one of the important areas of research in computer vision. The problem faced by researchers is that instances of signs vary in both motion and appearance. In this paper, a novel approach for recognizing the alphabets of Kannada sign language is proposed, in which continuous video sequences of the signs are considered. The system consists of three stages: preprocessing, feature extraction, and classification. The preprocessing stage includes skin filtering and histogram matching. Eigenvalues and eigenvectors are used in the feature extraction stage, and finally an eigenvalue-weighted Euclidean distance is employed to recognize the sign. The system deals with bare hands, allowing the user to interact with it in a natural manner. Different alphabets were considered in the video sequences, and a success rate of 95.25% was achieved.
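The final stage described above (eigenvalue features compared by a weighted Euclidean distance) can be sketched as follows. This is a hypothetical simplification: the feature here is just the eigenvalues of the coordinate covariance of a binary hand mask, whereas the paper extracts features from preprocessed video frames, and the weight vector and templates are invented for illustration.

```python
import numpy as np

def eigen_features(mask):
    # Eigenvalues of the covariance of the pixel coordinates of a
    # (skin-filtered) binary hand mask, sorted in descending order.
    ys, xs = np.nonzero(mask > 0)
    pts = np.stack([xs, ys], axis=1).astype(float)
    vals = np.linalg.eigvalsh(np.cov(pts, rowvar=False))
    return np.sort(vals)[::-1]

def weighted_euclidean(f, g, w):
    # Eigenvalue-weighted Euclidean distance between feature vectors.
    return float(np.sqrt(np.sum(w * (f - g) ** 2)))

def classify(feat, templates, w):
    # Nearest-template classification over a dictionary {label: features}.
    return min(templates, key=lambda lbl: weighted_euclidean(feat, templates[lbl], w))
```

A sign would then be recognized by extracting `eigen_features` from an input frame and calling `classify` against per-alphabet template features.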
Nelson, Lauri H.; White, Karl R.; Grewe, Jennifer
The development of proficient communication skills in infants and toddlers is an important component to child development. A popular trend gaining national media attention is teaching sign language to babies with normal hearing whose parents also have normal hearing. Thirty-three websites were identified that advocate sign language for hearing…
Chapelle, Carol A.; Voss, Erik
This review article provides an analysis of the research from the last two decades on the theme of technology and second language assessment. Based on an examination of the assessment scholarship published in "Language Learning & Technology" since its launch in 1997, we analyzed the review articles, research articles, book reviews,…
Sign language recognition (SLR) can provide a helpful tool for communication between the deaf and the external world. This paper proposed a component-based, vocabulary-extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of the five components. The proposed SLR framework consisted of two major parts. The first part obtained the component-based form of sign gestures and established the code table of the target sign gesture set using data from a reference subject. The second part, designed for new users, trained component classifiers using a training set suggested by the reference subject and classified unknown gestures with a code-matching method. Five subjects participated in this study, and recognition experiments under different sizes of training sets were conducted on a target gesture set consisting of 110 frequently used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can achieve large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one third of the gestures of the target gesture set) suggested by two reference subjects, average recognition accuracies of (82.6 ± 13.2)% and (79.7 ± 13.4)% were obtained for the 110 words, respectively, and the average recognition accuracy climbed to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50-60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.
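The code-matching step of the second part can be illustrated with a minimal sketch. The component codes and the three-word code table below are invented for illustration; in the framework itself, the five predicted components come from sEMG/ACC/GYRO component classifiers and the table covers the full 110-word vocabulary.

```python
# Hypothetical code table: each sign word is a 5-tuple of component codes
# (hand shape, axis, orientation, rotation, trajectory).
CODE_TABLE = {
    "thank_you": (3, 1, 0, 2, 5),
    "good":      (3, 1, 1, 2, 4),
    "morning":   (7, 0, 2, 1, 5),
}

def match_components(predicted):
    # Assign the unknown gesture to the word whose code agrees with the
    # predicted component codes in the most positions.
    def agreement(word):
        return sum(p == c for p, c in zip(predicted, CODE_TABLE[word]))
    return max(CODE_TABLE, key=agreement)
```

Because classification reduces to matching five component codes, extending the vocabulary only requires adding rows to the code table, not retraining on every new word.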
André Nogueira Xavier
According to Xavier (2006), there are signs in Brazilian Sign Language (Libras) that are typically produced with one hand, while others are made with both hands. However, recent studies document the production with both hands of signs that usually use only one hand, and vice-versa (XAVIER, 2011; XAVIER, 2013; BARBOSA, 2013). This study aims to discuss 27 Libras signs that are typically made with one hand and that, when articulated with both hands, present changes in their meanings. The data discussed here, though originally collected from observations of spontaneous signing by different Libras users, were elicited from two deaf patients in distinct sessions. After being presented with the two forms of the selected signs (made with one and with two hands), the patients were asked to create examples of use for each sign. The results showed that the duplication of hands, at least for the same sign in some cases, may occur due to different factors (such as plurality, aspect and intensity).
The article considers the question of the existence of the Croatian literary language in semiotic space, i.e., in the system of culture. In order to affirm the justification of the very term Croatian language, and thus to accept the thesis of the existence of such a language, the argumentation is directed toward theoretical investigation in the semiotic field. The article attempts to show that the discussions in post-Yugoslav linguistics are, conventionally speaking, not an 'ontological' problem but an 'epistemological' one. Thus, the important question is not whether the Croatian language, or any other language, e.g. Montenegrin, exists, but rather what it means for a literary language to exist or not to exist.
Rose, HL; Conama, JB
Linguistic imperialism—a term used to conceptualize the dominance of one language over others—has been debated in language policy for more than two decades. Spolsky (2004), for example, has questioned whether the spread of English was a result of language planning, or was incidental to colonialism and globalization. Phillipson (2007) contests this view, arguing that linguistic imperialism is not based on ‘conspiracy’, and is underpinned by evidence of explicit or implicit la...
Crestani, Anelise Henrich; Moraes, Anaelena Bragança de; Souza, Ana Paula Ramos de
To analyze the results of the validation of the construction of enunciative signs of language acquisition for children aged 3 to 12 months. The signs were built based on mechanisms of language acquisition in an enunciative perspective and on clinical experience with language disorders. The signs were submitted to judgment of clarity and relevance by a sample of six experts, doctors in linguistics with knowledge of psycholinguistics and the language clinic. In the validation of reliability, two judges/evaluators helped to apply the instruments to videos of 20% of the total sample of mother-infant dyads using the inter-evaluator method. The internal consistency method was applied to the total sample, which consisted of 94 mother-infant dyads for the contents of Phase 1 (3 to 6 months) and 61 mother-infant dyads for the contents of Phase 2 (7 to 12 months). The data were collected through the analysis of mother-infant interaction based on filming of the dyads and application of the parameters to be validated according to the child's age. Data were organized in a spreadsheet and then converted to computer applications for statistical analysis. The judgments of clarity/relevance indicated no modifications to be made in the instruments. The reliability test showed almost perfect agreement between judges (0.8 ≤ Kappa ≤ 1.0); only item 2 of Phase 1 showed substantial agreement (0.6 ≤ Kappa ≤ 0.79). The internal consistency for Phase 1 had alpha = 0.84, and Phase 2, alpha = 0.74. This demonstrates the reliability of the instruments. The results suggest adequacy as to the content validity of the instruments created for both age groups, demonstrating the relevance of the content of the enunciative signs of language acquisition.
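The two reliability statistics reported above can be computed as follows; this is a generic sketch of Cohen's kappa and Cronbach's alpha on toy data, not the study's own data or scripts.

```python
import numpy as np

def cohens_kappa(a, b):
    # Chance-corrected agreement between two judges' categorical ratings.
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    po = np.mean(a == b)                                        # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)   # chance agreement
    return (po - pe) / (1 - pe)

def cronbach_alpha(scores):
    # scores: (n_subjects, n_items) matrix; alpha measures internal consistency.
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
```

On the conventional scale used in the abstract, kappa between 0.6 and 0.79 counts as substantial agreement and 0.8 or above as almost perfect.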
Knapp, Heather Patterson; Corina, David P.
Language is proposed to have developed atop the human analog of the macaque mirror neuron system for action perception and production [Arbib M.A. 2005. From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics (with commentaries and author's response). "Behavioral and Brain Sciences, 28", 105-167; Arbib…
Mounty, Judith L; Pucci, Concetta T; Harmon, Kristen C
A primary tenet underlying American Sign Language/English bilingual education for deaf students is that early access to a visual language, developed in conjunction with language planning principles, provides a foundation for literacy in English. The goal of this study is to obtain an emic perspective on bilingual deaf readers transitioning from learning to read to reading to learn. Analysis of 12 interactive, semi-structured interviews identified informal and formal teaching and learning practices in ASL/English bilingual homes and classrooms. These practices value, reinforce, and support the bidirectional acquisition of both languages and provide a strong foundation for literacy.
Drawing on a lengthier review completed for the US National Institute for Literacy, this paper examines emerging technologies that are applicable to self-access and autonomous learning in the areas of listening and speaking, collaborative writing, reading and language structure, and online interaction. Digital media reviewed include podcasts, blogs, wikis, online writing sites, text-scaffolding software, concordancers, multiuser virtual environments, multiplayer games, and chatbots. For each of these technologies, we summarize recent research and discuss possible uses for autonomous language learning.
Cavender, Anna; Vanam, Rahul; Barney, Dane K; Ladner, Richard E; Riskin, Eve A
For Deaf people, access to the mobile telephone network in the United States is currently limited to text messaging, forcing communication in English as opposed to American Sign Language (ASL), the preferred language. Because ASL is a visual language, mobile video phones have the potential to give Deaf people access to real-time mobile communication in their preferred language. However, even today's best video compression techniques can not yield intelligible ASL at limited cell phone network bandwidths. Motivated by this constraint, we conducted one focus group and two user studies with members of the Deaf Community to determine the intelligibility effects of video compression techniques that exploit the visual nature of sign language. Inspired by eye tracking results that show high resolution foveal vision is maintained around the face, we studied region-of-interest encodings (where the face is encoded at higher quality) as well as reduced frame rates (where fewer, better quality, frames are displayed every second). At all bit rates studied here, participants preferred moderate quality increases in the face region, sacrificing quality in other regions. They also preferred slightly lower frame rates because they yield better quality frames for a fixed bit rate. The limited processing power of cell phones is a serious concern because a real-time video encoder and decoder will be needed. Choosing less complex settings for the encoder can reduce encoding time, but will affect video quality. We studied the intelligibility effects of this tradeoff and found that we can significantly speed up encoding time without severely affecting intelligibility. These results show promise for real-time access to the current low-bandwidth cell phone network through sign-language-specific encoding techniques.
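The region-of-interest idea (spend bits on the face, save them elsewhere) can be sketched as a quantization-parameter map over macroblocks. This is a hypothetical illustration, not the encoder settings used in the study; `base_qp` and `roi_bonus` are invented parameters, and a real encoder would apply this inside an actual rate controller.

```python
def qp_map(mb_cols, mb_rows, face_box, base_qp=32, roi_bonus=6):
    # face_box = (x0, y0, x1, y1) in macroblock coordinates (inclusive).
    # Face macroblocks get a lower QP (higher quality); background macroblocks
    # repay the cost so the frame's mean QP stays at base_qp.
    x0, y0, x1, y1 = face_box
    in_roi = lambda x, y: x0 <= x <= x1 and y0 <= y <= y1
    n_roi = sum(in_roi(x, y) for y in range(mb_rows) for x in range(mb_cols))
    n_bg = mb_cols * mb_rows - n_roi
    penalty = roi_bonus * n_roi / n_bg
    return {(x, y): base_qp - roi_bonus if in_roi(x, y) else base_qp + penalty
            for y in range(mb_rows) for x in range(mb_cols)}
```

Keeping the mean QP constant is a crude proxy for a fixed bit rate: quality moves to the face region without raising the overall bit budget, matching the participants' stated preference.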
Kastner, Itamar; Meir, Irit; Sandler, Wendy; Dachkovsky, Svetlana
This paper introduces data from Kafr Qasem Sign Language (KQSL), an as-yet undescribed sign language, and identifies the earliest indications of embedding in this young language. Using semantic and prosodic criteria, we identify predicates that form a constituent with a noun, functionally modifying it. We analyze these structures as instances of embedded predicates, exhibiting what can be regarded as very early stages in the development of subordinate constructions, and argue that these structures may bear directly on questions about the development of embedding and subordination in language in general. Deutscher (2009) argues persuasively that nominalization of a verb is the first step—and the crucial step—toward syntactic embedding. It has also been suggested that prosodic marking may precede syntactic marking of embedding (Mithun, 2009). However, the relevant data from the stage at which embedding first emerges have not previously been available. KQSL might be the missing piece of the puzzle: a language in which a noun can be modified by an additional predicate, forming a proposition within a proposition, sustained entirely by prosodic means. PMID:24917837
Ashraf, Md Izhar; Sinha, Sitabhra
Language, which allows complex ideas to be communicated through symbolic sequences, is a characteristic feature of our species and manifested in a multitude of forms. Using large written corpora for many different languages and scripts, we show that the occurrence probability distributions of signs at the left and right ends of words have a distinct heterogeneous nature. Characterizing this asymmetry using quantitative inequality measures, viz. information entropy and the Gini index, we show that the beginning of a word is less restrictive in sign usage than the end. This property is not simply attributable to the use of common affixes as it is seen even when only word roots are considered. We use the existence of this asymmetry to infer the direction of writing in undeciphered inscriptions that agrees with the archaeological evidence. Unlike traditional investigations of phonotactic constraints which focus on language-specific patterns, our study reveals a property valid across languages and writing systems. As both language and writing are unique aspects of our species, this universal signature may reflect an innate feature of the human cognitive phenomenon.
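The asymmetry measures named above (information entropy and the Gini index of sign occurrence distributions) can be illustrated on a toy corpus; the word list below is invented, and the actual analyses use large corpora and word roots.

```python
import math
from collections import Counter

def distribution(symbols):
    # Occurrence probability distribution over the observed signs.
    counts = Counter(symbols)
    n = sum(counts.values())
    return [c / n for c in counts.values()]

def entropy(p):
    # Shannon entropy in bits: higher = less restrictive sign usage.
    return -sum(q * math.log2(q) for q in p if q > 0)

def gini(p):
    # Gini index: 0 for a uniform distribution, approaching 1 when
    # probability is concentrated on few signs.
    p = sorted(p)
    n = len(p)
    cum = sum((i + 1) * q for i, q in enumerate(p))
    return 2 * cum / (n * sum(p)) - (n + 1) / n

words = ["arm", "bat", "cat", "dog", "egg", "fig", "gum", "hat"]  # toy corpus
first = distribution(w[0] for w in words)
last = distribution(w[-1] for w in words)
# In this toy corpus, word-initial signs are more varied than word-final ones,
# so entropy(first) > entropy(last) and gini(first) < gini(last).
```

Comparing the two ends of words with these measures is what lets the paper infer the direction of writing in undeciphered inscriptions: the less restrictive end is taken as the beginning.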
Athale, Ninad; Aldridge, Arianna; Malcarne, Vanessa L.; Nakaji, Melanie; Samady, Waheeda; Sadler, Georgia Robins
Few instruments have been translated and validated for people who use American Sign Language (ASL) as their preferred language. This study examined the reliability and validity of a new ASL version of the widely-used Multidimensional Health Locus of Control (MHLC) scales. Deaf individuals (N = 311) were shown the ASL version via videotape, and their responses were recorded. Confirmatory factor analysis supported the four-factor structure of the MHLC. Scale reliabilities (Cronbach’s alphas) ranged from .60 to .93. There were no apparent gender or ethnic differences. These results provide support for the new ASL version of the MHLC scales. PMID:20511286
N. Y. Gutareva
This article examines the origins and purposes of applying information technologies in the teaching of foreign languages from the perspectives of linguistics, foreign language teaching methodology, and psychology. Their main features have been identified in the works of native and foreign scholars in light of basic didactic principles, and new standards for selecting computer programs are pointed out. The author focuses on modern technologies that are especially important and in demand in language education, as they serve the goals and tasks of foreign language teaching and the interests of students. Purpose: to determine the advantages of using interactive tools in teaching foreign languages. Methodology: study and analysis of the psychological, pedagogical and methodological literature on the topic of investigation. Results: the analysis of the purposes and kinds of interactive tools has shown the importance of their application in practice. Practical implications: the results of this work can be used in courses on the theory and methodology of teaching foreign languages.
Millions of Americans in all age groups are affected by deafness and impaired hearing. They communicate with others using the American Sign Language (ASL). Teaching is tutorial (person-to-person) or with limited video content. We believe that high resolution 3D models and their animations can be used to effectively teach the ASL, with the following advantages over the traditional teaching approach: a) signing can be played at varying speeds and as many times as necessary, b) being 3-D constructs, models can be viewed from diverse angles, c) signing can be applied to different characters (male, female, child, elderly, etc.), d) special editing like close-ups, picture-in-picture, and phantom movements, can make learning easier, and e) clothing, surrounding environment and lighting conditions can be varied to present the student to less than ideal situations.
Kontra, Edit H.; Csizer, Kata
The aim of this study is to point out the relationship between foreign language learning motivation and sign language use among hearing impaired Hungarians. In the article we concentrate on two main issues: first, to what extent hearing impaired people are motivated to learn foreign languages in a European context; second, to what extent sign…
Carolina Hessel Silveira
This paper analyzes two films about deafness which have not been investigated in the Brazilian academic context: Mandy (directed by Alexander Mackendrick, 1952, England) and After the Silence (by Fred Gerber, 1996, USA). The analysis is supported by Cultural Studies and Deaf Studies, especially the concepts of cultural pedagogies, deaf culture, deaf identities, and sign language, as well as by the analysis of other films about deaf people conducted by Thoma (2004). Both films are classified as drama, and particular attention was given to how deaf characters are represented, highlighting scenes showing the difficulties deaf people face in a hearing society. It is worth noting that at the end of both films the deaf characters manage to speak and hear. The pedagogical impact of these films is questioned, as they show that the deaf may be able to speak and hear after using Sign Language. Deaf representations, deaf education and sign language are present in both films, although there is a difference in approach between them.
Carolina Hessel Silveira
The paper, which provides partial results of a master's dissertation, has sought to contribute to the Sign Language curriculum in deaf schooling. We began by understanding the importance of sign languages for deaf people's development and found that a large part of the deaf have hearing parents, which emphasises the significance of teaching LIBRAS (Brazilian Sign Language) in schools for the deaf. We should also consider the importance of this study in building deaf identities and strengthening deaf culture. We have obtained the theoretical basis from the so-called Deaf Studies and from experts in curriculum theories. The main objective of this study has been to conduct an analysis of the LIBRAS curriculum at work in schools for the deaf in Rio Grande do Sul, Brazil. The curriculum analysis has shown a degree of diversity: in some curricula, content from one year is repeated in the next with no articulation. In others, one can find concern for issues of deaf identity and culture, but some include contents that are not related to LIBRAS or deaf culture, but rather to disciplines for the deaf in general. By providing positive and negative aspects, the analysis data may help in discussions about difficulties, progress and problems in LIBRAS teacher education for deaf students.
Lieberman, Amy M
Visual attention is a necessary prerequisite to successful communication in sign language. The current study investigated the development of attention-getting skills in deaf native-signing children during interactions with peers and teachers. Seven deaf children (aged 21-39 months) and five adults were videotaped during classroom activities for approximately 30 hr. Interactions were analyzed in depth to determine how children obtained and maintained attention. Contrary to previous reports, children were found to possess a high level of communicative competence from an early age. Analysis of peer interactions revealed that children used a range of behaviors to obtain attention with peers, including taps, waves, objects, and signs. Initiations were successful approximately 65% of the time. Children followed up failed initiation attempts by repeating the initiation, using a new initiation, or terminating the interaction. Older children engaged in longer and more complex interactions than younger children. Children's early exposure to and proficiency in American Sign Language is proposed as a likely mechanism that facilitated their communicative competence.
... especially the human use of spoken or written words as a communication system. It is against this background that this study examined the use of Hausa Language in Information and Communication technology, specifically as a medium of dissemination of information and/or communication. The Information Manager Vol.
Rodríguez, Julio C.
This article describes how a multi-institutional, proficiency-based program engages stakeholders in design thinking to discover and explore solutions to perennial problems in technology integration into world language education (WLE). Examples of replicable activities illustrate the strategies used to fuel innovation efforts, including fostering…
This article aims to broaden the discussion on verbal-visual utterances, reflecting upon theoretical assumptions of the Bakhtin Circle that can reinforce the argument that the utterances of a language of visual-gestural modality convey plastic-pictorial and spatial values of signs also through non-manual markers (NMMs). This research highlights the difference between affective expressions, which are paralinguistic communications that may complement an utterance, and verbal-visual grammatical markers, which are linguistic because they are part of the architecture of the phonological, morphological, syntactic-semantic and discursive levels of a particular language. These markers are described taking Brazilian Sign Language (Libras) as a starting point, thereby including this language in discussions of verbal-visual discourse, and investigating the need for research on this discourse also in the linguistic analyses of languages of oral-auditory modality, with translinguistics as an area of knowledge that analyzes discourse by focusing upon the verbal-visual markers used by subjects in their utterance acts.
Natalia Alexandrovna Kameneva
This article analyzes the use of information technologies in the context of a blended technology approach to learning foreign languages in higher education institutions. Distance learning tools can be categorized as synchronous (webinars, video conferencing, case technology, chat, ICQ, Skype, interactive whiteboards) or asynchronous (blogs, forums, Twitter, video and audio podcasts, wikis, online testing). Sociological and psychological aspects of their application in the educational process are also considered. DOI: http://dx.doi.org/10.12731/2218-7405-2013-8-41
Ramesh Mahadev Kagalkar
In the field of sign language and gesture recognition, much research has been done over the past three decades. This has led to a gradual transition from isolated to continuous, and from static to dynamic, gesture recognition over restricted vocabularies. At present, human-machine interactive systems facilitate communication between the deaf or hearing impaired and the hearing world in real-life situations. In order to improve recognition accuracy, many researchers have deployed methods such as HMMs, artificial neural networks, and the Kinect platform. Effective algorithms for segmentation, classification, pattern matching and recognition have evolved. The main purpose of this paper is to analyze these methods and to compare them effectively, which will enable the reader to arrive at an optimal solution. This creates both challenges and opportunities for sign language recognition research.
Chaveiro, Neuma; Duarte, Soraya Bianca Reis; Freitas, Adriana Ribeiro de; Barbosa, Maria Alves; Porto, Celmo Celeno; Fleck, Marcelo Pio de Almeida
To construct versions of the WHOQOL-BREF and WHOQOL-DIS instruments in Brazilian Sign Language to evaluate the Brazilian deaf population's quality of life. The methodology proposed by the World Health Organization (WHOQOL-BREF and WHOQOL-DIS) was used to construct instruments adapted to the deaf community using Brazilian Sign Language (Libras). The research for constructing the instrument took place in 13 phases: 1) creating the QUALITY OF LIFE sign; 2) developing the answer scales in Libras; 3) translation by a bilingual group; 4) synthesized version; 5) first back translation; 6) production of the version in Libras to be provided to the focal groups; 7) carrying out the focal groups; 8) review by a monolingual group; 9) revision by the bilingual group; 10) semantic/syntactic analysis and second back translation; 11) re-evaluation of the back translation by the bilingual group; 12) recording the version into the software; 13) developing the WHOQOL-BREF and WHOQOL-DIS software in Libras. Characteristics peculiar to the culture of the deaf population indicated the necessity of adapting the application methodology of focal groups composed of deaf people. The writing conventions of sign languages have not yet been consolidated, leading to difficulties in graphically registering the translation phases. The linguistic structures that caused major problems in translation were those that included idiomatic Portuguese expressions, for many of which there are no equivalent concepts between Portuguese and Libras. In the end, it was possible to create the WHOQOL-BREF and WHOQOL-DIS software in Libras. The WHOQOL-BREF and the WHOQOL-DIS in Libras will allow the deaf to express themselves about their quality of life in an autonomous way, making it possible to investigate these issues more accurately.
Courtin, C.; Herve, P. -Y.; Petit, L.; Zago, L.; Vigneau, M.; Beaucousin, V.; Jobard, G.; Mazoyer, B.; Mellet, E.; Tzourio-Mazoyer, N.
"Highly iconic" structures in Sign Language enable a narrator to act, switch characters, describe objects, or report actions in four-dimensions. This group of linguistic structures has no real spoken-language equivalent. Topographical descriptions are also achieved in a sign-language specific manner via the use of signing-space and…
Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane
There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.
Dr. Paweł Rutkowski is head of the Section for Sign Linguistics at the University of Warsaw. He is a general linguist and a specialist in the field of syntax of natural languages, carrying out research on Polish Sign Language (polski język migowy, PJM). He has been awarded a number of prizes, grants and scholarships by such institutions as the Foundation for Polish Science, the Polish Ministry of Science and Higher Education, the National Science Centre, Poland, the Polish-U.S. Fulbright Commission, the Kosciuszko Foundation and the DAAD. Dr. Rutkowski leads the team developing the Corpus of Polish Sign Language and the Corpus-based Dictionary of Polish Sign Language, the first dictionary of this language prepared in compliance with modern lexicographical standards. The dictionary is an open-access publication, available freely at the following address: http://www.slownikpjm.uw.edu.pl/en/. This interview took place at eLex 2017, a biennial conference on electronic lexicography, where Dr. Rutkowski was awarded the Adam Kilgarriff Prize and gave a keynote address entitled Sign language as a challenge to electronic lexicography: The Corpus-based Dictionary of Polish Sign Language and beyond. The interview was conducted by Dr. Victoria Nyst from Leiden University, Faculty of Humanities, and Dr. Iztok Kosem from the University of Ljubljana, Faculty of Arts.
Manrique, Elizabeth; Enfield, N. J.
Practices of other-initiated repair deal with problems of hearing or understanding what another person has said in the fast-moving turn-by-turn flow of conversation. As such, other-initiated repair plays a fundamental role in the maintenance of intersubjectivity in social interaction. This study finds and analyses a special type of other-initiated repair that is used in turn-by-turn conversation in a sign language: Argentine Sign Language (Lengua de Señas Argentina or LSA). We describe a type of response termed a “freeze-look,” which occurs when a person has just been asked a direct question: instead of answering the question in the next turn position, the person holds still while looking directly at the questioner. In these cases it is clear that the person is aware of having just been addressed and is not otherwise accounting for their delay in responding (e.g., by displaying a “thinking” face or hesitation, etc.). We find that this behavior functions as a way for an addressee to initiate repair by the person who asked the question. The “freeze-look” results in the questioner “re-doing” their action of asking a question, for example by repeating or rephrasing it. Thus, we argue that the “freeze-look” is a practice for other-initiation of repair. In addition, we argue that it is an “off-record” practice, thus contrasting with known on-record practices such as saying “Huh?” or equivalents. The findings aim to contribute to research on human understanding in everyday turn-by-turn conversation by looking at an understudied sign language, with possible implications for our understanding of visual bodily communication in spoken languages as well. PMID:26441710
Denmark, Tanya; Marshall, Jane; Mummery, Cath; Roy, Penny; Woll, Bencie; Atkinson, Joanna
Most existing tests of memory and verbal learning in adults were created for spoken languages, and are unsuitable for assessing deaf people who rely on signed languages. In response to this need for sign language measures, the British Sign Language Verbal Learning and Memory Test (BSL-VLMT) was developed. It follows the format of the English language Hopkins Verbal Learning Test Revised, using standardized video-presentation with novel stimuli and instructions wholly in British Sign Language, and no English language requirement. Data were collected from 223 cognitively healthy deaf signers aged 50-89 and 12 deaf patients diagnosed with dementia. Normative data percentiles were derived for clinical use, and receiver-operating characteristic curves computed to explore the clinical potential and diagnostic sensitivity and specificity. The test showed good discrimination between the normative and clinical samples, providing preliminary evidence of clinical utility for identifying learning and memory impairment in older deaf signers with neurodegeneration. This innovative video testing approach transforms the ability to accurately detect memory impairments in deaf people and avoids the problems of using interpreters, with international potential for adapting similar tests into other signed languages. © The Author 2016. Published by Oxford University Press. All rights reserved.
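The receiver-operating characteristic analysis mentioned above can be sketched in a few lines. The sketch below is purely illustrative: the scores are invented and have no connection to the BSL-VLMT data; it only shows how sensitivity and specificity trade off across candidate cutoffs when lower scores indicate impairment.

```python
def roc_points(healthy_scores, clinical_scores):
    """Return (threshold, sensitivity, specificity) triples.

    Lower scores are treated as indicating impairment, so a case is
    'positive' when its score falls at or below the threshold."""
    points = []
    for t in sorted(set(healthy_scores + clinical_scores)):
        tp = sum(1 for s in clinical_scores if s <= t)  # impaired, flagged
        fp = sum(1 for s in healthy_scores if s <= t)   # healthy, flagged
        sens = tp / len(clinical_scores)
        spec = 1 - fp / len(healthy_scores)
        points.append((t, sens, spec))
    return points

healthy = [21, 24, 26, 27, 28, 30]    # invented recall totals
clinical = [12, 14, 15, 17, 20, 25]

# Cutoff maximising Youden's J = sensitivity + specificity - 1
best = max(roc_points(healthy, clinical), key=lambda p: p[1] + p[2])
print(best)  # → (20, 0.833..., 1.0)
```

In practice a library routine such as scikit-learn's `roc_curve` would replace the hand-rolled loop; the explicit version is shown only to make the threshold sweep visible.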
Yang, Xidong; Chen, Xiang; Cao, Xiang; Wei, Shengjing; Zhang, Xu
Chinese Sign Language (CSL) subword recognition based on surface electromyography (sEMG), accelerometer (ACC), and gyroscope (GYRO) sensors was explored in this paper. In order to fuse effectively the information of these three kinds of sensors, the classification abilities of sEMG, ACC, GYRO, and their combinations in three common sign components (one or two handed, hand orientation, and hand amplitude) were evaluated first, and then an optimized tree-structure classification framework was proposed for CSL subword recognition. Eight subjects participated in this study and recognition experiments under different testing conditions were implemented on a target set consisting of 150 CSL subwords. The proposed optimized tree-structure classification framework based on sEMG, ACC, and GYRO obtained the best performance among seven different testing conditions with single sensor, paired-sensor fusion, and three-sensor fusion, and overall recognition accuracies of 94.31% and 87.02% were obtained for 150 CSL subwords in a user-specific test and user-independent test, respectively. Our study could lay a basis for the implementation of a large-vocabulary sign language recognition system based on sEMG, ACC, and GYRO sensors.
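The general idea of a tree-structured framework, coarse sign components pruning the candidate set before a fine classifier decides, can be sketched as follows. The component classifiers, feature names, and lexicon below are all placeholder stand-ins, not the authors' actual models or data:

```python
def classify_subword(features, lexicon, component_clf, fine_clf):
    """Tree-structure classification: prune by sign components, then decide.

    lexicon maps each subword to its component labels."""
    candidates = list(lexicon)
    for component in ("handedness", "orientation", "amplitude"):
        label = component_clf[component](features)        # e.g. sEMG/ACC/GYRO based
        candidates = [w for w in candidates
                      if lexicon[w][component] == label]   # prune the search space
    return fine_clf(features, candidates)                  # decide among survivors

# Toy lexicon and threshold classifiers, invented for illustration only.
lexicon = {
    "A": {"handedness": "one", "orientation": "up",   "amplitude": "small"},
    "B": {"handedness": "two", "orientation": "up",   "amplitude": "large"},
    "C": {"handedness": "one", "orientation": "down", "amplitude": "small"},
}
component_clf = {
    "handedness":  lambda f: "one" if f["emg_rms"] < 0.5 else "two",
    "orientation": lambda f: "up" if f["acc_z"] > 0 else "down",
    "amplitude":   lambda f: "small" if f["gyro_range"] < 1.0 else "large",
}
fine_clf = lambda f, cands: cands[0]  # stand-in for the final template matcher

print(classify_subword({"emg_rms": 0.2, "acc_z": 0.9, "gyro_range": 0.3},
                       lexicon, component_clf, fine_clf))  # → A
```

The benefit of the tree structure is that each cheap component decision shrinks the set the expensive fine classifier must consider, which matters for a 150-subword vocabulary.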
Yang, Ruiduo; Sarkar, Sudeep; Loeding, Barbara
We consider two crucial problems in continuous sign language recognition from unaided video sequences. At the sentence level, we consider the movement epenthesis (me) problem and at the feature level, we consider the problem of hand segmentation and grouping. We construct a framework that can handle both of these problems based on an enhanced, nested version of the dynamic programming approach. To address movement epenthesis, a dynamic programming (DP) process employs a virtual me option that does not need explicit models. We call this the enhanced level building (eLB) algorithm. This formulation also allows the incorporation of grammar models. Nested within this eLB is another DP that handles the problem of selecting among multiple hand candidates. We demonstrate our ideas on four American Sign Language data sets with simple background, with the signer wearing short sleeves, with complex background, and across signers. We compared the performance with Conditional Random Fields (CRF) and Latent Dynamic-CRF-based approaches. The experiments show more than 40 percent improvement over CRF or LDCRF approaches in terms of the frame labeling rate. We show the flexibility of our approach when handling a changing context. We also find a 70 percent improvement in sign recognition rate over the unenhanced DP matching algorithm that does not accommodate the me effect.
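The key idea of the enhanced level building algorithm, a dynamic programming pass in which a virtual movement-epenthesis label can absorb frames between signs without an explicit model, can be illustrated with a much-simplified sketch. The frame representation, cost functions, and constants here are invented for illustration and omit the nested hand-candidate selection:

```python
ME_COST = 1.0  # flat per-frame penalty for the virtual movement-epenthesis label

def elb(frames, models, max_levels):
    """Simplified level building with a virtual 'ME' option.

    models maps a sign name to a callable returning a match cost for a
    frame segment; 'ME' segments need no model, only the flat penalty."""
    n = len(frames)
    best = {0: (0.0, [])}  # end position -> (cost, label sequence)
    for _ in range(max_levels):          # each level appends one more segment
        nxt = dict(best)
        for start, (c0, labs) in best.items():
            for end in range(start + 1, n + 1):
                seg = frames[start:end]
                options = [(name, m(seg)) for name, m in models.items()]
                options.append(("ME", ME_COST * len(seg)))  # model-free option
                for name, c in options:
                    if end not in nxt or c0 + c < nxt[end][0]:
                        nxt[end] = (c0 + c, labs + [name])
        best = nxt
    return best[n]

# Toy data: a 'sign' that matches frames near value 1; the middle frame is
# transition movement that no sign model fits.
frames = [1, 1, 5, 1, 1]
models = {"SIGN1": lambda seg: sum(abs(x - 1) for x in seg) + 0.1}
cost, labels = elb(frames, models, max_levels=4)
print(labels)  # → ['SIGN1', 'ME', 'SIGN1']
```

The virtual label lets the matcher skip the transition frame cheaply instead of forcing a sign model onto it, which is the essence of handling movement epenthesis without explicit me models.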
Meade, Gabriela; Midgley, Katherine J; Sevcikova Sehyr, Zed; Holcomb, Phillip J; Emmorey, Karen
In an implicit phonological priming paradigm, deaf bimodal bilinguals made semantic relatedness decisions for pairs of English words. Half of the semantically unrelated pairs had phonologically related translations in American Sign Language (ASL). As in previous studies with unimodal bilinguals, targets in pairs with phonologically related translations elicited smaller negativities than targets in pairs with phonologically unrelated translations within the N400 window. This suggests that the same lexicosemantic mechanism underlies implicit co-activation of a non-target language, irrespective of language modality. In contrast to unimodal bilingual studies that find no behavioral effects, we observed phonological interference, indicating that bimodal bilinguals may not suppress the non-target language as robustly. Further, there was a subset of bilinguals who were aware of the ASL manipulation (determined by debrief), and they exhibited an effect of ASL phonology in a later time window (700-900ms). Overall, these results indicate modality-independent language co-activation that persists longer for bimodal bilinguals. Copyright © 2017 Elsevier Inc. All rights reserved.
Cormier, Kearsy; Schembri, Adam; Vinson, David; Orfanidou, Eleni
Age of acquisition (AoA) effects have been used to support the notion of a critical period for first language acquisition. In this study, we examine AoA effects in deaf British Sign Language (BSL) users via a grammaticality judgment task. When English reading performance and nonverbal IQ are factored out, results show that accuracy of grammaticality judgement decreases as AoA increases, until around age 8, thus showing the unique effect of AoA on grammatical judgement in early learners. No such effects were found in those who acquired BSL after age 8. These late learners appear to have first language proficiency in English instead, which may have been used to scaffold learning of BSL as a second language later in life. Copyright © 2012 Elsevier B.V. All rights reserved.
Höcker, J T; Letzel, S; Münster, E
Deaf citizens are confronted with barriers in a health-care system shaped by hearing people. The German legislature therefore provides for sign language interpreters at the expense of the health insurers. The present study examines to what extent deaf people are informed about this provision and make use of these interpreters. Traditional surveys are based on spoken and written language and are therefore unsuitable for the target audience. Because of this, a cross-sectional online study was performed using sign language videos and visually oriented answers to allow barrier-free participation. A multivariate analysis identified factors that increase deaf people's risk of not being informed about the interpreter provision: of 841 deaf participants, 31.4% were not informed of their rights. 41.3% had experience with an interpreter at the doctor's and reported a mainly trouble-free reimbursement of costs. Young deaf people and those with modest education have a higher risk of not being informed about the interpreter provision. Further information is necessary to provide equality of opportunity for deaf patients utilising medical services. © Georg Thieme Verlag KG Stuttgart · New York.
Prior studies investigating cortical processing in Deaf signers suggest that life-long experience with sign language and/or auditory deprivation may alter the brain's anatomical structure and the function of brain regions typically recruited for auditory processing (Emmorey et al., 2010; Pénicaud et al., 2013, inter alia). We report the first investigation of the task-negative network in Deaf signers and its functional connectivity, the temporal correlations among spatially remote neurophysiological events. We show that Deaf signers manifest increased functional connectivity between posterior cingulate/precuneus and left medial temporal gyrus (MTG), but also between inferior parietal lobe and medial temporal gyrus in the right hemisphere: areas that have been found to show functional recruitment specifically during sign language processing. These findings suggest that the organization of the brain at the level of inter-network connectivity is likely affected by experience with processing visual language, although sensory deprivation could be another source of the difference. We hypothesize that connectivity alterations in the task-negative network reflect predictive/automatized processing of the visual signal.
Olson, Andrea M; Swabey, Laurie
Despite federal laws that mandate equal access and communication in all healthcare settings for deaf people, consistent provision of quality interpreting in healthcare settings is still not a reality, as recognized by deaf people and American Sign Language (ASL)-English interpreters. The purpose of this study was to better understand the work of ASL interpreters employed in healthcare settings, which can then inform the training and credentialing of interpreters, with the ultimate aim of improving the quality of healthcare and communication access for deaf people. Based on job analysis, researchers designed an online survey with 167 task statements representing 44 categories. American Sign Language interpreters (N = 339) rated the importance of, and frequency with which they performed, each of the 167 tasks. Categories with the highest average importance ratings included language and interpreting, situation assessment, ethical and professional decision making, managing the discourse, and monitoring, managing and/or coordinating appointments. Categories with the highest average frequency ratings included the following: dress appropriately, adapt to a variety of physical settings and locations, adapt to working with a variety of providers in a variety of roles, deal with uncertain and unpredictable work situations, and demonstrate cultural adaptability. To achieve health equity for the deaf community, the training and credentialing of interpreters needs to be systematically addressed.
Although American Sign Language (ASL) is the third most commonly used primary language in the United States, physicians are often not adequately prepared for the challenges of conducting an interview with a deaf patient who signs. A search of MEDLINE and PsycINFO databases for research on physician-patient communication and deaf people who use ASL was performed. Expert opinion helped guide discussion and recommendations. Few articles examined physician-patient communication involving ASL. Deaf people and their physicians report difficulties with physician-patient communication. Deaf people also report fear that their health care is substandard because of these difficulties. Preparing residents and medical students for working with patients and families who communicate in ASL presents many opportunities for teaching about physician-patient communication. ASL is quite different from English, and users of ASL often have sociocultural norms that differ from those of the majority culture. In addition to learning how to communicate with patients and families across languages and cultures, students and residents can learn how to collaborate with interpreters and how low literacy impacts physician-patient communication. Opportunities to teach about family dynamics, disability issues, and nonverbal communication also present themselves when working with families with Deaf members. Physician-patient communication involving ASL is an area that is ready for further research.
Galla, Candace Kaleimamoowahinekapu
Within the last two decades, there has been increased interest in how technology supports Indigenous language revitalization and reclamation efforts. This paper considers the effect technology has on Indigenous language learning and teaching, while conceptualizing how language educators, speakers, learners, and technology users holistically…
In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation is analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish Sign Language (FinSL) and spoken Finnish. He was born deaf but received a cochlear implant at the age of five. The data consist of a spoken and a signed version of "The Frog Story". The analysis shows that evaluative devices and expressions differ in the spoken and signed stories told by the child. In his Finnish story he uses mostly lexical devices: comments on a character and the character's actions, as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.
This paper reports the design and analysis of an American Sign Language (ASL) alphabet translation system implemented in hardware using a Field-Programmable Gate Array. The system process consists of three stages, the first being communication with the neuromorphic camera (also called a Dynamic Vision Sensor, DVS) using the Universal Serial Bus protocol. The second stage is feature extraction from the events generated by the DVS, presented as digital image processing algorithms developed in software, which aim to reduce redundant information and prepare the data for the third stage. The last stage of the system process is the classification of the ASL alphabet, achieved with a single artificial neural network implemented in digital hardware for higher speed. The overall result is the development of a classification system using the ASL signs' contours, fully implemented in a reconfigurable device. The experimental results consist of a comparative analysis of the recognition rate among the alphabet signs using the neuromorphic camera, in order to verify the proper operation of the digital image processing algorithms. In experiments performed with 720 samples of 24 signs, a recognition accuracy of 79.58% was obtained.
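The event pre-processing stage described above, accumulating sparse DVS events into a frame and discarding redundant or noisy pixels before classification, can be sketched in software. The grid size, event tuple layout, and the simple isolated-pixel rule below are illustrative assumptions, not the paper's exact algorithms:

```python
def events_to_frame(events, w, h):
    """Accumulate DVS events (x, y, polarity, timestamp) into a binary frame."""
    frame = [[0] * w for _ in range(h)]
    for x, y, _pol, _t in events:
        frame[y][x] = 1          # any event marks the pixel active
    return frame

def denoise(frame):
    """Keep a pixel only if at least one 4-neighbour is also active."""
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if frame[y][x]:
                nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
                if any(0 <= ny < h and 0 <= nx < w and frame[ny][nx]
                       for ny, nx in nbrs):
                    out[y][x] = 1
    return out

# Two adjacent events form a contour fragment; the third is isolated noise.
events = [(1, 1, 1, 0), (2, 1, 1, 5), (4, 4, 0, 7)]
clean = denoise(events_to_frame(events, 6, 6))
print(sum(map(sum, clean)))  # → 2
```

On the FPGA the same kind of neighbourhood filter would be a small pipelined kernel; the Python version only makes the data reduction explicit.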
Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.
In this reply to Salverda (2016), we address a critique of the claims made in our recent study of real-time processing of American Sign Language (ASL) signs using a novel visual world eye-tracking paradigm (Lieberman, Borovsky, Hatrak, & Mayberry, 2015). Salverda asserts that our data do not support our conclusion that native signers and…
Dijk, R.J.M. van; Christoffels, I.K.; Postma, A.; Hermans, D.
In two experiments we investigated the relationship between the working memory skills of sign language interpreters and the quality of their interpretations. In Experiment 1, we found that scores on 3-back tasks with signs and words were not related to the quality of interpreted narratives. …
Ryumin, D.; Karpov, A. A.
In this article, we propose a new method for the parametric representation of the human lips region. The functional diagram of the method is described, and implementation details are given with an explanation of its key stages and features. The results of automatic detection of the regions of interest are illustrated. The speed of the method on several computers with different performance is reported. This universal method allows applying a parametric representation of the speaker's lips to tasks in biometrics, computer vision, machine learning, and automatic recognition of faces, elements of sign languages, and audio-visual speech, including lip-reading.
Vilic, Adnan; Petersen, John Asger; Hoppe, Karsten
This paper presents a data-driven approach to graphically presenting text-based patient journals while still maintaining all textual information. The system first creates a timeline representation of a patient's physiological condition during an admission, which is assessed by electronically monitoring vital signs and then combining these into Early Warning Scores (EWS). Hereafter, techniques from Natural Language Processing (NLP) are applied on the existing patient journal to extract all entries. Finally, the two methods are combined into an interactive timeline featuring the ability to see drastic changes in the patient's health, thereby enabling staff to see where in the journal critical events have taken place.
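The Early Warning Score idea referenced above, banding each vital sign and summing the band scores, can be shown in miniature. The bands below are simplified for illustration and are not a clinical chart (real systems such as NEWS2 use more parameters and officially specified thresholds):

```python
def band(value, bands):
    """bands: ordered (upper_bound_exclusive, score) pairs; the final
    entry uses an infinite bound so every value falls in some band."""
    for upper, score in bands:
        if value < upper:
            return score
    return bands[-1][1]

def early_warning_score(vitals):
    """Sum per-parameter band scores; higher totals mean greater concern."""
    inf = float("inf")
    return (
        band(vitals["resp_rate"], [(9, 3), (12, 1), (21, 0), (25, 2), (inf, 3)])
        + band(vitals["pulse"], [(41, 3), (51, 1), (91, 0), (111, 1), (inf, 2)])
        + band(vitals["temp"], [(35.1, 3), (36.1, 1), (38.1, 0), (39.1, 1), (inf, 2)])
    )

print(early_warning_score({"resp_rate": 22, "pulse": 95, "temp": 38.5}))  # → 4
```

Plotting such scores over an admission yields exactly the kind of timeline the paper overlays with NLP-extracted journal entries.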
Li, Qiang; Xia, Shuang; Zhao, Fei; Qi, Ji
The purpose of this study was to assess functional changes in the cerebral cortex in people with different sign language experience and hearing status whilst observing and imitating Chinese Sign Language (CSL) using functional magnetic resonance imaging (fMRI). 50 participants took part in the study, and were divided into four groups according to their hearing status and experience of using sign language: prelingual deafness signer group (PDS), normal hearing non-signer group (HnS), native signer group with normal hearing (HNS), and acquired signer group with normal hearing (HLS). fMRI images were acquired from all subjects while they performed block-designed tasks that involved observing and imitating sign language stimuli. Nine activation areas were found in response to undertaking either the observation or imitation CSL task, and three activated areas were found only when undertaking the imitation task. Of those, the PDS group had significantly greater activation areas in terms of the cluster size of the activated voxels in the bilateral superior parietal lobule, cuneate lobe and lingual gyrus in response to undertaking either the observation or the imitation CSL task than the HnS, HNS and HLS groups. The PDS group also showed significantly greater activation in the bilateral inferior frontal gyrus, which was also found in the HNS and HLS groups but not in the HnS group. This indicates that deaf signers have better sign language proficiency, because they engage more actively with the phonetic and semantic elements. In addition, activations of the bilateral superior temporal gyrus and inferior parietal lobule were only found in the PDS group and HNS group, and not in the other two groups, which indicates that the area for sign language processing appears to be sensitive to the age of language acquisition. After reading this article, readers will be able to: discuss the relationship between sign language and its neural mechanisms. Copyright © 2014 Elsevier Inc.
Beecher, Larissa; Childre, Amy
This study evaluated the impact of a comprehensive reading program enhanced with sign language on the literacy and language skills of three elementary school students with intellectual and developmental disabilities. Students received individual and small group comprehensive reading instruction for approximately 55 minutes per session. Reading…
Cawthon, Stephanie W.; Winton, Samantha M.; Garberoglio, Carrie Lou; Gobble, Mark E.
Students who are deaf or hard of hearing (SDHH) often need accommodations to participate in large-scale standardized assessments. One way to bridge the gap between the language of the test (English) and a student's linguistic background (often including American Sign Language [ASL]) is to present test items in ASL. The specific aim of this project…
Appelo, L.; de Jong, Franciska M.G.
TWLT is an acronym for Twente Workshop(s) on Language Technology. These workshops on natural language theory and technology are organised by Project Parlevink (sometimes with the help of others), a language theory and technology project conducted at the Department of Computer Science of the…
Andrew, Kathy N; Hoshooley, Jennifer; Joanisse, Marc F
We investigated the robust correlation between American Sign Language (ASL) and English reading ability in 51 young deaf signers ages 7;3 to 19;0. Signers were divided into 'skilled' and 'less-skilled' signer groups based on their performance on three measures of ASL. We next assessed reading comprehension of four English sentence structures (actives, passives, pronouns, reflexive pronouns) using a sentence-to-picture-matching task. Of interest was the extent to which ASL proficiency provided a foundation for lexical and syntactic processes of English. Skilled signers outperformed less-skilled signers overall. Error analyses further indicated greater single-word recognition difficulties in less-skilled signers marked by a higher rate of errors reflecting an inability to identify the actors and actions described in the sentence. Our findings provide evidence that increased ASL ability supports English sentence comprehension both at the levels of individual words and syntax. This is consistent with the theory that first language learning promotes second language through transference of linguistic elements irrespective of the transparency of mapping of grammatical structures between the two languages.