Corina, David P.; Hafer, Sarah; Welch, Kearnan
This paper examines the concept of phonological awareness (PA) as it relates to the processing of American Sign Language (ASL). We present data from a recently developed test of PA for ASL and examine whether sign language experience impacts the use of metalinguistic routines necessary for completion of our task. Our data show that deaf signers…
Hildebrandt, Ursula; Corina, David
Investigates deaf and hearing subjects' ratings of American Sign Language (ASL) signs to assess whether linguistic experience shapes judgments of sign similarity. Findings are consistent with linguistic theories that posit movement and location as core structural elements of syllable structure in ASL.
Tyrone, Martha E; Mauk, Claude E
This study examines sign lowering as a form of phonetic reduction in American Sign Language. Phonetic reduction occurs in the course of normal language production, when instead of producing a carefully articulated form of a word, the language user produces a less clearly articulated form. When signs are produced in context by native signers, they often differ from the citation forms of signs. In some cases, phonetic reduction is manifested as a sign being produced at a lower location than in the citation form. Sign lowering has been documented previously, but this is the first study to examine it in phonetic detail. The data presented here are tokens of the sign WONDER, as produced by six native signers, in two phonetic contexts and at three signing rates, which were captured by optoelectronic motion capture. The results indicate that sign lowering occurred for all signers, according to the factors we manipulated. Sign production was affected by several phonetic factors that also influence speech production, namely, production rate, phonetic context, and position within an utterance. In addition, we have discovered interesting variations in sign production, which could underlie distinctions in signing style, analogous to accent or voice quality in speech.
Hall, Matthew L; Ferreira, Victor S; Mayberry, Rachel I
Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.
Bochner, Joseph H.; Samar, Vincent J.; Hauser, Peter C.; Garrison, Wayne M.; Searls, J. Matt; Sanders, Cynthia A.
American Sign Language (ASL) is one of the most commonly taught languages in North America. Yet, few assessment instruments for ASL proficiency have been developed, none of which have adequately demonstrated validity. We propose that the American Sign Language Discrimination Test (ASL-DT), a recently developed measure of learners' ability to…
Mann, Wolfgang; Roy, Penny; Morgan, Gary
This study describes the adaptation process of a vocabulary knowledge test for British Sign Language (BSL) into American Sign Language (ASL) and presents results from the first round of pilot testing with 20 deaf native ASL signers. The web-based test assesses the strength of deaf children's vocabulary knowledge by means of different mappings of…
Shaw, Emily; Delaporte, Yves
Examinations of the etymology of American Sign Language have typically involved superficial analyses of signs as they exist over a short period of time. While it is widely known that ASL is related to French Sign Language, there has yet to be a comprehensive study of this historic relationship between their lexicons. This article presents…
Williams, Joshua T.; Newman, Sharlene D.
A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however there have been relatively fewer studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…
Caselli, Naomi K; Sehyr, Zed Sevcikova; Cohen-Goldberg, Ariel M; Emmorey, Karen
ASL-LEX is a lexical database that catalogues information about nearly 1,000 signs in American Sign Language (ASL). It includes the following information: subjective frequency ratings from 25-31 deaf signers, iconicity ratings from 21-37 hearing non-signers, videoclip duration, sign length (onset and offset), grammatical class, and whether the sign is initialized, a fingerspelled loan sign, or a compound. Information about English translations is available for a subset of signs (e.g., alternate translations, translation consistency). In addition, phonological properties (sign type, selected fingers, flexion, major and minor location, and movement) were coded and used to generate sub-lexical frequency and neighborhood density estimates. ASL-LEX is intended for use by researchers, educators, and students who are interested in the properties of the ASL lexicon. An interactive website where the database can be browsed and downloaded is available at http://asl-lex.org.
Malaia, Evie; Borneman, Joshua D; Wilbur, Ronnie B
The ability to convey information is a fundamental property of communicative signals. For sign languages, which are overtly produced with multiple, completely visible articulators, the question arises as to how the various channels co-ordinate and interact with each other. We analyze motion capture data of American Sign Language (ASL) narratives, and show that the capacity of information throughput, mathematically defined, is highest on the dominant hand (DH). We further demonstrate that information transfer capacity is also significant for the non-dominant hand (NDH), and the head channel too, as compared to control channels (ankles). We discuss both redundancy and independence in articulator motion in sign language, and argue that the NDH and the head articulators contribute to the overall information transfer capacity, indicating that they are neither completely redundant to, nor completely independent of, the DH.
Instructors in 5 American Sign Language--English Interpreter Programs and 4 Deaf Studies Programs in Canada were interviewed and asked to discuss their experiences as educators. Within a qualitative research paradigm, their comments were grouped into a number of categories tied to the social construction of American Sign Language--English interpreters, such as learners' age and education and the characteristics of good citizens within the Deaf community. According to the participants, younger students were adept at language acquisition, whereas older learners more readily understood the purpose of lessons. Children of deaf adults were seen as more culturally aware. The participants' beliefs echoed P. Freire's (1970) theory that educators should consider each student's reality and praxis and are responsible for facilitating student self-awareness. Important characteristics in the social construction of students included independence, an appropriate attitude, an understanding of Deaf culture, ethical behavior, community involvement, and a willingness to pursue lifelong learning.
Ferjan Ramirez, Naja; Leonard, Matthew K; Davenport, Tristan S; Torres, Christina; Halgren, Eric; Mayberry, Rachel I
One key question in neurolinguistics is the extent to which the neural processing system for language requires linguistic experience during early life to develop fully. We conducted a longitudinal anatomically constrained magnetoencephalography (aMEG) analysis of lexico-semantic processing in 2 deaf adolescents who had no sustained language input until 14 years of age, when they became fully immersed in American Sign Language. After 2 to 3 years of language, the adolescents' neural responses to signed words were highly atypical, localizing mainly to right dorsal frontoparietal regions and often responding more strongly to semantically primed words (Ferjan Ramirez N, Leonard MK, Torres C, Hatrak M, Halgren E, Mayberry RI. 2014. Neural language processing in adolescent first-language learners. Cereb Cortex. 24 (10): 2772-2783). Here, we show that after an additional 15 months of language experience, the adolescents' neural responses remained atypical in terms of polarity. While their responses to less familiar signed words still showed atypical localization patterns, the localization of responses to highly familiar signed words became more concentrated in the left perisylvian language network. Our findings suggest that the timing of language experience affects the organization of neural language processing; however, even in adolescence, language representation in the human brain continues to evolve with experience.
Tyrone, Martha E; Mauk, Claude E
Because the primary articulators for sign languages are the hands, sign phonology and phonetics have focused mainly on them and treated other articulators as passive targets. However, there is abundant research on the role of nonmanual articulators in sign language grammar and prosody. The current study examines how hand and head/body movements are coordinated to realize phonetic targets. Kinematic data were collected from 5 deaf American Sign Language (ASL) signers to allow the analysis of movements of the hands, head and body during signing. In particular, we examine how the chin, forehead and torso move during the production of ASL signs at those three phonological locations. Our findings suggest that for signs with a lexical movement toward the head, the forehead and chin move to facilitate convergence with the hand. By comparison, the torso does not move to facilitate convergence with the hand for signs located at the torso. These results imply that the nonmanual articulators serve a phonetic as well as a grammatical or prosodic role in sign languages. Future models of sign phonetics and phonology should take into consideration the movements of the nonmanual articulators in the realization of signs.
Sherman, Judy; Torres-Crespo, Marisel N.
Capitalizing on preschoolers' inherent enthusiasm and capacity for learning, the authors developed and implemented a dual-language program to enable young children to experience diversity and multiculturalism by learning two new languages: Spanish and American Sign Language. Details of the curriculum, findings, and strategies are shared.
In the communication of deaf people among themselves and with hearing people, there are three basic modes of interaction: gesture, finger signs, and writing. The gesture is a conventionally agreed manner of communication with the help of the hands, accompanied by facial and body mimicry. Gesture and movement pre-exist speech; their original purpose was to mark something, and later to emphasize spoken expression. Stokoe was the first linguist to realize that signs are not unanalyzable wholes. He analyzed signs into smaller meaningless parts that he called "cheremes", which many linguists today call phonemes. He created three main phoneme categories: hand shape, location, and movement. Sign languages, like spoken languages, have a background in the distant past. They developed in parallel with spoken language and underwent many historical changes. Therefore, today they are not a replacement for spoken language but are languages in their own right, in the real sense of the word. Although the structure of the English language used in the USA and in Great Britain is the same, their sign languages, ASL and BSL, are different.
McKee, Michael M; Winters, Paul C; Sen, Ananda; Zazove, Philip; Fiscella, Kevin
Deaf American Sign Language (ASL) users comprise a linguistic minority population with poor health care access due to communication barriers and low health literacy. Potentially, these health care barriers could increase Emergency Department (ED) use. To compare ED use between deaf and non-deaf patients. A retrospective cohort from medical records. The sample was derived from 400 randomly selected charts (200 deaf ASL users and 200 hearing English speakers) from an outpatient primary care health center with a high volume of deaf patients. Abstracted data included patient demographics, insurance, health behavior, and ED use in the past 36 months. Deaf patients were more likely to be never smokers and be insured through Medicaid. In an adjusted analysis, deaf individuals were significantly more likely to use the ED (odds ratio [OR], 1.97; 95% confidence interval [CI], 1.11-3.51) over the prior 36 months. Deaf American Sign Language users appear to be at greater odds for elevated ED utilization when compared to the general hearing population. Efforts to further understand the drivers for increased ED utilization among deaf ASL users are much needed.
Bosworth, Rain G.; Emmorey, Karen
Iconicity is a property that pervades the lexicon of many sign languages, including American Sign Language (ASL). Iconic signs exhibit a motivated, nonarbitrary mapping between the form of the sign and its meaning. We investigated whether iconicity enhances semantic priming effects for ASL and whether iconic signs are recognized more quickly than…
Hoemann, Harry W.; Kreske, Catherine M.
Describes a study that found, contrary to previous reports, that a strong, symmetrical release from proactive interference (PI) is the normal outcome for switches between American Sign Language (ASL) signs and English words and with switches between Manual and English alphabet characters. Subjects were college students enrolled in their first ASL…
McKee, Michael M.; Paasche-Orlow, Michael; Winters, Paul C.; Fiscella, Kevin; Zazove, Philip; Sen, Ananda; Pearson, Thomas
Communication and language barriers isolate Deaf American Sign Language (ASL) users from mass media, healthcare messages, and health care communication, which, when coupled with social marginalization, places them at a high risk for inadequate health literacy. Our objectives were to translate, adapt, and develop an accessible health literacy instrument in ASL and to assess the prevalence and correlates of inadequate health literacy among Deaf ASL users and hearing English speakers using a cross-sectional design. A total of 405 participants (166 Deaf and 239 hearing) were enrolled in the study. The Newest Vital Sign was adapted, translated, and developed into an ASL version of the NVS (ASL-NVS). Forty-eight percent of Deaf participants had inadequate health literacy, and Deaf individuals were 6.9 times more likely than hearing participants to have inadequate health literacy. The new ASL-NVS, available on a self-administered computer platform, demonstrated good correlation with reading literacy. The prevalence of Deaf ASL users with inadequate health literacy is substantial, warranting further interventions and research.
The American Sign Language Sentence Reproduction Test (ASL-SRT) requires the precise reproduction of a series of ASL sentences increasing in complexity and length. Error analyses of such tasks provide insight into working memory and scaffolding processes. Data were collected from three groups expected to differ in fluency: deaf children, deaf adults, and hearing adults, all users of ASL. Quantitative (correct/incorrect recall) and qualitative error analyses were performed. Percent correct on the reproduction task supports its sensitivity to fluency, as test performance clearly differed across the three groups studied. A linguistic analysis of errors further documented differing strategies and biases across groups. Subjects' recall reflected the affordances and constraints of deep linguistic representations to differing degrees, with subjects resorting to alternate processing strategies in the absence of linguistic knowledge. A qualitative error analysis allows us to capture generalizations about the relationship between error patterns and the cognitive scaffolding that governs the sentence reproduction process. Highly fluent signers and less fluent signers share common chokepoints on particular words in sentences. However, they diverge in heuristic strategy. Fluent signers, when they make an error, tend to preserve semantic details while altering morpho-syntactic domains. They produce syntactically correct sentences with meaning equivalent to the to-be-reproduced one, but these are not verbatim reproductions of the original sentence. In contrast, less fluent signers tend to use a more linear strategy, preserving lexical status and word ordering while omitting local inflections, and occasionally resorting to visuo-motoric imitation. Thus, whereas fluent signers readily use top-down scaffolding in their working memory, less fluent signers fail to do so. Implications for current models of working memory across spoken and signed modalities are…
Beal-Alvarez, Jennifer S.
This article presents the receptive and expressive American Sign Language skills of 85 students, 6 through 22 years of age, at a residential school for the deaf, as measured by the American Sign Language Receptive Skills Test and the Ozcaliskan Motion Stimuli. Results are presented by age and indicate that students' receptive skills increased with age and…
Kushalnagar, Poorna; Naturale, Joan; Paludneviciene, Raylene; Smith, Scott R.; Werfel, Emily; Doolittle, Richard; Jacobs, Stephen; DeCaro, James
To date, there have been efforts towards creating better health information access for Deaf American Sign Language (ASL) users. However, the usability of websites providing access to health information in ASL has not been evaluated. Our paper focuses on the usability of four health websites that include ASL videos. We sought to obtain ASL users' perspectives on navigating these ASL-accessible websites, finding the health information they needed, and the perceived ease of understanding ASL video content. ASL users (N=32) were instructed to find specific information on four ASL-accessible websites and answered questions related to: 1) navigation to find the task, 2) website usability, and 3) ease of understanding ASL video content for each of the four websites. Participants also gave feedback on what they would like to see in an ASL health library website, including the benefit of adding captioning and/or a signer model to the medical illustrations in health videos. Participants who had lower health literacy had greater difficulty finding information on ASL-accessible health websites. This paper also describes the participants' preferences for an ideal ASL-accessible health website and concludes with a discussion of the role of accessible websites in promoting health literacy in ASL users.
Mann, Wolfgang; Peña, Elizabeth D; Morgan, Gary
This research explored the use of dynamic assessment (DA) for language-learning abilities in signing deaf children from deaf and hearing families. Thirty-seven deaf children, aged 6 to 11 years, were identified as either stronger (n = 26) or weaker (n = 11) language learners according to teacher or speech-language pathologist report. All children received 2 scripted, mediated learning experience sessions targeting vocabulary knowledge—specifically, the use of semantic categories that were carried out in American Sign Language. Participant responses to learning were measured in terms of an index of child modifiability. This index was determined separately at the end of the 2 individual sessions. It combined ratings reflecting each child's learning abilities and responses to mediation, including social-emotional behavior, cognitive arousal, and cognitive elaboration. Group results showed that modifiability ratings were significantly better for stronger language learners than for weaker language learners. The strongest predictors of language ability were cognitive arousal and cognitive elaboration. Mediator ratings of child modifiability (i.e., combined score of social-emotional factors and cognitive factors) are highly sensitive to language-learning abilities in deaf children who use sign language as their primary mode of communication. This method can be used to design targeted interventions.
Heiman, Erica; Haynes, Sharon; McKee, Michael
Little is known about the sexual health behaviors of Deaf American Sign Language (ASL) users. We sought to characterize the self-reported sexual behaviors of Deaf individuals. Responses from 282 Deaf participants aged 18-64 from the greater Rochester, NY area who participated in the 2008 Deaf Health Survey were analyzed. These data were compared with weighted data from a general population comparison group (N = 1890). We looked at four sexual health-related outcomes: abstinence within the past year; number of sexual partners within the last year; condom use at last intercourse; and ever having been tested for HIV. We performed descriptive analyses, including stratification by gender, age, income, marital status, and educational level. Deaf respondents were more likely than the general population respondents to self-report two or more sexual partners in the past year (30.9% vs 10.1%) but self-reported higher condom use at last intercourse (28.0% vs 19.8%). HIV testing rates were similar between groups (47.5% vs 49.4%) but lower for certain Deaf groups: Deaf women (46.0% vs 58.1%), lower-income Deaf respondents (44.4% vs 69.7%), and less educated Deaf respondents (31.3% vs 57.7%) than among the corresponding general population groups. In summary, Deaf respondents self-reported higher numbers of sexual partners over the past year compared to the general population, and condom use was higher among Deaf participants. Overall HIV testing rates were similar between groups, though testing was significantly lower among lower-income, less well-educated, and female Deaf respondents. Deaf individuals have a sexual health risk profile that is distinct from that of the general population.
Lieberman, Amy M; Borovsky, Arielle; Mayberry, Rachel I
Prediction during sign language comprehension may enable signers to integrate linguistic and non-linguistic information within the visual modality. In two eyetracking experiments, we investigated American Sign Language (ASL) semantic prediction in deaf adults and children (aged 4-8 years). Participants viewed ASL sentences in a visual world paradigm in which the sentence-initial verb was either neutral or constrained relative to the sentence-final target noun. Adults and children made anticipatory looks to the target picture before the onset of the target noun in the constrained condition only, showing evidence for semantic prediction. Crucially, signers alternated gaze between the stimulus sign and the target picture only when the sentential object could be predicted from the verb. Signers therefore engage in prediction by optimizing visual attention between divided linguistic and referential signals. These patterns suggest that prediction is a modality-independent process, and theoretical implications are discussed.
Mounty, Judith L.; Pucci, Concetta T.; Harmon, Kristen C.
A primary tenet underlying American Sign Language/English bilingual education for deaf students is that early access to a visual language, developed in conjunction with language planning principles, provides a foundation for literacy in English. The goal of this study is to obtain an emic perspective on bilingual deaf readers transitioning from…
Aleksandra KAROVSKA RISTOVSKA
Aleksandra Karovska Ristovska, M.A. in special education and rehabilitation sciences, defended her doctoral thesis on 9 March 2014 at the Institute of Special Education and Rehabilitation, Faculty of Philosophy, University “Ss. Cyril and Methodius” in Skopje, before a commission composed of Prof. Zora Jachova, PhD; Prof. Jasmina Kovachevikj, PhD; Prof. Ljudmil Spasov, PhD; Prof. Goran Ajdinski, PhD; and Prof. Daniela Dimitrova Radojicikj, PhD. Macedonian Sign Language is a natural language used by the Deaf community in the Republic of Macedonia. This doctoral thesis analyzed the characteristics of Macedonian Sign Language (MSL): its phonology, morphology, and syntax, and compared Macedonian and American Sign Language. William Stokoe was the first to undertake research on American Sign Language, in the 1960s, laying the foundation for linguistic research on sign languages. The analysis of signs in Macedonian Sign Language was made according to Stokoe's parameters: location, hand shape, and movement. Lexicostatistics showed that MSL and ASL belong to different language families. Despite this, they share some iconic signs, whose presence can be attributed to lexical borrowing. Phonologically, in both ASL and MSL, changing one of Stokoe's categories changes the meaning of the word as well. Non-manual signs, which serve as grammatical markers in sign languages, are identical in ASL and MSL. The production of compounds and of plural forms is identical in both sign languages, as is verb inflection. The research showed that the most common word order in ASL and MSL is SVO (subject-verb-object), while SOV and OVS orders occur only rarely. Questions and negative sentences are produced identically in ASL and MSL.
Quinto-Pozos, David; Singleton, Jenny L; Hauser, Peter C
This article describes the case of a deaf native signer of American Sign Language (ASL) with a specific language impairment (SLI). School records documented normal cognitive development but atypical language development. Data include school records; interviews with the child, his mother, and school professionals; ASL and English evaluations; and a comprehensive neuropsychological and psychoeducational evaluation, and they span an approximate period of 7.5 years (11;10-19;6) including scores from school records (11;10-16;5) and a 3.5-year period (15;10-19;6) during which we collected linguistic and neuropsychological data. Results revealed that this student has average intelligence, intact visual perceptual skills, visuospatial skills, and motor skills but demonstrates challenges with some memory and sequential processing tasks. Scores from ASL testing signaled language impairment and marked difficulty with fingerspelling. The student also had significant deficits in English vocabulary, spelling, reading comprehension, reading fluency, and writing. Accepted SLI diagnostic criteria exclude deaf individuals from an SLI diagnosis, but the authors propose modified criteria in this work. The results of this study have practical implications for professionals including school psychologists, speech language pathologists, and ASL specialists. The results also support the theoretical argument that SLI can be evident regardless of the modality in which it is communicated.
Williams, Joshua T; Newman, Sharlene D
The roles of visual sonority and handshape markedness in sign language acquisition and production were investigated. In Experiment 1, learners were taught sign-nonobject correspondences that varied in sign movement sonority and handshape markedness. Results from a sign-picture matching task revealed that high sonority signs were more accurately matched, especially when the sign contained a marked handshape. In Experiment 2, learners produced these familiar signs in addition to novel signs, which differed based on sonority and markedness. Results from a key-release reaction time reproduction task showed that learners tended to produce high sonority signs much more quickly than low sonority signs, especially when the sign contained an unmarked handshape. This effect was only present in familiar signs. Sign production accuracy rates revealed that high sonority signs were more accurate than low sonority signs. Similarly, signs with unmarked handshapes were produced more accurately than those with marked handshapes. Together, results from Experiments 1 and 2 suggested that signs that contain high sonority movements are more easily processed, both perceptually and productively, and handshape markedness plays a differential role in perception and production. © The Author 2015. Published by Oxford University Press. All rights reserved.
We describe here the characteristics of a very frequently occurring ASL indefinite focus particle, which has not previously been recognized as such. We show that, despite its similarity to the question sign "WHAT", the particle is distinct from that sign in terms of articulation, function, and distribution. The particle serves to express "uncertainty" in various ways, which can be formalized semantically in terms of a domain-widening effect of the same sort as that proposed for English "any" by Kadmon & Landman (1993). Its function is to widen the domain of possibilities under consideration from the typical to include the non-typical as well, along a dimension appropriate in the context.
Schneider, Erin; Kozak, L. Viola; Santiago, Roberto; Stephen, Anika
Technological and language innovation often flow in concert with one another. Casual observation by researchers has shown that electronic communication memes, in the form of abbreviations, have found their way into spoken English. This study focuses on the current use of electronic modes of communication, such as smartphones and e-mail, and…
Bailes, Cynthia Neese; Erting, Lynne C.; Thumann-Prezioso, Carlene; Erting, Carol J.
This longitudinal case study examined the language and literacy acquisition of a Deaf child as mediated by her signing Deaf parents during her first three years of life. Results indicate that the parents' interactions with their child were guided by linguistic and cultural knowledge that produced an intuitive use of child-directed signing (CDSi)…
Beal-Alvarez, Jennifer S.
This article presents results of a longitudinal study of receptive American Sign Language (ASL) skills for a large portion of the student body at a residential school for the deaf across four consecutive years. Scores were analyzed by age, gender, parental hearing status, years attending the residential school, and presence of a disability (i.e.,…
Almeida, Diogo; Poeppel, David; Corina, David
The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied by a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.
MacDonald, Kyle; LaMarr, Todd; Corina, David; Marchman, Virginia A; Fernald, Anne
When children interpret spoken language in real time, linguistic information drives rapid shifts in visual attention to objects in the visual world. This language-vision interaction can provide insights into children's developing efficiency in language comprehension. But how does language influence visual attention when the linguistic signal and the visual world are both processed via the visual channel? Here, we measured eye movements during real-time comprehension of a visual-manual language, American Sign Language (ASL), by 29 native ASL-learning children (16-53 mos, 16 deaf, 13 hearing) and 16 fluent deaf adult signers. All signers showed evidence of rapid, incremental language comprehension, tending to initiate an eye movement before sign offset. Deaf and hearing ASL-learners showed similar gaze patterns, suggesting that the in-the-moment dynamics of eye movements during ASL processing are shaped by the constraints of processing a visual language in real time and not by differential access to auditory information in day-to-day life. Finally, variation in children's ASL processing was positively correlated with age and vocabulary size. Thus, despite competition for attention within a single modality, the timing and accuracy of visual fixations during ASL comprehension reflect information processing skills that are important for language acquisition regardless of language modality. © 2018 John Wiley & Sons Ltd.
Cannon, Joanna E.; Fredrick, Laura D.; Easterbrooks, Susan R.
Reading to children improves vocabulary acquisition through incidental exposure, and it is a best practice for parents and teachers of children who can hear. Children who are deaf or hard of hearing are at risk of missing such incidental vocabulary learning. This article describes a procedure for using books read on DVD in American Sign Language with…
Dye, Matthew W G; Seymour, Jenessa L; Hauser, Peter C
Deafness results in cross-modal plasticity, whereby visual functions are altered as a consequence of a lack of hearing. Here, we present a reanalysis of data originally reported by Dye et al. (PLoS One 4(5):e5640, 2009) with the aim of testing additional hypotheses concerning the spatial redistribution of visual attention due to deafness and the use of a visuogestural language (American Sign Language). By looking at the spatial distribution of errors made by deaf and hearing participants performing a visuospatial selective attention task, we sought to determine whether there was evidence for (1) a shift in the hemispheric lateralization of visual selective function as a result of deafness, and (2) a shift toward attending to the inferior visual field in users of a signed language. While no evidence was found for or against a shift in lateralization of visual selective attention as a result of deafness, a shift in the allocation of attention from the superior toward the inferior visual field was inferred in native signers of American Sign Language, possibly reflecting an adaptation to the perceptual demands imposed by a visuogestural language.
Grosvald, Michael; Gutierrez, Eva; Hafer, Sarah; Corina, David
A fundamental advance in our understanding of human language would come from a detailed account of how non-linguistic and linguistic manual actions are differentiated in real time by language users. To explore this issue, we targeted the N400, an ERP component known to be sensitive to semantic context. Deaf signers saw 120 American Sign Language sentences, each consisting of a "frame" (a sentence without the last word; e.g. BOY SLEEP IN HIS) followed by a "last item" belonging to one of four categories: a high-close-probability sign (a "semantically reasonable" completion to the sentence; e.g. BED), a low-close-probability sign (a real sign that is nonetheless a "semantically odd" completion to the sentence; e.g. LEMON), a pseudo-sign (phonologically legal but non-lexical form), or a non-linguistic grooming gesture (e.g. the performer scratching her face). We found significant N400-like responses in the incongruent and pseudo-sign contexts, while the gestures elicited a large positivity. Copyright © 2012 Elsevier Inc. All rights reserved.
Mann, Wolfgang; Peña, Elizabeth D; Morgan, Gary
We describe a model for assessment of lexical-semantic organization skills in American Sign Language (ASL) within the framework of dynamic vocabulary assessment and discuss the applicability and validity of the use of mediated learning experiences (MLE) with deaf signing children. Two elementary students (ages 7;6 and 8;4) completed a set of four vocabulary tasks and received two 30-minute mediations in ASL. Each session consisted of several scripted activities focusing on the use of categorization. Both had experienced difficulties in providing categorically related responses in one of the vocabulary tasks used previously. Results showed that the two students exhibited notable differences with regard to their learning pace, information uptake, and effort required by the mediator. Furthermore, we observed signs of a shift in strategic behavior by the lower performing student during the second mediation. Results suggest that the use of dynamic assessment procedures in a vocabulary context was helpful in understanding children's strategies as related to learning potential. These results are discussed in terms of deaf children's cognitive modifiability with implications for planning instruction and how MLE can be used with a population that uses ASL. The reader will (1) recognize the challenges in appropriate language assessment of deaf signing children; (2) recall the three areas explored to investigate whether a dynamic assessment approach is sensitive to differences in deaf signing children's language learning profiles; and (3) discuss how dynamic assessment procedures can make deaf signing children's individual language learning differences visible. Copyright © 2014 Elsevier Inc. All rights reserved.
Maller, S; Singleton, J; Supalla, S; Wix, T
We describe the procedures for constructing an instrument designed to evaluate children's proficiency in American Sign Language (ASL). The American Sign Language Proficiency Assessment (ASL-PA) is a much-needed tool that potentially could be used by researchers, language specialists, and qualified school personnel. A half-hour ASL sample is collected on video from a target child (between ages 6 and 12) across three separate discourse settings and is later analyzed and scored by an assessor who is highly proficient in ASL. After the child's language sample is scored, he or she can be assigned an ASL proficiency rating of Level 1, 2, or 3. At this phase in its development, substantial evidence of reliability and validity has been obtained for the ASL-PA using a sample of 80 profoundly deaf children (ages 6-12) of varying ASL skill levels. The article first explains the item development and administration of the ASL-PA instrument, then describes the empirical item analysis, standard setting procedures, and evidence of reliability and validity. The ASL-PA is a promising instrument for assessing elementary school-age children's ASL proficiency. Plans for further development are also discussed.
Beal-Alvarez, Jennifer S.; Figueroa, Daileen M.
Two key areas of language development include semantic and phonological knowledge. Semantic knowledge relates to word and concept knowledge. Phonological knowledge relates to how language parameters combine to create meaning. We investigated signing deaf adults' and children's semantic and phonological sign generation via one-minute tasks,…
Lieberman, Amy M
Visual attention is a necessary prerequisite to successful communication in sign language. The current study investigated the development of attention-getting skills in deaf native-signing children during interactions with peers and teachers. Seven deaf children (aged 21-39 months) and five adults were videotaped during classroom activities for approximately 30 hr. Interactions were analyzed in depth to determine how children obtained and maintained attention. Contrary to previous reports, children were found to possess a high level of communicative competence from an early age. Analysis of peer interactions revealed that children used a range of behaviors to obtain attention with peers, including taps, waves, objects, and signs. Initiations were successful approximately 65% of the time. Children followed up failed initiation attempts by repeating the initiation, using a new initiation, or terminating the interaction. Older children engaged in longer and more complex interactions than younger children. Children's early exposure to and proficiency in American Sign Language is proposed as a likely mechanism that facilitated their communicative competence.
Olson, Andrea M; Swabey, Laurie
Despite federal laws that mandate equal access and communication in all healthcare settings for deaf people, consistent provision of quality interpreting in healthcare settings is still not a reality, as recognized by deaf people and American Sign Language (ASL)-English interpreters. The purpose of this study was to better understand the work of ASL interpreters employed in healthcare settings, which can then inform the training and credentialing of interpreters, with the ultimate aim of improving the quality of healthcare and communication access for deaf people. Based on job analysis, researchers designed an online survey with 167 task statements representing 44 categories. American Sign Language interpreters (N = 339) rated the importance of, and frequency with which they performed, each of the 167 tasks. Categories with the highest average importance ratings included language and interpreting, situation assessment, ethical and professional decision making, managing the discourse, and monitoring, managing, and/or coordinating appointments. Categories with the highest average frequency ratings included the following: dress appropriately, adapt to a variety of physical settings and locations, adapt to working with a variety of providers in a variety of roles, deal with uncertain and unpredictable work situations, and demonstrate cultural adaptability. To achieve health equity for the deaf community, the training and credentialing of interpreters needs to be systematically addressed.
Thompson, Robin L.; Vinson, David P.; Vigliocco, Gabriella
Signed languages exploit iconicity (the transparent relationship between meaning and form) to a greater extent than spoken languages, where it is largely limited to onomatopoeia. In a picture-sign matching experiment measuring reaction times, the authors examined the potential advantage of iconicity both for 1st- and 2nd-language learners of…
Hansen, Eric G; Loew, Ruth C; Laitusis, Cara C; Kushalnagar, Poorna; Pagliaro, Claudia M; Kurz, Christopher
There is considerable interest in determining whether high-quality American Sign Language videos can be used as an accommodation in tests of mathematics at both K-12 and postsecondary levels; and in learning more about the usability (e.g., comprehensibility) of ASL videos with two different types of signers - avatar (animated figure) and human. The researchers describe the results of administering each of nine pre-college mathematics items in both avatar and human versions to each of 31 Deaf participants with high school and post-high school backgrounds. This study differed from earlier studies by obliging the participants to rely on the ASL videos to answer the items. While participants preferred the human version over the avatar version (apparently due largely to the better expressiveness and fluency of the human), there was no discernible relationship between mathematics performance and signed version.
Background: Health care providers commonly discuss depressive symptoms with clients, enabling earlier intervention. Such discussions rarely occur between providers and Deaf clients. Most culturally Deaf adults experience early-onset hearing loss, self-identify as part of a unique culture, and communicate in the visual language of American Sign Language (ASL). Communication barriers abound, and depression screening instruments may be unreliable. Objective: To train and use ASL interpreters for a qualitative study describing depressive symptoms among Deaf adults. Methods: Training included research versus community interpreting. During data collection, interpreters translated to and from voiced English and ASL. Results: Training eliminated potential problems during data collection. Unexpected issues included participants asking for "my interpreter" and worrying about confidentiality or friendship in a small community. Conclusions: Lessons learned included the value of careful training of interpreters prior to initiating data collection, including resolution of possible role conflicts and ensuring conceptual equivalence in real-time interpreting.
Emmorey, Karen; Thompson, Robin; Colvin, Rachael
An eye-tracking experiment investigated where deaf native signers (N = 9) and hearing beginning signers (N = 10) look while comprehending a short narrative and a spatial description in American Sign Language produced live by a fluent signer. Both groups fixated primarily on the signer's face (more than 80% of the time) but differed with respect to fixation location. Beginning signers fixated on or near the signer's mouth, perhaps to better perceive English mouthing, whereas native signers tended to fixate on or near the eyes. Beginning signers shifted gaze away from the signer's face more frequently than native signers, but the pattern of gaze shifts was similar for both groups. When a shift in gaze occurred, the sign narrator was almost always looking at his or her hands and was most often producing a classifier construction. We conclude that joint visual attention and attention to mouthing (for beginning signers), rather than linguistic complexity or processing load, affect gaze fixation patterns during sign language comprehension.
Meade, Gabriela; Midgley, Katherine J; Sevcikova Sehyr, Zed; Holcomb, Phillip J; Emmorey, Karen
In an implicit phonological priming paradigm, deaf bimodal bilinguals made semantic relatedness decisions for pairs of English words. Half of the semantically unrelated pairs had phonologically related translations in American Sign Language (ASL). As in previous studies with unimodal bilinguals, targets in pairs with phonologically related translations elicited smaller negativities than targets in pairs with phonologically unrelated translations within the N400 window. This suggests that the same lexicosemantic mechanism underlies implicit co-activation of a non-target language, irrespective of language modality. In contrast to unimodal bilingual studies that find no behavioral effects, we observed phonological interference, indicating that bimodal bilinguals may not suppress the non-target language as robustly. Further, there was a subset of bilinguals who were aware of the ASL manipulation (determined by debriefing), and they exhibited an effect of ASL phonology in a later time window (700-900 ms). Overall, these results indicate modality-independent language co-activation that persists longer for bimodal bilinguals. Copyright © 2017 Elsevier Inc. All rights reserved.
Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.
In this reply to Salverda (2016), we address a critique of the claims made in our recent study of real-time processing of American Sign Language (ASL) signs using a novel visual world eye-tracking paradigm (Lieberman, Borovsky, Hatrak, & Mayberry, 2015). Salverda asserts that our data do not support our conclusion that native signers and…
This paper reports the design and analysis of an American Sign Language (ASL) alphabet translation system implemented in hardware using a Field-Programmable Gate Array (FPGA). The system process consists of three stages, the first being communication with the neuromorphic camera (also called a Dynamic Vision Sensor, DVS) using the Universal Serial Bus protocol. The second stage is feature extraction from the events generated by the DVS, consisting of digital image processing algorithms developed in software that aim to reduce redundant information and prepare the data for the third stage. The last stage is classification of the ASL alphabet, achieved with a single artificial neural network implemented in digital hardware for higher speed. The overall result is a classification system based on the contours of ASL signs, fully implemented in a reconfigurable device. The experimental results consist of a comparative analysis of the recognition rate among the alphabet signs captured with the neuromorphic camera, in order to verify the proper operation of the digital image processing algorithms. In experiments performed with 720 samples of 24 signs, a recognition accuracy of 79.58% was obtained.
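The three-stage pipeline described in this abstract (DVS events → feature extraction → neural-network classification) can be sketched in software. The sketch below is a hypothetical Python illustration only: the actual system runs in FPGA hardware, event accumulation here stands in for the paper's contour-based feature extraction, and the tiny untrained feedforward network stands in for the hardware classifier. All function names, dimensions, and parameters are assumptions, not taken from the paper.

```python
import numpy as np

def extract_features(events, grid=8):
    """Stage 2 stand-in: accumulate DVS events (x, y in [0, 1)) into a
    coarse occupancy histogram, then normalize so features sum to 1."""
    hist = np.zeros((grid, grid))
    for x, y in events:
        hist[int(y * grid) % grid, int(x * grid) % grid] += 1
    total = hist.sum()
    return (hist / total).ravel() if total else hist.ravel()

class TinyANN:
    """Stage 3 stand-in: a single feedforward network that maps a feature
    vector to one of 24 static ASL alphabet signs (J and Z involve motion
    and are commonly excluded from static-sign classifiers)."""
    def __init__(self, n_in, n_hidden=32, n_out=24, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.w2 = rng.normal(0, 0.1, (n_hidden, n_out))

    def predict(self, features):
        h = np.tanh(features @ self.w1)   # hidden layer
        logits = h @ self.w2              # output scores, one per sign
        return int(np.argmax(logits))     # index of the predicted letter
```

In the reported system, all three stages run in the reconfigurable device and the network weights would be trained on labeled sign samples; this sketch only shows the data flow from event stream to class index.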
Henner, Jon; Caldwell-Harris, Catherine L; Novogrodsky, Rama; Hoffmeister, Robert
Failing to acquire language in early childhood because of language deprivation is a rare and exceptional event, except in one population. Deaf children who grow up without access to indirect language through listening, speech-reading, or sign language experience language deprivation. Studies of Deaf adults have revealed that late acquisition of sign language is associated with lasting deficits. However, much remains unknown about language deprivation in Deaf children, allowing myths and misunderstandings regarding sign language to flourish. To fill this gap, we examined signing ability in a large naturalistic sample of Deaf children attending schools for the Deaf where American Sign Language (ASL) is used by peers and teachers. Ability in ASL was measured using a syntactic judgment test and a language-based analogical reasoning test, which are two sub-tests of the ASL Assessment Inventory. The influence of two age-related variables was examined: whether or not ASL was acquired from birth in the home from one or more Deaf parents, and the age of entry to the school for the Deaf. Note that for non-native signers, this latter variable is often the age of first systematic exposure to ASL. Both of these types of age-dependent language experiences influenced subsequent signing ability. Scores on the two tasks declined with increasing age of school entry. The influence of age of starting school was not linear. Test scores were generally lower for Deaf children who entered the school of assessment after the age of 12. The positive influence of signing from birth was found for students at all ages tested (7;6-18;5 years old) and for children of all age-of-entry groupings. Our results reflect a continuum of outcomes which show that experience with language is a continuous variable that is sensitive to maturational age.
Hintz, Eric G.; Jones, Michael; Lawler, Jeannette; Bench, Nathan
A traditional accommodation for the deaf or hard-of-hearing in a planetarium show is some type of captioning system or a signer on the floor. Both of these have significant drawbacks given the nature of a planetarium show. Young audience members who are deaf likely don't have the reading skills needed to make a captioning system effective. A signer on the floor requires light which can then splash onto the dome. We have examined the potential of using a Head-Mounted Display (HMD) to provide an American Sign Language (ASL) translation. Our preliminary test used a canned planetarium show with a pre-recorded sound track. Since many astronomical objects don't have official ASL signs, the signer had to use classifiers to describe the different objects. Since these are not official signs, these classifiers provided a way to test to see if students were picking up the information using the HMD. We will present results that demonstrate that the use of HMDs is at least as effective as projecting a signer on the dome. This also showed that the HMD could provide the necessary accommodation for students for whom captioning was ineffective. We will also discuss the current effort to provide a live signer without the light splash effect and our early results on teaching effectiveness with HMDs. This work is partially supported by funding from the National Science Foundation grant IIS-1124548 and the Sorenson Foundation.
McKee, Michael; Thew, Denise; Starr, Matthew; Kushalnagar, Poorna; Reid, John T.; Graybill, Patrick; Velasquez, Julia; Pearson, Thomas
Background: Numerous publications demonstrate the importance of community-based participatory research (CBPR) in community health research, but few target the Deaf community. The Deaf community is understudied and underrepresented in health research despite suspected health disparities and communication barriers. Objectives: The goal of this paper is to share the lessons learned from the implementation of CBPR in an understudied community of Deaf American Sign Language (ASL) users in the greater Rochester, New York, area. Methods: We review the process of CBPR in a Deaf ASL community and identify the lessons learned. Results: Key CBPR lessons include the importance of engaging and educating the community about research, ensuring that research benefits the community, using peer-based recruitment strategies, and sustaining community partnerships. These lessons informed subsequent research activities. Conclusions: This report focuses on the use of CBPR principles in a Deaf ASL population; lessons learned can be applied to research with other challenging-to-reach populations. PMID:22982845
This article ethnographically explores how American Sign Language-English interpreting students negotiate and foreground different kinds of relationships to claim legitimacy in relation to deaf people and the deaf community. As the field of interpreting is undergoing shifts from community interpreting to professionalization, interpreting students endeavor to legitimize their involvement in the field. Students create distinction between themselves and other students through relational work that involves positive and negative interpretation of kinship terms. In analyzing interpreting students' gate-keeping practices, this article explores the categories and definitions used by interpreting students and argues that there is category trouble that occurs. Identity and kinship categories are not nuanced or critically interrogated, resulting in deaf people and interpreters being represented in static ways.
Hiddinga, A.; Crasborn, O.
Deaf people who form part of a Deaf community communicate using a shared sign language. When meeting people from another language community, they can fall back on a flexible and highly context-dependent form of communication called international sign, in which shared elements from their own sign
Over the years attempts have been made to standardize sign languages. This form of language planning has been tackled by a variety of agents, most notably teachers of Deaf students, social workers, government agencies, and occasionally groups of Deaf people themselves. Their efforts have most often involved the development of sign language books…
Research on shared reading has shown positive results on children's literacy development in general and for deaf children specifically; however, reading techniques might differ between these two populations. Families with deaf children, especially those with deaf parents, often capitalize on their children's visual attributes rather than primarily auditory cues. These techniques are believed to provide a foundation for their deaf children's literacy skills. This study examined 10 deaf mother/deaf child dyads with children between 3 and 5 years of age. Dyads were videotaped in their homes on at least two occasions reading books that were provided by the researcher. Descriptive analysis showed specifically how deaf mothers mediate between the two languages, American Sign Language (ASL) and English, while reading. These techniques can be replicated and taught to all parents of deaf children so that they can engage in more effective shared reading activities. Research has shown that shared reading, or the interaction of a parent and child with a book, is an effective way to promote language and literacy, vocabulary, grammatical knowledge, and metalinguistic awareness (Snow, 1983), making it critical for educators to promote shared reading activities at home between parent and child. Not all parents read to their children in the same way. For example, parents of deaf children may present the information in the book differently due to the fact that signed languages are visual rather than spoken. In this vein, we can learn more about what specific connections deaf parents make to the English print. Exploring strategies deaf mothers may use to link the English print through the use of ASL will provide educators with additional tools when working with all parents of deaf children. This article will include a review of the literature on the benefits of shared reading activities for all children, the relationship between ASL and English skill development, and the techniques
Kamnardsiri, Teerawat; Hongsit, Ler-on; Khuwuthyakorn, Pattaraporn; Wongta, Noppon
This paper investigated students' achievement in learning American Sign Language (ASL) using two different methods. There were two groups of samples. The first, the experimental group (Group A), used game-based learning for ASL with Kinect. The second, the control group (Group B), used the traditional face-to-face learning method, generally…
Deaf people have long held the belief that American Sign Language (ASL) plays a significant role in the academic development of deaf children. Despite this, the education of deaf children has historically been exclusive of ASL and constructed as an English-only, deficit-based pedagogy. Newer research, however, finds a strong correlation between…
The language-based analogical reasoning abilities of Deaf children are a controversial topic. Researchers lack agreement about whether Deaf children possess the ability to reason using language-based analogies, or whether this ability is limited by a lack of access to vocabulary, both written and signed. This dissertation examines factors that…
Bakken Jepsen, Julie
A name sign is a personal sign assigned to deaf, hearing impaired and hearing persons who enter the deaf community. The mouth action accompanying the sign reproduces all or part of the formal first name that the person has received by baptism or naming. Name signs can be compared to nicknames in spoken languages, where a person working as a blacksmith might be referred to by his friends as 'The Blacksmith' ('Here comes the Blacksmith!') instead of by his first name. Name signs are found not only in Danish Sign Language (DSL) but in most, if not all, sign languages studied to date. This article provides examples of the creativity of the users of Danish Sign Language, including some of the processes in the use of metaphors, visual motivation and influence from Danish when name signs are created.
Pavel, M; Sperling, G; Riedl, T; Vanderbeek, A
To determine the limits of human observers' ability to identify visually presented American Sign Language (ASL), the contrast s and the amount of additive noise n in dynamic ASL images were varied independently. Contrast was tested over a 4:1 range; the rms signal-to-noise ratios (s/n) investigated were s/n = 1/4, 1/2, 1, and infinity (which is used to designate the original, uncontaminated images). Fourteen deaf subjects were tested with an intelligibility test composed of 85 isolated ASL signs, each 2-3 sec in length. For these ASL signs (64 x 96 pixels, 30 frames/sec), subjects' performance asymptotes between s/n = 0.5 and 1.0; further increases in s/n do not improve intelligibility. Intelligibility was found to depend only on s/n and not on contrast. A formulation in terms of logistic functions was proposed to derive intelligibility of ASL signs from s/n, sign familiarity, and sign difficulty. Familiarity (ignorance) is represented by additive signal-correlated noise; it represents the likelihood of a subject's knowing a particular ASL sign, and it adds to s/n. Difficulty is represented by a multiplicative difficulty coefficient; it represents the perceptual vulnerability of an ASL sign to noise and it adds to log(s/n).
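The logistic formulation described in this abstract can be sketched as follows. The functional form and every parameter value below are assumptions for illustration only, since the paper's fitted coefficients are not reproduced here; what the sketch preserves is the stated structure: familiarity acts as signal-correlated noise that adds to s/n, while difficulty is a multiplicative coefficient, entering additively on the log(s/n) axis.

```python
import math

def intelligibility(snr, familiarity=0.0, difficulty=1.0,
                    slope=2.0, midpoint=-0.7):
    """Probability of correctly identifying an ASL sign at a given rms s/n.

    Hypothetical parameterization:
    - familiarity: signal-correlated noise, added directly to s/n
    - difficulty: multiplicative coefficient; subtracting its log shifts
      the curve so that harder signs need more s/n (sign convention assumed)
    - slope, midpoint: free logistic parameters with illustrative,
      unfitted values
    """
    effective_log_snr = math.log(snr + familiarity) - math.log(difficulty)
    return 1.0 / (1.0 + math.exp(-slope * (effective_log_snr - midpoint)))
```

With these illustrative parameters the curve rises steeply below s/n = 1 and saturates gradually above it; fitting the slope and midpoint to the reported data would be needed to reproduce the observed asymptote between s/n = 0.5 and 1.0.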
Smith, Cynthia; Morgan, Robert L.
There has been an increasing number of incidents in which innocent people who use American Sign Language (ASL) or another form of sign language are victimized by gang violence due to misinterpretation of ASL hand formations. ASL is familiar to learners with a variety of disabilities, particularly those in the deaf community. The problem is that gang members have…
Andrei, Stefan; Osborne, Lawrence; Smith, Zanthia
The current learning process of Deaf or Hard of Hearing (D/HH) students taking Science, Technology, Engineering, and Mathematics (STEM) courses needs, in general, a sign interpreter for the translation of English text into American Sign Language (ASL) signs. This method is at best impractical due to the lack of availability of a specialized sign…
Pfau, R.; Steinbach, M.; Woll, B.
Sign language linguists show here that all the questions relevant to the linguistic investigation of spoken languages can be asked about sign languages. Conversely, questions that sign language linguists consider - even if spoken language researchers have not asked them yet - should also be asked of
Aleksandra KAROVSKA RISTOVSKA
Aleksandra Karovska Ristovska, M.A. in special education and rehabilitation sciences, defended her doctoral thesis on 9 March 2014 at the Institute of Special Education and Rehabilitation, Faculty of Philosophy, University “Ss. Cyril and Methodius” - Skopje, before a commission composed of: Prof. Zora Jachova, PhD; Prof. Jasmina Kovachevikj, PhD; Prof. Ljudmil Spasov, PhD; Prof. Goran Ajdinski, PhD; and Prof. Daniela Dimitrova Radojicikj, PhD. The Macedonian Sign Language is a natural ...
Fels, Deborah I.; Richards, Jan; Hardman, Jim; Lee, Daniel G.
The World Wide Web has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, Web accessing can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The…
Van Herreweghe, Mieke; Vermeerbergen, Myriam
In 1997, the Flemish Deaf community officially rejected standardisation of Flemish Sign Language. It was a bold choice, which at the time was not in line with some of the decisions taken in the neighbouring countries. In this article, we shall discuss the choices the Flemish Deaf community has made in this respect and explore why the Flemish Deaf…
Journal of Fundamental and Applied Sciences. ... SL recognition system based on the Malaysian Sign Language (MSL). Implementation results are described. Keywords: sign language; pattern classification; database.
Hirshorn, Elizabeth A.; Fernandez, Nina M.; Bavelier, Daphne
Models of working memory (WM) have been instrumental in understanding foundational cognitive processes and sources of individual differences. However, current models cannot conclusively explain the consistent group differences between deaf signers and hearing speakers on a number of short-term memory (STM) tasks. Here we take the perspective that these results are not due to a temporal order-processing deficit in deaf individuals, but rather reflect different biases in how different types of memory cues are used to do a given task. We further argue that the main driving force behind the shifts in relative biasing is a consequence of language modality (sign vs. speech) and the processing they afford, and not deafness, per se. PMID:22871205
de Vos, C.; Pfau, R.
Since the 1990s, the field of sign language typology has shown that sign languages exhibit typological variation at all relevant levels of linguistic description. These initial typological comparisons were heavily skewed toward the urban sign languages of developed countries, mostly in the Western
Kushalnagar, Poorna; Smith, Scott; Hopper, Melinda; Ryan, Claire; Rinkevich, Micah; Kushalnagar, Raja
People with relatively limited English language proficiency find the Internet's cancer and health information difficult to access and understand. The presence of unfamiliar words and complex grammar make this particularly difficult for Deaf people. Unfortunately, current technology does not support low-cost, accurate translations of online materials into American Sign Language. However, current technology is relatively more advanced in allowing text simplification, while retaining content. This research team developed a two-step approach for simplifying cancer and other health text. They then tested the approach, using a crossover design with a sample of 36 deaf and 38 hearing college students. Results indicated that hearing college students did well on both the original and simplified text versions. Deaf college students' comprehension, in contrast, significantly benefitted from the simplified text. This two-step translation process offers a strategy that may improve the accessibility of Internet information for Deaf, as well as other low-literacy individuals.
Little attention has been given to involving the deaf community in distance teaching and learning or in designing courses that relate to their language and culture. This article reports on the design and development of video-based learning objects created to enhance the educational experiences of American Sign Language (ASL) hearing participants in a distance learning course and, following the course, the creation of several new applications for use of the learning objects. The learning objects were initially created for the web, as a course component for review and rehearsal. The value of the web application, as reported by course participants, led us to consider ways in which the learning objects could be used in a variety of delivery formats: CD-ROM, web-based knowledge repository, and handheld device. The process to create the learning objects, the new applications, and lessons learned are described.
Rodríguez Ortiz, I R
This study aims to answer the question of how much Spanish Sign Language interpreting deaf individuals really understand. The sample included 36 deaf people (with deafness ranging from severe to profound, and varying in the age at which they learned sign language) and 36 hearing people with a good knowledge of sign language (most were interpreters). Sign language comprehension was assessed using passages at a secondary-school level. After being exposed to the passages, the participants had to tell what they had understood about them, answer a set of related questions, and offer a title for each passage. Sign language comprehension by deaf participants was quite acceptable, but not as good as that of the hearing signers, who, unlike the deaf participants, were not only late learners of sign language as a second language but had also learned it through formal training.
Ten Holt, G.A.; Arendsen, J.; De Ridder, H.; Van Doorn, A.J.; Reinders, M.J.T.; Hendriks, E.A.
Current automatic sign language recognition (ASLR) seldom uses perceptual knowledge about the recognition of sign language. Using such knowledge can improve ASLR because it can give an indication which elements or phases of a sign are important for its meaning. Also, the current generation of
Schuit, J.; Baker, A.; Pfau, R.
Sign language typology is a fairly new research field and typological classifications have yet to be established. For spoken languages, these classifications are generally based on typological parameters; it would thus be desirable to establish these for sign languages. In this paper, different
In light of the absence of a codified standard variety in British Sign Language and German Sign Language ("Deutsche Gebardensprache") there have been repeated calls for the standardization of both languages primarily from outside the Deaf community. The paper is based on a recent grounded theory study which explored perspectives on sign…
Brookshire, Geoffrey; Lu, Jenny; Nusbaum, Howard C; Goldin-Meadow, Susan; Casasanto, Daniel
Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (below 8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent speakers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language; entrainment is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over the auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to informational peaks in time-varying signals.
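The abstract does not specify the "metric to quantify visual change over time." One plausible sketch uses the mean absolute pixel difference between consecutive frames, with a spectral peak to estimate the dominant fluctuation frequency; both function names and the exact metric are assumptions, not the paper's definitions:

```python
import numpy as np

def visual_change(frames):
    """Instantaneous visual change: mean absolute pixel difference
    between consecutive frames (frames shape: (n_frames, height, width))."""
    frames = np.asarray(frames, dtype=float)
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

def dominant_frequency(signal, fps):
    """Frequency (Hz) carrying the most spectral power, DC excluded
    by subtracting the mean before the FFT."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return float(freqs[int(np.argmax(spectrum))])
```

On a metric of this kind, quasiperiodic signing would show a spectral peak at a lower frequency than the sub-8 Hz envelope fluctuations typical of speech, which is the comparison the study draws.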
Ciaramello, Frank M.; Hemami, Sheila S.
Communication of American Sign Language (ASL) over mobile phones would be very beneficial to the Deaf community. ASL video encoded to achieve the rates provided by current cellular networks must be heavily compressed, and appropriate assessment techniques are required to analyze the intelligibility of the compressed video. As an extension to a purely spatial measure of intelligibility, this paper quantifies the effect of temporal compression artifacts on sign language intelligibility. These artifacts can be the result of motion-compensation errors that distract the observer or of frame rate reductions. They reduce the perception of smooth motion and disrupt the temporal coherence of the video. Motion-compensation errors that affect temporal coherence are identified by measuring the block-level correlation between co-located macroblocks in adjacent frames. The impact of frame rate reductions was quantified through experimental testing. A subjective study was performed in which fluent ASL participants rated the intelligibility of sequences encoded at 5 different frame rates and with 3 different levels of distortion. The subjective data is used to parameterize an objective intelligibility measure which is highly correlated with subjective ratings at multiple frame rates.
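The block-level temporal-coherence check described in this abstract can be approximated as follows. This is a minimal sketch assuming grayscale frames and 16x16 macroblocks, not the authors' implementation:

```python
import numpy as np

def macroblock_correlations(frame_a, frame_b, block=16):
    """Pearson correlation between co-located macroblocks in two
    adjacent frames; low values flag motion-compensation errors
    that disrupt temporal coherence."""
    h, w = frame_a.shape
    corrs = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            a = frame_a[y:y + block, x:x + block].ravel().astype(float)
            b = frame_b[y:y + block, x:x + block].ravel().astype(float)
            if a.std() == 0 or b.std() == 0:
                # Flat blocks: treat identical content as fully coherent.
                corrs.append(1.0 if np.allclose(a, b) else 0.0)
            else:
                corrs.append(float(np.corrcoef(a, b)[0, 1]))
    return corrs
```

A video-quality pipeline along these lines would aggregate the per-block correlations over time and flag frames whose co-located blocks correlate poorly with their predecessors.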
Information and research on Mongolian Sign Language is scant. To date, only one dictionary is available in the United States (Badnaa and Boll 1995), and even that dictionary presents only a subset of the signs employed in Mongolia. The present study describes the kinship system used in Mongolian Sign Language (MSL) based on data elicited from…
Mendoza, Mary Elizabeth
In the course of their work, sign language interpreters are faced with ethical dilemmas that require prioritizing competing moral beliefs and views on professional practice. Although several decision-making models exist, little research has been done on how sign language interpreters learn to identify and make ethical decisions. Through surveys and interviews on ethical decision-making, this study investigates how expert and novice interpreters discuss their ethical decision-making proces...
Schembri, Adam; Fenlon, Jordan; Cormier, Kearsy; Johnston, Trevor
This paper examines the possible relationship between proposed social determinants of morphological 'complexity' and how this contributes to linguistic diversity, specifically via the typological nature of the sign languages of deaf communities. We sketch how the notion of morphological complexity, as defined by Trudgill (2011), applies to sign languages. Using these criteria, sign languages appear to be languages with low to moderate levels of morphological complexity. This may partly reflect the influence of key social characteristics of communities on the typological nature of languages. Although many deaf communities are relatively small and may involve dense social networks (both social characteristics that Trudgill claimed may lend themselves to morphological 'complexification'), the picture is complicated by the highly variable nature of the sign language acquisition for most deaf people, and the ongoing contact between native signers, hearing non-native signers, and those deaf individuals who only acquire sign languages in later childhood and early adulthood. These are all factors that may work against the emergence of morphological complexification. The relationship between linguistic typology and these key social factors may lead to a better understanding of the nature of sign language grammar. This perspective stands in contrast to other work where sign languages are sometimes presented as having complex morphology despite being young languages (e.g., Aronoff et al., 2005); in some descriptions, the social determinants of morphological complexity have not received much attention, nor has the notion of complexity itself been specifically explored.
Zwitserlood, Inge; Kristoffersen, Jette Hedegaard; Troelsgård, Thomas
Sign language lexicography has thus far been a relatively obscure area in the world of lexicography. Therefore, this article will contain background information on signed languages and the communities in which they are used, on the lexicography of sign languages, the situation in the Netherlands as well...
Kristoffersen, Jette Hedegaard; Troelsgård, Thomas
The entries of the The Danish Sign Language Dictionary have four sections: Entry header: In this section the sign headword is shown as a photo and a gloss. The first occurring location and handshape of the sign are shown as icons. Video window: By default the base form of the sign headword...... forms of the sign (only for classifier entries). In addition to this, frequent co-occurrences with the sign are shown in this section. The signs in the The Danish Sign Language Dictionary can be looked up through: Handshape: Particular handshapes for the active and the passive hand can be specified...... to find signs that are not themselves lemmas in the dictionary, but appear in example sentences. Topic: Topics can be chosen as search criteria from a list of 70 topics....
Fisher, Jami N.
Most postsecondary American Sign Language programs have an inherent connection to their local Deaf communities and rely on the community's events to provide authentic linguistic and cultural experiences for their students. While this type of activity benefits students, there is often little effort toward meaningful engagement or attention to…
Notarrigo, Ingrid; Meurant, Laurence; Van Herreweghe, Mieke; Vermeerbergen, Myriam
Repetition was described in the nineties by a limited number of sign linguists: Vermeerbergen & De Vriendt (1994) looked at a small corpus of VGT data, Fisher & Janis (1990) analysed “verb sandwiches” in ASL, and Pinsonneault (1994) “verb echos” in Quebec Sign Language. More recently the same phenomenon has been the focus of research in a growing number of signed languages, including American (Nunes and de Quadros 2008), Hong Kong (Sze 2008), Russian (Shamaro 2008), Polish (Filipczak and Most...
Hommes, Rachel E; Borash, Amy I; Hartwig, Kari; DeGracia, Donna
Communication barriers between healthcare providers and patients contribute to health disparities and reduce the effectiveness of health promotion messages. This is especially true of communication between providers and deaf and hard of hearing (HOH) patients, owing to a lack of understanding of cultural and linguistic differences, the ineffectiveness of various means of communication, and the level of health literacy within that population. This research aimed to identify American Sign Language (ASL) interpreters' perceptions of barriers to effective communication between deaf and HOH patients and healthcare providers. We conducted a survey of ASL interpreters attending the 2015 National Symposium on Healthcare Interpreting, with an overall response rate of 25%. Results indicated a significant difference in barriers to effective communication between providers and deaf/HOH patients as perceived by interpreters. ASL interpreters observed that patients did not understand provider instructions in nearly half of appointments. Eighty-one percent of interpreters said that providers "hardly ever" use "teach-back" methods with patients to ensure understanding. Improving health care and health promotion efforts in the deaf/HOH community depends on improving communication, health literacy, and patient empowerment, and involves holding health care organizations accountable for assuring adequate staffing of ASL interpreters and communication resources in order to reduce health disparities in this population.
Beal-Alvarez, Jennifer S.; Scheetz, Nanci A.
In deaf education, the sign language skills of teacher and interpreter candidates are infrequently assessed; when they are, formal measures are commonly used upon preparation program completion, as opposed to informal measures related to instructional tasks. Using an informal picture storybook task, the authors investigated the receptive and…
Kimmelman, V.; Paperno, D.; Keenan, E.L.
After presenting some basic genetic, historical and typological information about Russian Sign Language, this chapter outlines the quantification patterns it expresses. It illustrates various semantic types of quantifiers, such as generalized existential, generalized universal, proportional,
This handbook provides information on some 38 sign languages, including basic facts about each of the languages, structural aspects, history and culture of the Deaf communities, and history of research. The papers are all original, and each has been specifically written for the volume by an expert...
Haug, Tobias; Mann, Wolfgang
Given the current lack of appropriate assessment tools for measuring deaf children's sign language skills, many test developers have used existing tests of other sign languages as templates to measure the sign language used by deaf people in their country. This article discusses factors that may influence the adaptation of assessment tests from one natural sign language to another. Two tests which have been adapted for several other sign languages are focused upon: the Test for American Sign Language and the British Sign Language Receptive Skills Test. A brief description is given of each test as well as insights from ongoing adaptations of these tests for other sign languages. The problems reported in these adaptations were found to be grounded in linguistic and cultural differences, which need to be considered for future test adaptations. Other reported shortcomings of test adaptation are related to the question of how well psychometric measures transfer from one instrument to another.
Benedicto, E.; Cvejanov, S.; Quer, J.; Quer, J.F.
This paper provides a comparative analysis of the structural properties of serial verb constructions (SVC) in three sign languages: LSA (Lengua de Señas Argentina, Argentinean Sign Language), LSC (Llengua de Signes Catalana, Catalan Sign Language) and ASL (American Sign Language). The paper presents
Kristoffersen, Jette Hedegaard; Troelsgård, Thomas
As we began working on the Danish Sign Language (DTS) Dictionary, we soon realised the truth in the statement that a lexicographer has to deal with problems within almost any linguistic discipline. Most of these problems come down to establishing simple rules, rules that can easily be applied every...... – or are they homonyms?" and so on. Very often such questions demand further research and can't be answered sufficiently through a simple standard formula. Therefore lexicographic work often seems like an endless series of compromises. Another source of compromise arises when you set out to decide which information...... this dilemma, as we see DTS learners and teachers as well as native DTS signers as our target users. In the following we will focus on four problem areas with particular relevance for the sign language lexicographer: Sign representation Spoken language equivalents and mouth movements Example sentences Partial...
This article explores the morphological process of numeral incorporation in Japanese Sign Language. Numeral incorporation is defined and the available research on numeral incorporation in signed language is discussed. The numeral signs in Japanese Sign Language are then introduced and followed by an explanation of the numeral morphemes which are…
De Meulder, Maartje
This article provides an analytical overview of the different types of explicit legal recognition of sign languages. Five categories are distinguished: constitutional recognition, recognition by means of general language legislation, recognition by means of a sign language law or act, recognition by means of a sign language law or act including…
Kimmelman, V.; Vink, L.
Several sign languages of the world utilize a construction that consists of a question followed by an answer, both of which are produced by the same signer. For American Sign Language, this construction has been analyzed as a discourse-level rhetorical question construction (Hoza et al. 1997), as a
This article discusses Estonian personal name signs. According to study there are four personal name sign categories in Estonian Sign Language: (1) arbitrary name signs; (2) descriptive name signs; (3) initialized-descriptive name signs; (4) loan/borrowed name signs. Mostly there are represented descriptive and borrowed personal name signs among…
Schmaling, Constanze H.
This article gives an overview of dictionaries of African sign languages that have been published to date most of which have not been widely distributed. After an introduction into the field of sign language lexicography and a discussion of some of the obstacles that authors of sign language dictionaries face in general, I will show problems…
Kaneko, Michiko; Mesch, Johanna
This article discusses the role of eye gaze in creative sign language. Because eye gaze conveys various types of linguistic and poetic information, it is an intrinsic part of sign language linguistics in general and of creative signing in particular. We discuss various functions of eye gaze in poetic signing and propose a classification of gaze…
Italian Sign Language (LIS) is the name of the language used by the Italian Deaf community. The acronym LIS derives from Lingua italiana dei segni ("Italian language of signs"), although nowadays Italians refers to LIS as Lingua dei segni italiana, reflecting the more appropriate phrasing "Italian sign language." Historically,…
Pendergrass, Kathy M; Nemeth, Lynne; Newman, Susan D; Jenkins, Carolyn M; Jones, Elaine G
Nurse practitioners (NPs), as well as all healthcare clinicians, have a legal and ethical responsibility to provide health care for deaf American Sign Language (ASL) users equal to that of other patients, including effective communication, autonomy, and confidentiality. However, very little is known about the feasibility of providing equitable health care. The purpose of this study was to examine NP perceptions of barriers and facilitators in providing health care for deaf ASL users. Semistructured interviews were conducted in a qualitative design using a socio-ecological model (SEM). Barriers were identified at all levels of the SEM. NPs preferred interpreters to facilitate the visit, but were unaware of their role in assuring that effective communication is achieved. A professional sign language interpreter was considered a last resort when all other means of communication failed. Gesturing, note-writing, lip-reading, and use of a familial interpreter were all considered facilitators. Interventions are needed at all levels of the SEM. Practicing and student NPs need resources that raise awareness of deaf communication issues and of the legal requirements for caring for deaf signers. Protocols need to be developed and put in place in all healthcare facilities for hiring interpreters, along with quick access to contact information for these interpreters. ©2017 American Association of Nurse Practitioners.
Ten Holt, G.A.
Automatic sign language recognition is a relatively new field of research (since ca. 1990). Its objectives are to automatically analyze sign language utterances. There are several issues within the research area that merit investigation: how to capture the utterances (cameras, magnetic sensors,
In this thesis, the native sign language used by deaf Inuit people is described. Inuit Sign Language (IUR) is used by less than 40 people as their sole means of communication, and is therefore highly endangered. Apart from the description of IUR as such, an additional goal is to contribute to the
Productivity—the hallmark of linguistic competence—is typically attributed to algebraic rules that support broad generalizations. Past research on spoken language has documented such generalizations in both adults and infants. But whether algebraic rules form part of the linguistic competence of signers remains unknown. To address this question, here we gauge the generalization afforded by American Sign Language (ASL). As a case study, we examine reduplication (X→XX), a rule that, inter alia, generates ASL nouns from verbs. If signers encode this rule, then they should freely extend it to novel syllables, including ones with features that are unattested in ASL. And since reduplicated disyllables are preferred in ASL, such a rule should favor novel reduplicated signs. Novel reduplicated signs should thus be preferred to nonreduplicative controls (in rating), and consequently, such stimuli should also be harder to classify as nonsigns (in the lexical decision task). The results of four experiments support this prediction. These findings suggest that the phonological knowledge of signers includes powerful algebraic rules. The convergence between these conclusions and previous evidence for phonological rules in spoken language suggests that the architecture of the phonological mind is partly amodal.
This dissertation explores Information Structure in two sign languages: Sign Language of the Netherlands and Russian Sign Language. Based on corpus data and elicitation tasks we show how topic and focus are expressed in these languages. In particular, we show that topics can be marked syntactically
Geers, Ann E; Mitchell, Christine M; Warner-Czyz, Andrea; Wang, Nae-Yuh; Eisenberg, Laurie S
Most children with hearing loss who receive cochlear implants (CI) learn spoken language, and parents must choose early on whether to use sign language to accompany speech at home. We address whether parents' use of sign language before and after CI positively influences auditory-only speech recognition, speech intelligibility, spoken language, and reading outcomes. Three groups of children with CIs from a nationwide database who differed in the duration of early sign language exposure provided in their homes were compared in their progress through elementary grades. The groups did not differ in demographic, auditory, or linguistic characteristics before implantation. Children without early sign language exposure achieved better speech recognition skills over the first 3 years postimplant and exhibited a statistically significant advantage in spoken language and reading near the end of elementary grades over children exposed to sign language. Over 70% of children without sign language exposure achieved age-appropriate spoken language compared with only 39% of those exposed for 3 or more years. Early speech perception predicted speech intelligibility in middle elementary grades. Children without sign language exposure produced speech that was more intelligible (mean = 70%) than those exposed to sign language (mean = 51%). This study provides the most compelling support yet available in CI literature for the benefits of spoken language input for promoting verbal development in children implanted by 3 years of age. Contrary to earlier published assertions, there was no advantage to parents' use of sign language either before or after CI. Copyright © 2017 by the American Academy of Pediatrics.
Kristoffersen, Jette Hedegaard; Troelsgård, Thomas
Compiling sign language dictionaries has in the last 15 years changed from most often being simply collecting and presenting signs for a given gloss in the surrounding vocal language to being a complicated lexicographic task including all parts of linguistic analysis, i.e. phonology, phonetics, morphology, syntax and semantics. In this presentation we will give a short overview of the Danish Sign Language dictionary project. We will further focus on lemma selection and some of the problems connected with lemmatisation.
Shield, Aaron; Meier, Richard P.; Tager-Flusberg, Helen
We report the first study on pronoun use by an under-studied research population, children with autism spectrum disorder (ASD) exposed to American Sign Language from birth by their deaf parents. Personal pronouns cause difficulties for hearing children with ASD, who sometimes reverse or avoid them. Unlike speech pronouns, sign pronouns are…
Wang, Jihong; Napier, Jemina
This study investigated the effects of hearing status and age of signed language acquisition on signed language working memory capacity. Professional Auslan (Australian sign language)/English interpreters (hearing native signers and hearing nonnative signers) and deaf Auslan signers (deaf native signers and deaf nonnative signers) completed an…
In early May, CERN welcomed a group of deaf children for a tour of Microcosm and a Fun with Physics demonstration. On 4 May, around ten children from the Centre pour enfants sourds de Montbrillant (Montbrillant Centre for Deaf Children), a public school funded by the Office médico-pédagogique du canton de Genève, took a guided tour of the Microcosm exhibition and were treated to a Fun with Physics demonstration. The tour guides’ explanations were interpreted into sign language in real time by a professional interpreter who accompanied the children, and the pace and content were adapted to maximise the interaction with the children. This visit demonstrates CERN’s commitment to remaining as widely accessible as possible. To this end, most of CERN’s visit sites offer reduced-mobility access. In the past few months, CERN has also welcomed children suffering from xeroderma pigmentosum (a genetic disorder causing extreme sensiti...
Sze, Felix; Lo, Connie; Lo, Lisa; Chu, Kenny
This article traces the origins of Hong Kong Sign Language (hereafter HKSL) and its subsequent development in relation to the establishment of Deaf education in Hong Kong after World War II. We begin with a detailed description of the history of Deaf education with a particular focus on the role of sign language in such development. We then…
Harris, Raychelle; Holmes, Heidi M.; Mertens, Donna M.
Codes of ethics exist for most professional associations whose members do research on, for, or with sign language communities. However, these ethical codes are silent regarding the need to frame research ethics from a cultural standpoint, an issue of particular salience for sign language communities. Scholars who write from the perspective of…
Palmer, Christina G S; Boudreault, Patrick; Berman, Barbara A; Wolfson, Alicia; Duarte, Lionel; Venne, Vickie L; Sinsheimer, Janet S
Deaf American Sign Language (ASL) users have limited access to cancer genetics information they can readily understand, increasing their risk for health disparities. We compared the effectiveness of online cancer genetics information presented using a bilingual approach (ASL with English closed captioning) and a monolingual approach (English text). We hypothesized that the bilingual modality would increase cancer genetics knowledge and confidence to create a family tree, and that education would interact with modality. We used a parallel 2:1 randomized pre-post study design stratified on education. 150 Deaf ASL users ≥18 years old with computer and internet access participated online; 100 (70 high, 30 low education) and 50 (35 high, 15 low education) were randomized to the bilingual and monolingual modalities, respectively. The modalities provide virtually identical content on creating a family tree, using the family tree to identify inherited cancer risk factors, understanding how cancer predisposition can be inherited, and the role of genetic counseling and testing for prevention or treatment. 25 true/false items assessed knowledge; a Likert scale item assessed confidence. Data were collected within 2 weeks before and after viewing the information. A significant interaction of language modality, education, and change in knowledge scores was observed (p = .01). The high education group increased knowledge regardless of modality (Bilingual: p …) … information than a monolingual approach. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Gutierrez-Sigut, Eva; Costello, Brendan; Baus, Cristina; Carreiras, Manuel
The LSE-Sign database is a free online tool for selecting Spanish Sign Language stimulus materials to be used in experiments. It contains 2,400 individual signs taken from a recent standardized LSE dictionary, and a further 2,700 related nonsigns. Each entry is coded for a wide range of grammatical, phonological, and articulatory information, including handshape, location, movement, and non-manual elements. The database is accessible via a graphically based search facility which is highly flexible both in terms of the search options available and the way the results are displayed. LSE-Sign is available at the following website: http://www.bcbl.eu/databases/lse/.
Taylor, Blaine J.
Many advocates of the deaf fear that a whole generation of deaf children will be lost emotionally, socially, and educationally. This fear stems from the fact that many children who are deaf are not having their linguistic, sociocultural, and communicative needs met at home or at school (King, 1993). Their needs are not met primarily for three reasons. First, the hearing culture is often inaccessible to them because they do not understand most of the spoken language around them. When children ...
Goldin-Meadow, Susan; Brentari, Diane
How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.
There is a current need for reliable and valid test instruments in different countries in order to monitor deaf children's sign language acquisition. However, very few tests are commercially available that offer strong evidence for their psychometric properties. A German Sign Language (DGS) test focusing on linguistic structures that are acquired…
Kushalnagar, Poorna; Harris, Raychelle; Paludneviciene, Raylene; Hoglind, TraciAnn
The Health Information National Trends Survey (HINTS) collects nationally representative data about the American public's use of health-related information. This survey is available in English and Spanish, but not in American Sign Language (ASL). Thus, the exclusion of ASL users from these national health information survey studies has led to a significant gap in knowledge of Internet usage for health information access in this underserved and understudied population. The objectives of this study are (1) to culturally adapt and linguistically translate the HINTS items to ASL (HINTS-ASL); and (2) to gather information about deaf people's health information seeking behaviors across technology-mediated platforms. We modified the standard procedures developed at the US National Center for Health Statistics Cognitive Survey Laboratory to culturally adapt and translate HINTS items to ASL. Cognitive interviews were conducted to assess clarity and delivery of these HINTS-ASL items. Final ASL video items were uploaded to a protected online survey website. The HINTS-ASL online survey has been administered to over 1350 deaf adults (ages 18 to 90 and up) who use ASL. Data collection is ongoing and includes deaf adult signers across the United States. Some items from the HINTS item bank required cultural adaptation for use with deaf people who use accessible services or technology. A separate item bank for deaf-related experiences was created, reflecting deaf-specific technology such as sharing health-related ASL videos through social network sites and using video remote interpreting services in health settings. After data collection is complete, we will conduct a series of analyses on deaf people's health information seeking behaviors across technology-mediated platforms. HINTS-ASL is an accessible health information national trends survey, which includes a culturally appropriate set of items that are relevant to the experiences of deaf people who use ASL. The final HINTS
Hernández, Cesar; Pulido, Jose L; Arias, Jorge E
To develop a technological tool that improves the initial learning of sign language in hearing-impaired children. The research was conducted in three phases: requirements gathering, design and development of the proposed device, and validation and evaluation of the device. Through the use of information technology and with the advice of special education professionals, we developed an electronic device that facilitates the learning of sign language in deaf children. It consists mainly of a graphic touch screen, a voice synthesizer, and a voice recognition system. Validation was performed with deaf children at the Filadelfia School in the city of Bogotá. A learning methodology was established that improves learning times through a small, portable, lightweight, and educational technological prototype. Tests showed the effectiveness of this prototype, achieving a 32% reduction in the initial learning time for sign language in deaf children.
Brentari, Diane; Coppola, Marie
How do languages emerge? What are the necessary ingredients and circumstances that permit new languages to form? Various researchers within the disciplines of primatology, anthropology, psychology, and linguistics have offered different answers to this question depending on their perspective. Language acquisition, language evolution, primate communication, and the study of spoken varieties of pidgin and creoles address these issues, but in this article we describe a relatively new and important area that contributes to our understanding of language creation and emergence. Three types of communication systems that use the hands and body to communicate will be the focus of this article: gesture, homesign systems, and sign languages. The focus of this article is to explain why mapping the path from gesture to homesign to sign language has become an important research topic for understanding language emergence, not only for the field of sign languages, but also for language in general. WIREs Cogn Sci 2013, 4:201-211. doi: 10.1002/wcs.1212 For further resources related to this article, please visit the WIREs website. Copyright © 2012 John Wiley & Sons, Ltd.
This podcast is based on the May 2017 CDC Vital Signs report. The life expectancy of African Americans has improved, but it's still an average of four years less than whites. Learn what can be done so all Americans can have the opportunity to pursue a healthy lifestyle.
Behares, Luis Ernesto; Brovetto, Claudia; Crespi, Leonardo Peluso
In the first part of this article the authors consider the policies that apply to Uruguayan Sign Language (Lengua de Senas Uruguaya; hereafter LSU) and the Uruguayan Deaf community within the general framework of language policies in Uruguay. By analyzing them succinctly and as a whole, the authors then explain twenty-first-century innovations.…
This article gives a first overview of the sign language situation in Mali and its capital, Bamako, located in the West African Sahel. Mali is a highly multilingual country with a significant incidence of deafness, for which meningitis appears to be the main cause, coupled with limited access to adequate health care. In comparison to neighboring…
In this paper the results of an investigation of word order in Russian Sign Language (RSL) are presented. A small corpus of narratives based on comic strips by nine native signers was analyzed and a picture-description experiment (based on Volterra et al. 1984) was conducted with six native signers. The results are the following: the most frequent…
El Ghoul, Oussama; Jemni, Mohamed
Screen reader technology first appeared to allow blind people, and people with reading difficulties, to use computers and access digital information. Until now, this technology has been exploited mainly to help the blind community. During our work with deaf people, we noticed that a screen reader can facilitate the manipulation of computers and the reading of textual information. In this paper, we propose a novel screen reader dedicated to deaf users. The output of the reader is a visual translation of the text into sign language. The screen reader is composed of two essential modules: the first is designed to capture the activities of users (mouse and keyboard events); for this purpose, we adopted the Microsoft MSAA application programming interfaces. The second module, which in classical screen readers is a text-to-speech (TTS) engine, is replaced by a novel text-to-sign (TTSign) engine. This module converts text into sign language animation based on avatar technology.
Kenyan Sign Language (KSL) is a visual gestural language used by members of the deaf community in Kenya. Kiswahili, on the other hand, is a Bantu language that is used as the national language of Kenya. The two are worlds apart, one being a spoken language and the other a signed language, and thus their “… basic ...
Orfanidou, E.; McQueen, J.; Adam, R.; Morgan, G.
This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were prec...
Is the right to sign language only the right to a minority language? Adopting a capability (not a disability) approach, and building on the psycholinguistic literature on sign language acquisition, I make the point that this right is of a stronger nature, since only sign languages can guarantee that each deaf child will properly develop the…
Reeves, J B; Newell, W; Holcomb, B R; Stinson, M
In collaboration with teachers and students at the National Technical Institute for the Deaf (NTID), the Sign Language Skills Classroom Observation (SLSCO) was designed to provide feedback to teachers on their sign language communication skills in the classroom. In the present article, the impetus and rationale for development of the SLSCO is discussed. Previous studies related to classroom signing and observation methodology are reviewed. The procedure for developing the SLSCO is then described. This procedure included (a) interviews with faculty and students at NTID, (b) identification of linguistic features of sign language important for conveying content to deaf students, (c) development of forms for recording observations of classroom signing, (d) analysis of use of the forms, (e) development of a protocol for conducting the SLSCO, and (f) piloting of the SLSCO in classrooms. The results of use of the SLSCO with NTID faculty during a trial year are summarized.
Nonaka, Angela M
Communication obstacles in health care settings adversely impact patient-practitioner interactions by impeding service efficiency, reducing mutual trust and satisfaction, or even endangering health outcomes. When interlocutors are separated by language, interpreters are required. The efficacy of interpreting, however, is constrained not just by interpreters' competence but also by health care providers' facility working with interpreters. Deaf individuals whose preferred form of communication is a signed language often encounter communicative barriers in health care settings. In those environments, signing Deaf people are entitled to equal communicative access via sign language interpreting services according to the Americans with Disabilities Act and Executive Order 13166, the Limited English Proficiency Initiative. Yet, litigation in states across the United States suggests that individual and institutional providers remain uncertain about their legal obligations to provide equal communicative access. This article discusses the legal and ethical imperatives for using professionally certified (vs. ad hoc) sign language interpreters in health care settings. First outlining the legal terrain governing provision of sign language interpreting services, the article then describes different types of "sign language" (e.g., American Sign Language vs. manually coded English) and different forms of "sign language interpreting" (e.g., interpretation vs. transliteration vs. translation; simultaneous vs. consecutive interpreting; individual vs. team interpreting). This is followed by reviews of the formal credentialing process and of specialized forms of sign language interpreting: certified deaf interpreting, trilingual interpreting, and court interpreting. After discussing practical steps for contracting professional sign language interpreters and addressing ethical issues of confidentiality, this article concludes by offering suggestions for working more effectively
…of work made in SASL. There is currently no collection of the cultural and linguistic heritage of SASL. Public signage and localisation: provision for SASL-specific sign names of places, people, companies and brands, as well as the localisation… By upgrading the aging data and voice infrastructures for visual-grade technologies, new usages of technologies will emerge in public signage and communications, in advertising, and for visual languages such as SASL. Research and development in Sign Language…
Sign language plays a great role as a communication medium for people with hearing difficulties. In developed countries, systems have been made for overcoming problems in communication with deaf people. This encouraged us to develop such a system for Bosnian Sign Language, since there is a need for one. The work uses digital image processing methods to provide a system that trains a multilayer neural network with the backpropagation algorithm. Images are processed by feature extraction methods, and the data set was created with a masking method. Training was done using cross-validation for better performance; an accuracy of 84% was achieved.
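As a rough illustration of the training stage this abstract describes, here is a minimal multilayer network trained with backpropagation. The XOR toy data stands in for real extracted image features, and the network size, learning rate, and epoch count are arbitrary illustrative choices, not the paper's.

```python
import math
import random

def train_mlp(data, hidden=4, lr=0.8, epochs=4000, seed=42):
    """Train a one-hidden-layer sigmoid network with online backpropagation.

    `data` is a list of (feature_vector, target) pairs; in the paper's
    pipeline the feature vectors would come from image feature extraction.
    Returns a predict function and the per-epoch mean squared error.
    """
    rng = random.Random(seed)
    n_in = len(data[0][0])
    # Weight matrices, each row carrying a trailing bias weight.
    w_h = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(hidden)]
    w_o = [rng.uniform(-1, 1) for _ in range(hidden + 1)]

    def sig(z):
        return 1.0 / (1.0 + math.exp(-z))

    def forward(x):
        h = [sig(sum(w * v for w, v in zip(row, x + [1.0]))) for row in w_h]
        o = sig(sum(w * v for w, v in zip(w_o, h + [1.0])))
        return h, o

    losses = []
    for _ in range(epochs):
        sq_err = 0.0
        for x, t in data:
            h, o = forward(x)
            sq_err += (o - t) ** 2
            d_o = (o - t) * o * (1.0 - o)                      # output delta
            d_h = [d_o * w_o[j] * h[j] * (1.0 - h[j])          # hidden deltas
                   for j in range(hidden)]
            for j in range(hidden):                            # hidden->output
                w_o[j] -= lr * d_o * h[j]
            w_o[hidden] -= lr * d_o                            # output bias
            for j in range(hidden):                            # input->hidden
                for i in range(n_in):
                    w_h[j][i] -= lr * d_h[j] * x[i]
                w_h[j][n_in] -= lr * d_h[j]                    # hidden bias
        losses.append(sq_err / len(data))
    return (lambda x: forward(x)[1]), losses

# Toy stand-in for extracted sign-image features.
xor = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
       ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
predict, losses = train_mlp(xor)
```

In the actual system the cross-validation step would wrap calls like `train_mlp` over different train/test splits of the masked feature data and average the resulting accuracies.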
…the fact that the target structure is SASL, the home language of the Deaf user, already facilitates the communication. Ultimately the message will be delivered more naturally by a signing avatar. We shall present further scenarios for future work. Disambiguation can be improved on two levels: firstly, by eliciting more or better information from the user through the AAC interface and, secondly, by improving certain aspects of the MT system. We discuss both…
Kocab, Annemarie; Senghas, Ann; Snedeker, Jesse
Understanding what uniquely human properties account for the creation and transmission of language has been a central goal of cognitive science. Recently, the study of emerging sign languages, such as Nicaraguan Sign Language (NSL), has offered the opportunity to better understand how languages are created and the roles of the individual learner and the community of users. Here, we examined the emergence of two types of temporal language in NSL, comparing the linguistic devices for conveying temporal information among three sequential age cohorts of signers. Experiment 1 showed that while all three cohorts of signers could communicate about linearly ordered discrete events, only the second and third generations of signers successfully communicated information about events with more complex temporal structure. Experiment 2 showed that signers could discriminate between the types of temporal events in a nonverbal task. Finally, Experiment 3 investigated the ordinal use of numbers (e.g., first, second) in NSL signers, indicating that one strategy younger signers might have for accurately describing events in time might be to use ordinal numbers to mark each event. While the capacity for representing temporal concepts appears to be present in the human mind from the onset of language creation, the linguistic devices to convey temporality do not appear immediately. Evidently, temporal language emerges over generations of language transmission, as a product of individual minds interacting within a community of users. Copyright © 2016 Elsevier B.V. All rights reserved.
Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D
The aim of the present study was to characterize the effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 ASL learners performed this task in the fMRI scanner. Results indicated that L2 American Sign Language (ASL) learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results also indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. Copyright © 2015 Elsevier B.V. All rights reserved.
Barnes, Susan Kubic
Teaching sign language--to deaf or other children with special needs or to hearing children with hard-of-hearing family members--is not new. Teaching sign language to typically developing children has become increasingly popular since the publication of "Baby Signs"® (Goodwyn & Acredolo, 1996), now in its third edition. Attention to signing with…
Caselli, Naomi K; Pyers, Jennie E
Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children's acquisition of new words, spoken or signed. We asked whether iconicity's prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children's productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure between signed and spoken language, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.
Shield, Aaron; Cooley, Frances; Meier, Richard P.
Purpose: We present the first study of echolalia in deaf, signing children with autism spectrum disorder (ASD). We investigate the nature and prevalence of sign echolalia in native-signing children with ASD, the relationship between sign echolalia and receptive language, and potential modality differences between sign and speech. Method: Seventeen…
In signed and spoken language sentences, imperative mood and the corresponding speech acts such as, for instance, command, permission or advice, can be distinguished by morphosyntactic structures, but also solely by prosodic cues, which are the focus of this paper. These cues can express paralinguistic mental states or grammatical meaning, and we show that in American Sign Language (ASL), they also exhibit the function, scope, and alignment of prosodic, linguistic elements of sign languages. The production and comprehension of prosodic facial expressions and temporal patterns therefore can shed light on how cues are grammaticalized in sign languages. They can also be informative about the formal semantic and pragmatic properties of imperative types not only in ASL, but also more broadly. This paper includes three studies: one of production (Study 1) and two of comprehension (Studies 2 and 3). In Study 1, six prosodic cues are analyzed in production: temporal cues of sign and hold duration, and non-manual cues including tilts of the head, head nods, widening of the eyes, and presence of mouthings. Results of Study 1 show that neutral sentences and commands are well distinguished from each other and from other imperative speech acts via these prosodic cues alone; there is more limited differentiation among explanation, permission, and advice. The comprehension of these five speech acts is investigated in Deaf ASL signers in Study 2, and in three additional groups in Study 3: Deaf signers of German Sign Language (DGS), hearing non-signers from the United States, and hearing non-signers from Germany. Results of Studies 2 and 3 show that the ASL group performs significantly better than the other 3 groups and that all groups perform above chance for all meaning types in comprehension. Language-specific knowledge, therefore, has a significant effect on identifying imperatives based on targeted cues. Command has the most cues associated with it and is the
Komakula, Sirisha. T.; Burr, Robert. B.; Lee, James N.; Anderson, Jeffrey
We present a case of right hemispheric dominance for sign language but left hemispheric dominance for reading, in a left-handed deaf patient with epilepsy and left mesial temporal sclerosis. Atypical language laterality for ASL was determined by preoperative fMRI, and congruent with ASL modified WADA testing. We conclude that reading and sign language can have crossed dominance and preoperative fMRI evaluation of deaf patients should include both reading and sign language evaluations.
Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung
Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.
Stamp, Rose; Schembri, Adam; Fenlon, Jordan; Rentelis, Ramas
This article presents findings from the first major study to investigate lexical variation and change in British Sign Language (BSL) number signs. As part of the BSL Corpus Project, number sign variants were elicited from 249 deaf signers from eight sites throughout the UK. Age, school location, and language background were found to be significant…
Ethiopian Sign Language utilizes a fingerspelling system that represents Amharic orthography. Just as each character of the Amharic abugida encodes a consonant-vowel sound pair, each sign in the Ethiopian Sign Language fingerspelling system uses handshape to encode a base consonant, as well as a combination of timing, placement, and orientation to…
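The compositional encoding this abstract describes (a handshape for the base consonant, with timing, placement, and orientation selecting a vowel order) can be modelled as a small data structure. The handshape names and consonant mapping below are invented placeholders, not the actual Ethiopian Sign Language inventory; only the seven-order vowel scheme follows the Amharic abugida.

```python
from dataclasses import dataclass

# The seven vowel orders of the Amharic abugida, in a common transcription.
VOWEL_ORDERS = ["ä", "u", "i", "a", "e", "ə", "o"]

@dataclass(frozen=True)
class FingerspelledSign:
    handshape: str   # encodes the base consonant
    order: int       # 1..7, realised by timing, placement, and orientation

# Hypothetical handshape inventory, for illustration only.
HANDSHAPE_TO_CONSONANT = {"B-hand": "b", "L-hand": "l", "S-hand": "s"}

def decode(sign: FingerspelledSign) -> str:
    """Compose the consonant handshape with the vowel-order modifier."""
    consonant = HANDSHAPE_TO_CONSONANT[sign.handshape]
    return consonant + VOWEL_ORDERS[sign.order - 1]

print(decode(FingerspelledSign("B-hand", 4)))   # fourth order -> "ba"
```

Treating each fingerspelled sign as a (consonant, order) pair mirrors how each Amharic character encodes a consonant-vowel syllable rather than a single letter.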
Stamp, Rose; Schembri, Adam; Evans, Bronwen G.; Cormier, Kearsy
Short-term linguistic accommodation has been observed in a number of spoken language studies. The first of its kind in sign language research, this study aims to investigate the effects of regional varieties in contact and lexical accommodation in British Sign Language (BSL). Twenty-five participants were recruited from Belfast, Glasgow,…
de Quadros, Ronice Muller
This article explains the consolidation of Brazilian Sign Language in Brazil through a linguistic plan that arose from the Brazilian Sign Language Federal Law 10.436 of April 2002 and the subsequent Federal Decree 5695 of December 2005. Two concrete facts that emerged from this existing language plan are discussed: the implementation of bilingual…
This article examines several legal cases in Canada, the USA, and Australia involving signed language in education for Deaf students. In all three contexts, signed language rights for Deaf students have been viewed from within a disability legislation framework that either does not extend to recognizing language rights in education or that…
In this paper we describe topic marking in Russian Sign Language (RSL) and Sign Language of the Netherlands (NGT) and discuss whether these languages should be considered topic prominent. The formal markers of topics in RSL are sentence-initial position, a prosodic break following the topic, and
Marshall, Chloë; Mason, Kathryn; Rowley, Katherine; Herman, Rosalind; Atkinson, Joanna; Woll, Bencie; Morgan, Gary
Children with specific language impairment (SLI) perform poorly on sentence repetition tasks across different spoken languages, but until now, this methodology has not been investigated in children who have SLI in a signed language. Users of a natural sign language encode different sentence meanings through their choice of signs and by altering…
Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.
To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.
Russian Sign Language (RSL) makes use of constructions involving manual simultaneity, in particular, weak hand holds, where one hand is being held in the location and configuration of a sign, while the other simultaneously produces one sign or a sequence of several signs. In this paper, I argue that
No formal Canadian curriculum presently exists for teaching American Sign Language (ASL) as a second language to parents of deaf and hard of hearing children. However, this group of ASL learners is in need of more comprehensive, research-based support, given the rapid expansion in Canada of universal neonatal hearing screening and the…
De Meulder, Maartje
Through the British Sign Language (Scotland) Act, British Sign Language (BSL) was given legal status in Scotland. The main motives for the Act were a desire to put BSL on a similar footing with Gaelic and the fact that in Scotland, BSL signers are the only group whose first language is not English who must rely on disability discrimination…
Since time immemorial, philosophers and scientists have searched for a “machine code” of the so-called Mentalese language capable of processing information at the pre-verbal, pre-expressive level. In this paper I suggest that human languages are only secondary to the system of primitive extra-linguistic signs which are hardwired in humans and serve as tools for understanding selves and others, and for creating meanings for the multiplicity of experiences. The combinatorial semantics of the Mentalese may find its unorthodox expression in the semiotic system of Tarot images, the latter serving as the “keys” to the encoded proto-mental information. The paper uses some works in systems theory by Erich Jantsch and Erwin Laszlo and relates Tarot images to the archetypes of the field of collective unconscious posited by Carl Jung. Our subconscious beliefs, hopes, fears and desires, of which we may be unaware at the subjective level, do have an objective compositional structure that may be laid down in front of our eyes in the format of pictorial semiotics representing the universe of affects, thoughts, and actions. Constructing imaginative narratives based on the expressive “language” of Tarot images enables us to anticipate possible consequences and consider a range of future options. The thesis advanced in this paper is also supported by the concept of informational universe of contemporary cosmology.
Cempre, Luka; Bešir, Aleksander; Solina, Franc
The article describes technical and user-interface issues of transferring the contents and functionality of the CD-ROM version of the Slovenian sign language dictionary to the web. The dictionary of Slovenian sign language consists of video clips showing the demonstration of signs that deaf people use for communication, text descriptions of the words corresponding to the signs, and pictures illustrating the same word/sign. A new technical solution—a video sprite—for concatenating subsections o...
Johnson, William L
Sign language interpreters are at increased risk for musculoskeletal disorders. This study used content analysis to obtain detailed information about these disorders from the interpreters' point of view...
McKee, Rachel Locker; Manning, Victoria
Status planning through legislation made New Zealand Sign Language (NZSL) an official language in 2006. But this strong symbolic action did not create resources or mechanisms to further the aims of the act. In this article we discuss the extent to which legal recognition and ensuing language-planning activities by state and community have affected…
Johnston, Trevor; van Roekel, Jane; Schembri, Adam
This study investigates the conventionalization of mouth actions in Australian Sign Language. Signed languages were once thought of as simply manual languages because the hands produce the signs which individually and in groups are the symbolic units most easily equated with the words, phrases and clauses of spoken languages. However, it has long been acknowledged that non-manual activity, such as movements of the body, head and the face play a very important role. In this context, mouth actions that occur while communicating in signed languages have posed a number of questions for linguists: are the silent mouthings of spoken language words simply borrowings from the respective majority community spoken language(s)? Are those mouth actions that are not silent mouthings of spoken words conventionalized linguistic units proper to each signed language, culturally linked semi-conventional gestural units shared by signers with members of the majority speaking community, or even gestures and expressions common to all humans? We use a corpus-based approach to gather evidence of the extent of the use of mouth actions in naturalistic Australian Sign Language-making comparisons with other signed languages where data is available--and the form/meaning pairings that these mouth actions instantiate.
Ortega, Gerardo; Morgan, Gary
There is growing interest in learners' cognitive capacities to process a second language (L2) at first exposure to the target language. Evidence suggests that L2 learners are capable of processing novel words by exploiting phonological information from their first language (L1). Hearing adult learners of a sign language, however, cannot fall back…
The aim of this article is to describe a negative prefix, NEG-, in Polish Sign Language (PJM) which appears to be indigenous to the language. This is of interest given the relative rarity of prefixes in sign languages. Prefixed PJM signs were analyzed on the basis of both a corpus of texts signed by 15 deaf PJM users who are either native or near-native signers, and material including a specified range of prefixed signs as demonstrated by native signers in dictionary form (i.e. signs produced in isolation, not as part of phrases or sentences). In order to define the morphological rules behind prefixation on both the phonological and morphological levels, native PJM users were consulted for their expertise. The research results can enrich models for describing processes of grammaticalization in the context of the visual-gestural modality that forms the basis for sign language structure.
Thompson, Robin L; Vinson, David P; Woll, Bencie; Vigliocco, Gabriella
An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.
Moreno, Antonio; Limousin, Fanny; Dehaene, Stanislas; Pallier, Christophe
During sentence processing, areas of the left superior temporal sulcus, inferior frontal gyrus and left basal ganglia exhibit a systematic increase in brain activity as a function of constituent size, suggesting their involvement in the computation of syntactic and semantic structures. Here, we asked whether these areas play a universal role in language and therefore contribute to the processing of non-spoken sign language. Congenitally deaf adults who acquired French sign language as a first language and written French as a second language were scanned while watching sequences of signs in which the size of syntactic constituents was manipulated. An effect of constituent size was found in the basal ganglia, including the head of the caudate and the putamen. A smaller effect was also detected in temporal and frontal regions previously shown to be sensitive to constituent size in written language in hearing French subjects (Pallier et al., 2011). When the deaf participants read sentences versus word lists, the same network of language areas was observed. While reading and sign language processing yielded identical effects of linguistic structure in the basal ganglia, the effect of structure was stronger in all cortical language areas for written language relative to sign language. Furthermore, cortical activity was partially modulated by age of acquisition and reading proficiency. Our results stress the important role of the basal ganglia, within the language network, in the representation of the constituent structure of language, regardless of the input modality. Copyright © 2017 Elsevier Inc. All rights reserved.
Parton, Becky Sue
In recent years, research has progressed steadily in regard to the use of computers to recognize and render sign language. This paper reviews significant projects in the field beginning with finger-spelling hands such as "Ralph" (robotics), CyberGloves (virtual reality sensors to capture isolated and continuous signs), camera-based projects such as the CopyCat interactive American Sign Language game (computer vision), and sign recognition software (Hidden Markov Modeling and neural network systems). Avatars such as "Tessa" (Text and Sign Support Assistant; three-dimensional imaging) and spoken language to sign language translation systems such as Poland's project entitled "THETOS" (Text into Sign Language Automatic Translator, which operates in Polish; natural language processing) are addressed. The application of this research to education is also explored. The "ICICLE" (Interactive Computer Identification and Correction of Language Errors) project, for example, uses intelligent computer-aided instruction to build a tutorial system for deaf or hard-of-hearing children that analyzes their English writing and makes tailored lessons and recommendations. Finally, the article considers synthesized sign, which is being added to educational material and has the potential to be developed by students themselves.
This article explores the role of the Deaf child as peer educator. In schools where sign languages were banned, Deaf children became the educators of their Deaf peers in a number of contexts worldwide. This paper analyses how this peer education of sign language worked in context by drawing on two examples from boarding schools for the deaf in…
... Poetry in a sign language can make use of literary devices just as poetry in a ... This poem illustrates well the multi-layered meaning that can be created in sign language poetry through ...
Russo, Tommaso; Giuranna, Rosaria; Pizzuto, Elena
Explores and describes from a crosslinguistic perspective, some of the major structural irregularities that characterize poetry in Italian Sign Language and distinguish poetic from nonpoetic texts. Reviews findings of previous studies of signed language poetry, and points out issues that need to be clarified to provide a more accurate description…
This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures.
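The modular pipeline the abstract describes (per-aspect feature-recognition networks feeding a nearest-neighbour classifier) can be sketched as follows. This is a minimal illustration, not SLARTI's actual implementation: the linear "feature networks", sensor dimensions, and Auslan glosses below are invented stand-ins.

```python
import numpy as np

# Hypothetical stand-ins for per-aspect feature networks (handshape,
# orientation, location, motion). SLARTI trains a neural network per
# aspect; here each is stubbed as a fixed linear projection.
rng = np.random.default_rng(0)
projections = {name: rng.normal(size=(8, 3))
               for name in ("handshape", "orientation", "location", "motion")}

def extract_features(raw):
    """Concatenate the outputs of the per-aspect feature networks."""
    return np.concatenate([raw @ W for W in projections.values()])

def nearest_neighbour(query, prototypes, labels):
    """Final stage: classify by Euclidean distance to stored sign prototypes."""
    dists = [np.linalg.norm(query - p) for p in prototypes]
    return labels[int(np.argmin(dists))]

# Toy "signs": raw 8-dimensional sensor readings for two known gestures.
raw_a, raw_b = rng.normal(size=8), rng.normal(size=8)
prototypes = [extract_features(raw_a), extract_features(raw_b)]
labels = ["GOOD", "THANK-YOU"]  # hypothetical glosses

# A slightly noisy repetition of the first gesture maps back to its label.
noisy = raw_a + 0.01 * rng.normal(size=8)
print(nearest_neighbour(extract_features(noisy), prototypes, labels))  # prints GOOD
```

Splitting recognition across aspect-specific networks keeps each network small and lets the nearest-neighbour stage absorb new signs without retraining, which is the design rationale the abstract's modular architecture suggests.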
Lucas, Ceil; Mirus, Gene; Palmer, Jeffrey Levi; Roessler, Nicholas James; Frost, Adam
This paper first reviews the fairly established ways of collecting sign language data. It then discusses the new technologies available and their impact on sign language research, both in terms of how data is collected and what new kinds of data are emerging as a result of technology. New data collection methods and new kinds of data are…
This systematic review of the literature provides a synthesis of research on the use of technology to support sign language. Background research on the use of sign language with students who are deaf/hard of hearing and students with low incidence disabilities, such as autism, intellectual disability, or communication disorders is provided. The…
Armstrong, David F.
As most readers of this journal are aware, "Sign Language Studies" ("SLS") served for many years as effectively the only serious scholarly outlet for work in the nascent field of sign language linguistics. Now reaching its 40th anniversary, the journal was founded by William C. Stokoe and then edited by him for the first quarter century of its…
De Clerck, Goedele A. M.
This article has been excerpted from "Introduction: Sign Language, Sustainable Development, and Equal Opportunities" (De Clerck) in "Sign Language, Sustainable Development, and Equal Opportunities: Envisioning the Future for Deaf Students" (G. A. M. De Clerck & P. V. Paul (Eds.) 2016). The idea of exploring various…
Vesel, J.; Hurdich, J.
TERC and Vcom3D used the SigningAvatar® accessibility software to research and develop a Signing Earth Science Dictionary (SESD) of approximately 750 standards-based Earth science terms for high school students who are deaf and hard of hearing and whose first language is sign. The partners also evaluated the extent to which use of the SESD furthers understanding of Earth science content, command of the language of Earth science, and the ability to study Earth science independently. Disseminated as a Web-based version and App, the SESD is intended to serve the ~36,000 grade 9-12 students who are deaf or hard of hearing and whose first language is sign, the majority of whom leave high school reading at the fifth grade or below. It is also intended for teachers and interpreters who interact with members of this population and professionals working with Earth science education programs during field trips, internships etc. The signed SESD terms have been incorporated into a Mobile Communication App (MCA). This App for Androids is intended to facilitate communication between English speakers and persons who communicate in American Sign Language (ASL) or Signed English. It can translate words, phrases, or whole sentences from written or spoken English to animated signing. It can also fingerspell proper names and other words for which there are no signs. For our presentation, we will demonstrate the interactive features of the SigningAvatar® accessibility software that support the three principles of Universal Design for Learning (UDL) and have been incorporated into the SESD and MCA. Results from national field-tests will provide insight into the SESD's and MCA's potential applicability beyond grade 12 as accommodations that can be used for accessing the vocabulary deaf and hard of hearing students need for study of the geosciences and for facilitating communication about content. This work was funded in part by grants from NSF and the U.S. Department of Education.
Padden, Carol; Hwang, So-One; Lepic, Ryan; Seegers, Sharon
When naming certain hand-held, man-made tools, American Sign Language (ASL) signers exhibit either of two iconic strategies: a handling strategy, where the hands show holding or grasping an imagined object in action, or an instrument strategy, where the hands represent the shape or a dimension of the object in a typical action. The same strategies are also observed in the gestures of hearing nonsigners identifying pictures of the same set of tools. In this paper, we compare spontaneously created gestures from hearing nonsigning participants to commonly used lexical signs in ASL. Signers and gesturers were asked to respond to pictures of tools and to video vignettes of actions involving the same tools. Nonsigning gesturers overwhelmingly prefer the handling strategy for both the Picture and Video conditions. Nevertheless, they use more instrument forms when identifying tools in pictures, and more handling forms when identifying actions with tools. We found that ASL signers generally favor the instrument strategy when naming tools, but when describing tools being used by an actor, they are significantly more likely to use more handling forms. The finding that both gesturers and signers are more likely to alternate strategies when the stimuli are pictures or video suggests a common cognitive basis for differentiating objects from actions. Furthermore, the presence of a systematic handling/instrument iconic pattern in a sign language demonstrates that a conventionalized sign language exploits the distinction for grammatical purpose, to distinguish nouns and verbs related to tool use. Copyright © 2014 Cognitive Science Society, Inc.
Peng, Fred C. C., Ed.
A collection of research materials on sign language and primatology is presented here. The essays attempt to show that: sign language is a legitimate language that can be learned not only by humans but by nonhuman primates as well, and nonhuman primates have the capability to acquire a human language using a different mode. The following…
Monney, M. (Mariette)
Finding new methods to achieve the goals of Education For All is a constant concern for primary school teachers. Multisensory methods have proven efficient in past decades. Sign Language, being a visual and kinesthetic language, could become a future educational tool to fulfill the needs of a growing diversity of learners. This ethnographic study describes how Sign Language exposure in inclusive classr...
Akmese, Pelin Pistav
Being hearing impaired limits one's ability to communicate, affecting all areas of development, particularly speech. One of the methods the hearing impaired use to communicate is sign language. This descriptive study examines the opinions of individuals who had enrolled in a sign language certification program by using…
Palmer, Jeffrey Levi; Reynolds, Wanette; Minor, Rebecca
This pilot study examines whether the increased virtual "mobility" of ASL users via videophone and video-relay services is contributing to the standardization of ASL. In addition, language attitudes are identified and suggested to be influencing the perception of correct versus incorrect standard forms. ASL users around the country have their own…
K. Crom Saunders
Social media has become a venue for social awareness and change through forum discussions and the exchange of viewpoints and information. The rate at which awareness and cultural understanding of specific issues spreads has not been quantified, but examining awareness of issues relevant to American Sign Language (ASL) and American Deaf culture indicates that progress in increasing awareness and cultural understanding via social media faces greater friction and makes less progress than issues relevant to other causes and communities, such as feminism, the lesbian, gay, bisexual, and transgender (LGBT) community, or people of color. The research included in this article examines online disinhibition, cyberbullying, and audism as they appear in the real world and online, advocacy for and against Deafness as a cultural identity, and a history of how Deaf people are represented in different forms of media, including social media. The research itself is also examined in terms of who conducts it. The few incidents of social media serving the Deaf community in a more positive manner are also examined, to provide contrast and to determine which factors may contribute to greater progress in fostering awareness of Deaf cultural issues without the seemingly constant presence of resistance and lack of empathy for the Deaf community's perspectives on ASL and Deaf culture.
Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain
The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…
Sprenger, Kristen; Mathur, Gaurav
This article focuses on the syntactic level of the grammar of Saudi Arabian Sign Language by exploring some word orders that occur in personal narratives in the language. Word order is one of the main ways in which languages indicate the main syntactic roles of subjects, verbs, and objects; others are verbal agreement and nominal case morphology.…
Many Australian Aboriginal people use a sign language ("hand talk") that mirrors their local spoken language and is used both in culturally appropriate settings when speech is taboo or contraindicated and for community communication. The characteristics of these languages are described, and early European settlers' reports of deaf…
Baus, Cristina; Gutiérrez, Eva; Carreiras, Manuel
The aim of the present study was to investigate the functional role of syllables in sign language and how the different phonological combinations influence sign production. Moreover, the influence of age of acquisition was evaluated. Deaf signers (native and non-native) of Catalan Signed Language (LSC) were asked in a picture-sign interference task to sign picture names while ignoring distractor-signs with which they shared two phonological parameters (out of three of the main sign parameters: Location, Movement, and Handshape). The results revealed a different impact of the three phonological combinations. While no effect was observed for the phonological combination Handshape-Location, the combination Handshape-Movement slowed down signing latencies, but only in the non-native group. A facilitatory effect was observed for both groups when pictures and distractors shared Location-Movement. Importantly, linguistic models have considered this phonological combination to be a privileged unit in the composition of signs, as syllables are in spoken languages. Thus, our results support the functional role of syllable units during phonological articulation in sign language production.
Newman, Aaron J; Supalla, Ted; Fernandez, Nina; Newport, Elissa L; Bavelier, Daphne
Sign languages used by deaf communities around the world possess the same structural and organizational properties as spoken languages: In particular, they are richly expressive and also tightly grammatically constrained. They therefore offer the opportunity to investigate the extent to which the neural organization for language is modality independent, as well as to identify ways in which modality influences this organization. The fact that sign languages share the visual-manual modality with a nonlinguistic symbolic communicative system-gesture-further allows us to investigate where the boundaries lie between language and symbolic communication more generally. In the present study, we had three goals: to investigate the neural processing of linguistic structure in American Sign Language (using verbs of motion classifier constructions, which may lie at the boundary between language and gesture); to determine whether we could dissociate the brain systems involved in deriving meaning from symbolic communication (including both language and gesture) from those specifically engaged by linguistically structured content (sign language); and to assess whether sign language experience influences the neural systems used for understanding nonlinguistic gesture. The results demonstrated that even sign language constructions that appear on the surface to be similar to gesture are processed within the left-lateralized frontal-temporal network used for spoken languages-supporting claims that these constructions are linguistically structured. Moreover, although nonsigners engage regions involved in human action perception to process communicative, symbolic gestures, signers instead engage parts of the language-processing network-demonstrating an influence of experience on the perception of nonlinguistic stimuli.
Gameiro, João Manuel Ferreira
Sign language is the form of communication used by Deaf people and, in most cases, has been learned since childhood. The problem arises when a non-Deaf person tries to communicate with a Deaf person, for example, when non-Deaf parents try to communicate with their Deaf child. This situation tends to happen when the parents have not had time to properly learn sign language. This dissertation proposes the teaching of sign language through the use of serious games. Currently, similar soluti...
Despite being minority languages like many others, sign languages have traditionally remained absent from the agendas of policy makers and language planning and policies. In the past two decades, though, this situation has started to change at different paces and to different degrees in several countries. In this article, the author describes the…
The translation of biblical texts into South African Sign Language. ... Native signers were used as translators with the assistance of hearing specialists in the fields of religion and translation studies. ...
... is an example of a contemporary sign language dictionary that leverages the 21st ... informed development of this bilingual, bi-directional, multimedia dictionary. ... and dealing with sociolinguistic variation in the selection and performance of ...
Hollman, Liivi; Sutrop, Urmas
The article is written in the tradition of Brent Berlin and Paul Kay's theory of basic color terms. According to this theory there is a universal inventory of eleven basic color categories from which the basic color terms of any given language are always drawn. The number of basic color terms varies from 2 to 11 and in a language having a fully…
Orfanidou, Eleni; McQueen, James M; Adam, Robert; Morgan, Gary
This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were preceded by nonsense signs which were produced in either the target location or another location (with a small or large transition). Half of the transitions were within the same major body area (e.g., head) and half were across body areas (e.g., chest to hand). Deaf adult BSL users (a group of natives and early learners, and a group of late learners) spotted target signs best when there was a minimal transition and worst when there was a large transition. When location changes were present, both groups performed better when transitions were to a different body area than when they were within the same area. These findings suggest that transitions do not provide explicit sign-boundary cues in a modality-specific fashion. Instead, we argue that smaller transitions help recognition in a modality-general way by limiting lexical search to signs within location neighbourhoods, and that transitions across body areas also aid segmentation in a modality-general way, by providing a phonotactic cue to a sign boundary. We propose that sign segmentation is based on modality-general procedures which are core language-processing mechanisms.
Caselli, Naomi K; Cohen-Goldberg, Ariel M
Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.
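The general idea of a spreading-activation lexicon with parameter-based neighborhoods can be illustrated with a toy interactive-activation step. The lexicon, parameter values, and weights below are invented for illustration; this is a sketch of the style of architecture, not Chen and Mirman's (2012) actual model.

```python
# Toy spreading-activation sketch: sub-lexical parameter units excite
# lexical nodes bottom-up, and active lexical nodes inhibit one another.
# Signs and parameter values are hypothetical.
lexicon = {
    "APPLE": {"handshape": "X", "location": "cheek"},
    "ONION": {"handshape": "X", "location": "nose"},      # shares handshape
    "CANDY": {"handshape": "1", "location": "cheek"},     # shares location
    "LAZY":  {"handshape": "L", "location": "shoulder"},  # shares nothing
}

def activate(target, steps=3, excite=0.5, inhibit=0.2):
    """Spread activation from the target sign's parameters into the lexicon."""
    act = {sign: 0.0 for sign in lexicon}
    for _ in range(steps):
        # Bottom-up support: one increment per parameter shared with the input.
        for sign, params in lexicon.items():
            shared = sum(params[p] == lexicon[target][p] for p in params)
            act[sign] += excite * shared
        # Lateral inhibition: each node is suppressed by the others' activity.
        total = sum(act.values())
        for sign in act:
            act[sign] -= inhibit * (total - act[sign])
    return act

scores = activate("APPLE")
# The target wins, but parameter-sharing neighbors remain partially active;
# competition among such neighbors is one mechanism behind density effects.
print(max(scores, key=scores.get))  # prints APPLE
```

Varying which parameter the neighbors share (handshape vs. location), or the relative weight given to each, is where a model of this style could produce the parameter-dependent density effects the abstract describes.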
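The interactive-activation idea described in the abstract above can be illustrated with a toy sketch. All node sets, parameter values, and update rules here are illustrative assumptions, not the authors' actual model: sub-lexical units excite the words that contain them, while words laterally inhibit one another.

```python
def spreading_activation(words, input_units, steps=10,
                         excite=0.2, inhibit=0.1, decay=0.1):
    """Toy spreading-activation step: each word receives bottom-up
    excitation from the sub-lexical units it contains and lateral
    inhibition proportional to the other words' activation."""
    act = {w: 0.0 for w in words}
    for _ in range(steps):
        new = {}
        for w, units in words.items():
            bottom_up = excite * sum(input_units.get(u, 0.0) for u in units)
            lateral = inhibit * sum(act[o] for o in words if o != w)
            new[w] = max(0.0, (1 - decay) * act[w] + bottom_up - lateral)
        act = new
    return act
```

With input units matching "cat", the word "cat" (sharing all three units) ends up more active than the neighbor "cab" (sharing two), showing how sub-lexical overlap drives neighborhood effects in this kind of architecture.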
Mahfudi, Isa; Sarosa, Moechammad; Andrie Asmara, Rosa; Azrino Gustalika, M.
Indonesian Sign Language (ISL) is the primary language of many deaf individuals and consists of two types of action: signs and fingerspelling. However, not all people understand sign language, which makes communication with hearing people difficult and can leave deaf signers feeling isolated from social life. A solution is needed to help them interact with hearing people. Much research offers a variety of image-processing methods for sign language recognition. The SIFT (Scale-Invariant Feature Transform) algorithm is one method that can be used to identify an object; SIFT is claimed to be highly robust to scaling, rotation, illumination, and noise. Using the SIFT algorithm for ISL number-sign recognition yielded a recognition rate of 82% on a dataset of 100 sample images (50 for training and 50 for testing). Changing the threshold value affects the recognition result; the best threshold value was 0.45, with a recognition rate of 94%.
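The matching-and-threshold step of a SIFT-style recognizer can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: it assumes Euclidean distances between descriptor vectors and a fixed match threshold, and all helper names are hypothetical.

```python
import math

def euclidean(a, b):
    """L2 distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_descriptors(query, train, threshold=0.45):
    """Count query descriptors whose nearest training descriptor
    lies within `threshold`."""
    matches = 0
    for q in query:
        best = min(euclidean(q, t) for t in train)
        if best < threshold:
            matches += 1
    return matches

def classify(query, labelled_sets, threshold=0.45):
    """Label the query image by the training sample with most matches."""
    scores = {label: match_descriptors(query, descs, threshold)
              for label, descs in labelled_sets.items()}
    return max(scores, key=scores.get)
```

Raising the threshold admits looser matches (more false positives); lowering it rejects genuine matches, which is consistent with the abstract's observation that the threshold value affects the recognition rate.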
Young, Lesa; Palmer, Jeffrey Levi; Reynolds, Wanette
This combined paper will focus on the description of two selected lexical patterns in Saudi Arabian Sign Language (SASL): metaphor and metonymy in emotion-related signs (Young) and lexicalization patterns of objects and their derivational roots (Palmer and Reynolds). The over-arching methodology used by both studies is detailed in Stephen and…
Ritchings, Tim; Khadragi, Ahmed; Saeb, Magdy
A computer-based system for sign language tutoring has been developed using a low-cost data glove and a software application that processes the movement signals for signs in real-time and uses Pattern Matching techniques to decide if a trainee has closely replicated a teacher's recorded movements. The data glove provides 17 movement signals from…
Mary Theresa Biberauer
The study of literary expression in sign languages has increased over the last twenty .... extensively to express emotion on the part of a character in the narrative. ... township in her non-manual facial expressions while signing manually what is ...
Solís, José F.; Toxqui, Carina; Padilla, Alfonso; Santiago, César
A framework for static sign language recognition using descriptors that represent 2D images as 1D data, together with artificial neural networks, is presented in this work. The 1D descriptors were computed by two methods: the first consists of a rotational correlation operator, and the second is based on contour analysis of the hand shape. One of the main problems in sign language recognition is segmentation; most papers rely on specially colored gloves or backgrounds for hand-shape analysis. In order to avoid the use of gloves or special clothing, a thermal imaging camera was used to capture images. Static signs for the digits 1 to 9 of American Sign Language were used, and a multilayer perceptron reached 100% recognition with cross-validation.
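One common way to reduce a 2D hand contour to a 1D descriptor, in the spirit of the contour analysis the abstract mentions (the specific signature used here is an assumption, not necessarily the paper's), is a centroid-distance signature:

```python
import math

def centroid_distance_signature(contour, n_samples=8):
    """Map a closed 2D contour (list of (x, y) points) to a 1D,
    scale-invariant signature: the distance from the centroid to
    n_samples evenly spaced contour points, normalised by the
    maximum distance."""
    cx = sum(p[0] for p in contour) / len(contour)
    cy = sum(p[1] for p in contour) / len(contour)
    step = len(contour) / n_samples
    dists = [math.hypot(contour[int(i * step)][0] - cx,
                        contour[int(i * step)][1] - cy)
             for i in range(n_samples)]
    peak = max(dists)
    return [d / peak for d in dists]
```

Because the signature is normalised by its peak value, scaling the contour leaves it unchanged, which makes it a convenient fixed-length input for a multilayer perceptron.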
A South African Sign Language Dictionary for Families with Young Deaf Children (SLED 2006) was used with permission ... Figure 1: Syllable structure of a CVC syllable in the word “bed”. In spoken languages .... often than not, there is a societal emphasis on 'fixing' a child's deafness and attempting to teach deaf children to ...
Attitudes are complex and little research in the field of linguistics has focused on language attitudes. This article deals with attitudes toward sign languages and those who use them--attitudes that are influenced by ideological constructions. The article reviews five categories of such constructions and discusses examples in each one.
This article discusses several aspects of language planning with respect to Sign Language of the Netherlands, or Nederlandse Gebarentaal (NGT). For nearly thirty years members of the Deaf community, the Dutch Deaf Council (Dovenschap) have been working together with researchers, several organizations in deaf education, and the organization of…
Singha, Joyeeta; Das, Karen
Sign language recognition has emerged as one of the important areas of research in computer vision. The difficulty faced by researchers is that instances of signs vary in both motion and appearance. Thus, in this paper a novel approach for recognizing various alphabets of Indian Sign Language is proposed, in which continuous video sequences of the signs are considered. The proposed system comprises three stages: preprocessing, feature extraction, and classification. The preprocessing stage includes skin filtering and histogram matching. Eigenvalues and eigenvectors were used for the feature extraction stage, and finally an eigenvalue-weighted Euclidean distance is used to recognize the sign. The system deals with bare hands, thus allowing the user to interact with it in a natural way. We considered 24 different alphabets in the video sequences and attained a success rate of 96.25%.
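The classification step can be sketched as a nearest-template rule under an eigenvalue-weighted Euclidean distance. This is a plausible reading of the abstract rather than the authors' exact formulation; the weighting scheme and function names are assumptions.

```python
import math

def weighted_distance(u, v, weights):
    """Euclidean distance with each feature dimension scaled by its
    eigenvalue, so directions of higher variance count more."""
    return math.sqrt(sum(w * (a - b) ** 2
                         for w, a, b in zip(weights, u, v)))

def nearest_sign(features, templates, eigenvalues):
    """Return the label of the stored template closest to `features`
    under the eigenvalue-weighted distance."""
    return min(templates,
               key=lambda label: weighted_distance(features,
                                                   templates[label],
                                                   eigenvalues))
```

Weighting by eigenvalues lets the distance emphasise the feature dimensions that carried the most variance in the training data, instead of treating all dimensions equally as plain Euclidean distance would.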
The article discusses word order, the syntactic arrangement of words in a sentence, clause, or phrase, as one of the most crucial aspects of the grammar of any spoken language. It aims to investigate the order of the primary constituents, which can be either subject, object, or verb, of a simple
Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D
The present study tracked activation pattern differences in response to sign language processing by late hearing second language learners of American Sign Language. Learners were scanned before the start of their language courses. They were scanned again after their first semester of instruction and their second, for a total of 10 months of instruction. The study aimed to characterize the shift from modality-specific to modality-general processing throughout the acquisition of sign language. Results indicated that before the acquisition of sign language, neural substrates related to modality-specific processing were present. After approximately 45 h of instruction, the learners transitioned into processing signs on a phonological basis (e.g., supramarginal gyrus, putamen). After one more semester of input, learners transitioned once more to a lexico-semantic processing stage (e.g., left inferior frontal gyrus) at which language control mechanisms (e.g., left caudate, cingulate gyrus) were activated. During these transitional steps, right hemispheric recruitment was observed, with increasing left-lateralization, similar to other native signers and L2 learners of spoken language; however, specialization for sign language processing, with activation in the inferior parietal lobule (i.e., angular gyrus), was observed even for late learners. As such, the present study is the first to track L2 acquisition in sign language learners in order to characterize modality-independent and modality-specific mechanisms for bilingual language processing.
A place name sign is a linguistic-cultural marker that includes both memory and landscape. The author regards toponymic signs in Estonian Sign Language as representations of images held by the Estonian Deaf community: they reflect the geographical place, the period, the relationships of the Deaf community with the hearing community, and the common and distinguishing features of the two cultures perceived by the community's members. Name signs represent an element of signlore, which includes various types of creative linguistic play. There are stories hidden behind the place name signs that reveal the etymological origin of place name signs and reflect the community's memory. The purpose of this article is twofold. Firstly, it aims to introduce Estonian place name signs as Deaf signlore forms, analyse their structure and specify the main formation methods. Secondly, it interprets place-denoting signs in the light of understanding the foundations of Estonian Sign Language, Estonian Deaf education and education history, the traditions of local Deaf communities, and also of the cultural and local traditions of the dominant hearing communities. Both the linguistic and the folkloristic perspectives are represented in the current article.
Rudner, Mary; Andin, Josefine; Rönnberg, Jerker; Heimann, Mikael; Hermansson, Anders; Nelson, Keith; Tjus, Tomas
The literacy skills of deaf children generally lag behind those of their hearing peers. The mechanisms of reading in deaf individuals are only just beginning to be unraveled but it seems that native language skills play an important role. In this study 12 deaf pupils (six in grades 1-2 and six in grades 4-6) at a Swedish state primary school for…
A sign in sign language, equivalent to a word, phrase, or sentence in an oral language, can be divided into linguistic units of lower levels: the shape of the hand, the place of articulation, the type of movement, and the orientation of the palm. The first description of these units that is still current and applicable in Bosnia and Herzegovina (B&H) was given by Zimmerman in 1986, who identified 27 shapes of the hand, while the other unit types were not systematically developed or described. The aim of this study was to determine whether other hand shapes are present in the sign language used in B&H. Through content analysis of 425 signs in the sign language of B&H, we confirmed their existence and also discovered and presented 14 new shapes of the hand. This confirms the need for detailed research, standardization, and publication of the sign language of B&H, which would provide adequate conditions for its study and application, both for the deaf and for all others who come into direct contact with them.
Rogalsky, Corianne; Raphel, Kristin; Tomkovicz, Vivian; O'Grady, Lucinda; Damasio, Hanna; Bellugi, Ursula; Hickok, Gregory
The neural basis of action understanding is a hotly debated issue. The mirror neuron account holds that motor simulation in fronto-parietal circuits is critical to action understanding, including speech comprehension, while others emphasize the ventral stream in the temporal lobe. Evidence from speech strongly supports the ventral stream account, but evidence from manual gesture comprehension (e.g., in limb apraxia) has led to contradictory findings. Here we present a lesion analysis of sign language comprehension. Sign language is an excellent model for studying mirror system function in that it bridges the gap between the visual-manual system in which mirror neurons are best characterized and language systems which have represented a theoretical target of mirror neuron research. Twenty-one lifelong deaf signers with focal cortical lesions performed two tasks: one involving the comprehension of individual signs and the other involving comprehension of signed sentences (commands). Participants' lesions, as indicated on MRI or CT scans, were mapped onto a template brain to explore the relationship between lesion location and sign comprehension measures. Single sign comprehension was not significantly affected by left hemisphere damage. Sentence sign comprehension impairments were associated with left temporal-parietal damage. We found that damage to mirror-system-related regions in the left frontal lobe was not associated with deficits on either of these comprehension tasks. We conclude that the mirror system is not critically involved in action understanding.
Parton, Becky Sue
Foreign sign language instruction is an important, but overlooked area of study. Thus the purpose of this paper was two-fold. First, the researcher sought to determine the level of knowledge and interest in foreign sign language among Deaf teenagers along with their learning preferences. Results from a survey indicated that over a third of the…
Barberà, Gemma; Zwets, Martine
In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…
Silvana Aguiar dos Santos
http://dx.doi.org/10.5007/1984-8420.2015v16n2p101 This paper is the result of an initial attempt to establish a connection between Brazil and Mozambique regarding sign language translation and interpreting. It reviews some important landmarks in language policies aimed at sign languages in these countries and discusses how certain actions directly impact political decisions related to sign language translation and interpreting. In this context, two lines of argument are developed. The first one addresses the role of sign language translation and interpreting in the Portuguese-speaking context, since Portuguese is the official language in both countries; the other offers some reflections about the Deaf movements and the movements of sign language translators and interpreters, the legal recognition of sign languages, the development of undergraduate courses and the contemporary challenges in the work of translation professionals. Finally, it is suggested that sign language translators and interpreters in both Brazil and Mozambique undertake efforts to press government bodies to invest in: (i) area-specific training for translators and interpreters, (ii) qualification of the services provided by such professionals, and (iii) development of human resources at master’s and doctoral levels in order to strengthen research on sign language translation and interpreting in the Community of Portuguese-Speaking Countries.
Holmer, Emil; Heimann, Mikael; Rudner, Mary
Imitation and language processing are closely connected. According to the Ease of Language Understanding (ELU) model (Rönnberg et al., 2013) pre-existing mental representation of lexical items facilitates language understanding. Thus, imitation of manual gestures is likely to be enhanced by experience of sign language. We tested this by eliciting imitation of manual gestures from deaf and hard-of-hearing (DHH) signing and hearing non-signing children at a similar level of language and cognitive development. We predicted that the DHH signing children would be better at imitating gestures lexicalized in their own sign language (Swedish Sign Language, SSL) than unfamiliar British Sign Language (BSL) signs, and that both groups would be better at imitating lexical signs (SSL and BSL) than non-signs. We also predicted that the hearing non-signing children would perform worse than DHH signing children with all types of gestures the first time (T1) we elicited imitation, but that the performance gap between groups would be reduced when imitation was elicited a second time (T2). Finally, we predicted that imitation performance on both occasions would be associated with linguistic skills, especially in the manual modality. A split-plot repeated measures ANOVA demonstrated that DHH signers imitated manual gestures with greater precision than non-signing children when imitation was elicited the second but not the first time. Manual gestures were easier to imitate for both groups when they were lexicalized than when they were not; but there was no difference in performance between familiar and unfamiliar gestures. For both groups, language skills at T1 predicted imitation at T2. Specifically, for DHH children, word reading skills, comprehension and phonological awareness of sign language predicted imitation at T2. For the hearing participants, language comprehension predicted imitation at T2, even after the effects of working memory capacity and motor skills were taken into account.
Corina, David P; Knapp, Heather
In this paper we review evidence for frontal and parietal lobe involvement in sign language comprehension and production, and evaluate the extent to which these data can be interpreted within the context of a mirror neuron system for human action observation and execution. We present data from three literatures--aphasia, cortical stimulation, and functional neuroimaging. Generally, we find support for the idea that sign language comprehension and production can be viewed in the context of a broadly-construed frontal-parietal human action observation/execution system. However, sign language data cannot be fully accounted for under a strict interpretation of the mirror neuron system. Additionally, we raise a number of issues concerning the lack of specificity in current accounts of the human action observation/execution system.
Baus, Cristina; Costa, Albert
This study investigates the temporal dynamics of sign production and how particular aspects of the signed modality influence the early stages of lexical access. To that end, we explored the electrophysiological correlates associated with sign frequency and iconicity in a picture-signing task in a group of bimodal bilinguals. Moreover, a subset of the same participants was tested in the same task but naming the pictures instead. Our results revealed that both frequency and iconicity influenced lexical access in sign production. At the ERP level, iconicity effects originated very early in the course of signing (while absent in the spoken modality), suggesting a stronger activation of the semantic properties for iconic signs. Moreover, frequency effects were modulated by iconicity, suggesting that lexical access in signed language is determined by the iconic properties of the signs. These results support the idea that lexical access is sensitive to the same phenomena in word and sign production, but its time-course is modulated by particular aspects of the modality in which a lexical item will finally be articulated.
Parks, Elizabeth S.
In this paper, I use a holographic metaphor to explain the identification of overlapping sign language communities in Panama. By visualizing Panama's complex signing communities as emitting community "hotspots" through social drama on multiple stages, I employ ethnographic methods to explore overlapping contours of Panama's sign language…
This paper is a thought experiment exploring the possibility of establishing universal bilingualism in Sign Languages. Focusing in the first part on historical examples of inclusive signing societies such as Martha's Vineyard, the author suggests that it is not possible to create such naturally occurring practices of Sign Bilingualism in societies…
Marshall, Chloë R; Morgan, Gary
There has long been interest in why languages are shaped the way they are, and in the relationship between sign language and gesture. In sign languages, entity classifiers are handshapes that encode how objects move, how they are located relative to one another, and how multiple objects of the same type are distributed in space. Previous studies have shown that hearing adults who are asked to use only manual gestures to describe how objects move in space will use gestures that bear some similarities to classifiers. We investigated how accurately hearing adults, who had been learning British Sign Language (BSL) for 1-3 years, produce and comprehend classifiers in (static) locative and distributive constructions. In a production task, learners of BSL knew that they could use their hands to represent objects, but they had difficulty choosing the same, conventionalized, handshapes as native signers. They were, however, highly accurate at encoding location and orientation information. Learners therefore show the same pattern found in sign-naïve gesturers. In contrast, handshape, orientation, and location were comprehended with equal (high) accuracy, and testing a group of sign-naïve adults showed that they too were able to understand classifiers with higher than chance accuracy. We conclude that adult learners of BSL bring their visuo-spatial knowledge and gestural abilities to the tasks of understanding and producing constructions that contain entity classifiers. We speculate that investigating the time course of adult sign language acquisition might shed light on how gesture became (and, indeed, becomes) conventionalized during the genesis of sign languages.
Kim, Jonghwa; Wagner, Johannes; Rehm, Matthias
In this paper, we investigate the mutually complementary functionality of an accelerometer (ACC) and electromyogram (EMG) for recognizing seven word-level sign vocabularies in German Sign Language (GSL). Results are discussed for the single channels and for feature-level fusion of the two sensor channels, including the subject-independent condition, where differences between subjects do not allow for high recognition rates. Finally, we discuss a problem of feature-level fusion caused by the high disparity between the accuracies of the two single-channel classifications.
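Feature-level fusion of the two sensor channels can be sketched as concatenation of per-channel feature vectors into a single input for one classifier. The specific window features below are illustrative assumptions, not necessarily those used in the paper.

```python
import statistics

def window_features(samples):
    """Simple per-channel features over one time window of a signal:
    mean, population standard deviation, and peak-to-peak range."""
    return [statistics.mean(samples),
            statistics.pstdev(samples),
            max(samples) - min(samples)]

def fuse(acc_window, emg_window):
    """Feature-level fusion: concatenate ACC and EMG feature vectors
    into one combined vector for a single classifier."""
    return window_features(acc_window) + window_features(emg_window)
```

The fusion problem the abstract mentions can arise here: if one channel's features are far more discriminative than the other's, the weaker channel's dimensions may add noise rather than complementary information unless the classifier can weight them appropriately.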
Jones, T; Cumberbatch, K
The introduction of the landmark mandatory teaching of sign language to undergraduate dental students at the University of the West Indies (UWI), Mona Campus in Kingston, Jamaica, to bridge the communication gap between dentists and their patients is reviewed. A review of over 90 Doctor of Dental Surgery and Doctor of Dental Medicine curricula in North America, the United Kingdom, parts of Europe and Australia showed no inclusion of sign language as a mandatory component. In Jamaica, the government's training school for dental auxiliaries served as the forerunner to the UWI's introduction of formal sign language training in 2012. Outside of the UWI, a few dental schools offer sign language courses, but none has a mandatory programme like the one at the UWI. Dentists the world over have had to rely on interpreters to sign with their deaf patients. Deaf Jamaicans, aware that dentists cannot sign, have felt insulted and visit the dentist only in emergency situations. The mandatory inclusion of sign language in the undergraduate dental curriculum at the UWI Mona Campus sought to establish a direct communication channel to formally bridge this gap. The programme, comprising two sign language courses and a direct clinical competency requirement, was developed during the second year of the first cohort of the newly introduced undergraduate dental programme through a collaborating partnership between two faculties on the Mona Campus. It was introduced in 2012, in the third year of the 5-year undergraduate dental programme. To date, two cohorts have completed the programme, and preliminary findings from an ongoing clinical study have shown a positive impact on dental care access and dental treatment for deaf patients at the UWI Mona Dental Polyclinic. The development of a direct communication channel between dental students and the deaf that has led to increased dental
Herman, Ros; Rowley, Katherine; Mason, Kathryn; Morgan, Gary
This study details the first ever investigation of narrative skills in a group of 17 deaf signing children who have been diagnosed with disorders in their British Sign Language development compared with a control group of 17 deaf child signers matched for age, gender, education, quantity, and quality of language exposure and non-verbal intelligence. Children were asked to generate a narrative based on events in a language free video. Narratives were analysed for global structure, information content and local level grammatical devices, especially verb morphology. The language-impaired group produced shorter, less structured and grammatically simpler narratives than controls, with verb morphology particularly impaired. Despite major differences in how sign and spoken languages are articulated, narrative is shown to be a reliable marker of language impairment across the modality boundaries. © 2014 Royal College of Speech and Language Therapists.
Abdel-Khalek, Ahmed; Lester, David
Samples of Kuwaiti (N=460) and American (N=273) undergraduates responded to six personality questionnaires to assess optimism, pessimism, suicidal ideation, ego-grasping, death anxiety, general anxiety, and obsessive-compulsiveness. Each participant was assigned to the astrological sign associated with date of birth. One-way analyses of variance yielded nonsignificant F ratios for all seven scales in both the Kuwaiti and American samples, except for anxiety scores among Americans. It was concluded that there was little support for an association between astrological sun signs and scores on the present personality scales.
In recent years, there has been a growing debate in the United States, Europe, and Australia about the nature of the Deaf community as a cultural community, and the recognition of signed languages as “real” or “legitimate” languages comparable in all meaningful ways to spoken languages. An important element of this ...
Knapp, Heather Patterson; Corina, David P
Language is proposed to have developed atop the human analog of the macaque mirror neuron system for action perception and production [Arbib, M.A. (2005). From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics (with commentaries and author's response). Behavioral and Brain Sciences, 28, 105-167; Arbib, M.A. (2008). From grasp to language: Embodied concepts and the challenge of abstraction. Journal de Physiologie Paris, 102, 4-20]. Signed languages of the deaf are fully expressive, natural human languages that are perceived visually and produced manually. We suggest that if a unitary mirror neuron system mediates the observation and production of both language and non-linguistic action, three predictions can be made: (1) damage to the human mirror neuron system should non-selectively disrupt both sign language and non-linguistic action processing; (2) within the domain of sign language, a given mirror neuron locus should mediate both perception and production; and (3) the action-based tuning curves of individual mirror neurons should support the highly circumscribed set of motions that form the "vocabulary of action" for signed languages. In this review we evaluate data from the sign language and mirror neuron literatures and find that these predictions are only partially upheld.
Van Staden, Annalene
This article argues for the importance of allowing deaf children to acquire sign language from an early age. It demonstrates firstly that the critical/sensitive period hypothesis for language acquisition can be applied to specific language aspects of spoken language as well as sign languages (i.e., phonology, grammatical processing, and syntax). This makes early diagnosis and early intervention of crucial importance. Moreover, research findings presented in this article demonstrate the advantage that sign language offers in the early years of a deaf child’s life by comparing the language development milestones of deaf learners exposed to sign language from birth to those of late-signers, orally trained deaf learners and hearing learners exposed to spoken language. The controversy over the best medium of instruction for deaf learners is briefly discussed, with emphasis placed on the possible value of bilingual-bicultural programmes to facilitate the development of deaf learners’ literacy skills. Finally, this paper concludes with a discussion of the implications/recommendations of sign language teaching and Deaf education in South Africa.
Mateer, C A; Rapport, R L; Kettrick, C
A normally hearing left-handed patient familiar with American Sign Language (ASL) was assessed under sodium amytal conditions and with left cortical stimulation in both oral speech and signed English. Lateralization was mixed but complementary in each language mode: right hemisphere perfusion severely disrupted motoric aspects of both types of language expression, while left hemisphere perfusion specifically disrupted features of grammatical and semantic usage in each mode of expression. Both semantic and syntactic aspects of oral and signed responses were altered during left posterior temporal-parietal stimulation. Findings are discussed in terms of the neurological organization of ASL and linguistic organization in cases of early left hemisphere damage.
Early diagnosis and intervention are now recognized as undeniable rights of deaf and hard-of-hearing children and their families. The deaf child’s family must have the opportunity to socialize with deaf children and deaf adults. The deaf child’s family must also have access to all the information on the general development of their child, and to special information on hearing impairment, communication options and linguistic development of the deaf child. The critical period hypothesis for language acquisition proposes that the outcome of language acquisition is not uniform over the lifespan but rather is best during early childhood. Individuals who learned sign language from birth performed better on linguistic and memory tasks than individuals who did not start learning sign language until after puberty. The old prejudice that the deaf child must learn the spoken language at a very young age, and that sign language can wait because it can be easily learned by any person at any age, cannot be maintained anymore. The cultural approach to deafness emphasizes three necessary components in the development of a deaf child: (1) stimulating early communication using natural sign language within the family and interaction with the Deaf community; (2) bilingual/bicultural education; and (3) ensuring deaf persons’ rights to enjoy the services of high-quality interpreters throughout their education from kindergarten to university. This new view of the phenomenology of deafness means that the environment needs to be changed in order to meet the deaf person’s needs, not the contrary.
This article explores three models of sustainability (environmental, economic, and social) and identifies characteristics of a sustainable community necessary to sustain the Deaf community as a whole. It is argued that sign language legislation is a valuable tool for achieving sustainability for the generations to come.
Tomita, Nozomi; Kozak, Viola
This paper focuses on two selected phonological patterns that appear unique to Saudi Arabian Sign Language (SASL). For both sections of this paper, the overall methodology is the same as that discussed in Stephen and Mathur (this volume), with some additional modifications tailored to the specific studies discussed here, which will be expanded…
Manrique Cordeje, M.E.
How does (mis)understanding work in conversation? Problems of understanding occur all the time in our everyday social life. How does miscommunication happen, and how do we deal with it? This thesis reports on how sign language users manage to understand each other, based on a large Conversational
Baker-Ramos, Leslie K.
The purpose of this teacher inquiry is to explore the effects of signing and gesturing on the expressive language development of non-verbal children. The first phase of my inquiry begins with the observations of several non-verbal students with various etiologies in three different educational settings. The focus of these observations is to…
Stoll, Chloé; Palluel-Germain, Richard; Caldara, Roberto; Lao, Junpeng; Dye, Matthew W. G.; Aptel, Florent; Pascalis, Olivier
Previous research has suggested that early deaf signers differ in face processing. Which aspects of face processing are changed and the role that sign language may have played in that change are however unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing…
This dissertation investigates the expression of spatial relationships in German Sign Language (Deutsche Gebärdensprache, DGS). The analysis focuses on linguistic expression in the spatial domain in two types of discourse: static scene description (location) and event narratives (location and
Hammer, A.; van den Bogaerde, B.; Cirillo, L.; Niemants, N.
We present a description of our didactic approach to train undergraduate sign language interpreters on their interpersonal and reflective skills. Based predominantly on the theory of role-space by Llewellyn-Jones and Lee (2014), we argue that dialogue settings require a dynamic role of the
Annemiek Hammer; Dr. Beppie van den Bogaerde
We present a description of our didactic approach to train undergraduate sign language interpreters on their interpersonal and reflective skills. Based pre-dominantly on the theory of role-space by Llewellyn-Jones and Lee (2014), we argue that dialogue settings require a dynamic role of the
Bosnar-Valković, Brigita; Blazević, Nevenka; Gjuran-Coha, Anamarija
The USA is spreading their political, military, economic, scientific, artistic and cultural mission throughout the world. The aim of this paper is to bring to the attention the Americanization of the Croatian language particularly evident in the newly adopted language manners, in teenage language, in specialist languages, in the field of advertising and in political correctness. The spread of Americanization of the Croatian language has both negative and positive effects. Positive effects can be regarded as enrichment of the Croatian language, whereas the negative ones endanger its deep structure. Positive effects should be supported and negative minimized through the cooperation between experts in linguistics and politics.
Zaharia, Titus; Preda, Marius; Preteux, Francoise J.
In this paper, we address the issue of sign language indexation/recognition. Existing tools, like on-line Web dictionaries or other educationally oriented applications, make exclusive use of textual annotations. However, keyword indexing schemes have strong limitations due to the ambiguity of natural language and to the huge effort needed to manually annotate a large amount of data. In order to overcome these drawbacks, we tackle the sign language indexation issue within the MPEG-7 framework and propose an approach based on linguistic properties and characteristics of sign language. The method developed introduces the concept of an over-time-stable hand configuration instantiated on natural or synthetic prototypes. The prototypes are indexed by means of a shape descriptor which is defined as a translation-, rotation- and scale-invariant Hough transform. A very compact representation is obtained by considering the Fourier transform of the Hough coefficients. This approach has been applied to two data sets consisting of 'Letters' and 'Words', respectively. The accuracy and robustness of the results are discussed and a complete sign language description schema is proposed.
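The invariance trick summarized in this abstract, taking the Fourier transform of transform-domain coefficients so that a rotation, which only circularly shifts them, leaves the magnitude unchanged, can be illustrated with a minimal numpy sketch. This is a sketch of the principle only, not the authors' MPEG-7 implementation: the angular histogram below stands in for the Hough accumulator, and all function names are illustrative.

```python
import numpy as np

def angular_histogram(points, bins=36):
    """Translation- and scale-normalised angular histogram of a 2D point set."""
    pts = points - points.mean(axis=0)            # translation invariance
    ang = np.arctan2(pts[:, 1], pts[:, 0])
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi))
    h = hist.astype(float)
    return h / max(h.sum(), 1.0)                  # scale invariance

def rotation_invariant_descriptor(hist):
    """Rotating the shape circularly shifts `hist`; the DFT magnitude is
    unchanged by circular shifts, so it is a rotation-invariant signature."""
    return np.abs(np.fft.rfft(hist))
```

Rotating the input shape by a whole number of bin widths shifts the histogram by that many positions, so the descriptor of `np.roll(h, k)` equals the descriptor of `h` for any integer `k`.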
Ortega, Gerardo; Morgan, Gary
The present study implemented a sign-repetition task at two points in time to hearing adult learners of British Sign Language and explored how each phonological parameter, sign complexity, and iconicity affected sign production over an 11-week (22-hour) instructional period. The results show that training improves articulation accuracy and that…
Holmer, Emil; Heimann, Mikael; Rudner, Mary
Strengthening the connections between sign language and written language may improve reading skills in deaf and hard-of-hearing (DHH) signing children. The main aim of the present study was to investigate whether computerized sign language-based literacy training improves reading skills in DHH signing children who are learning to read. Further,…
Meara, Rhian; Cameron, Audrey; Quinn, Gary; O'Neill, Rachel
The BSL Glossary Project, run by the Scottish Sensory Centre at the University of Edinburgh focuses on developing scientific terminology in British Sign Language for use in the primary, secondary and tertiary education of deaf and hard of hearing students within the UK. Thus far, the project has developed 850 new signs and definitions covering Chemistry, Physics, Biology, Astronomy and Mathematics. The project has also translated examinations into BSL for students across Scotland. The current phase of the project has focused on developing terminology for Geography and Geology subjects. More than 189 new signs have been developed in these subjects including weather, rivers, maps, natural hazards and Geographical Information Systems. The signs were developed by a focus group with expertise in Geography and Geology, Chemistry, Ecology, BSL Linguistics and Deaf Education all of whom are deaf fluent BSL users.
Senghas, A; Coppola, M
It has long been postulated that language is not purely learned, but arises from an interaction between environmental exposure and innate abilities. The innate component becomes more evident in rare situations in which the environment is markedly impoverished. The present study investigated the language production of a generation of deaf Nicaraguans who had not been exposed to a developed language. We examined the changing use of early linguistic structures (specifically, spatial modulations) in a sign language that has emerged since the Nicaraguan group first came together: in under two decades, sequential cohorts of learners systematized the grammar of this new sign language. We examined whether the systematicity being added to the language stems from children or adults; our results indicate that such changes originate in children aged 10 and younger. Thus, sequential cohorts of interacting young children collectively possess the capacity not only to learn, but also to create, language.
This podcast is based on the May 2017 CDC Vital Signs report. The life expectancy of African Americans has improved, but it's still an average of four years less than whites. Learn what can be done so all Americans can have the opportunity to pursue a healthy lifestyle. Created: 5/2/2017 by Centers for Disease Control and Prevention (CDC). Date Released: 5/2/2017.
MacSweeney, Mairéad; Woll, Bencie; Campbell, Ruth; Calvert, Gemma A; McGuire, Philip K; David, Anthony S; Simmons, Andrew; Brammer, Michael J
In all signed languages used by deaf people, signs are executed in "sign space" in front of the body. Some signed sentences use this space to map detailed "real-world" spatial relationships directly. Such sentences can be considered to exploit sign space "topographically." Using functional magnetic resonance imaging, we explored the extent to which increasing the topographic processing demands of signed sentences was reflected in the differential recruitment of brain regions in deaf and hearing native signers of the British Sign Language. When BSL signers performed a sentence anomaly judgement task, the occipito-temporal junction was activated bilaterally to a greater extent for topographic than nontopographic processing. The differential role of movement in the processing of the two sentence types may account for this finding. In addition, enhanced activation was observed in the left inferior and superior parietal lobules during processing of topographic BSL sentences. We argue that the left parietal lobe is specifically involved in processing the precise configuration and location of hands in space to represent objects, agents, and actions. Importantly, no differences in these regions were observed when hearing people heard and saw English translations of these sentences. Despite the high degree of similarity in the neural systems underlying signed and spoken languages, exploring the linguistic features which are unique to each of these broadens our understanding of the systems involved in language comprehension.
Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, which may serve the function of reducing the gap between linguistic form and conceptual representation, allowing the language system to hook up to motor and perceptual experience.
Batterbury, Sarah C. E.
Sign Language Peoples (SLPs) across the world have developed their own languages and visuo-gestural-tactile cultures embodying their collective sense of Deafhood (Ladd 2003). Despite this, most nation-states treat their respective SLPs as disabled individuals, favoring disability benefits, cochlear implants, and mainstream education over language…
As this passage suggests, there is extensive and growing literature, both in .... For instance, sign language mediates experience in a unique way, as of ..... entail Deaf students studying together, in a setting not unlike that provided by residential .... of ASL as a foreign language option in secondary schools and universities.
L. Leeson; Dr. Beppie van den Bogaerde; Tobias Haug; C. Rathmann
This resource establishes European standards for sign languages for professional purposes in line with the Common European Framework of Reference for Languages (CEFR) and provides an overview of assessment descriptors and approaches. Drawing on preliminary work undertaken in adapting the CEFR to
Yang, Su; Zhu, Qing
The goal of sign language recognition (SLR) is to translate sign language into text and provide a convenient communication tool between deaf and hearing people. In this paper, we formulate an appropriate model based on a convolutional neural network (CNN) combined with a Long Short-Term Memory (LSTM) network, in order to accomplish continuous recognition. With the strong ability of the CNN, information from pictures captured from Chinese sign language (CSL) videos can be learned and transformed into vectors. Since a video can be regarded as an ordered sequence of frames, an LSTM model is employed to connect with the fully-connected layer of the CNN. As a recurrent neural network (RNN), it is suitable for sequence learning tasks, with the capability of recognizing patterns defined by temporal distance. Compared with a traditional RNN, an LSTM performs better at storing and accessing information. We evaluate this method on our self-built dataset including 40 daily vocabularies. The experimental results show that the recognition method with CNN-LSTM can achieve a high recognition rate with small training sets, which will meet the needs of a real-time SLR system.
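The sequence half of such a CNN-LSTM pipeline can be sketched with a plain-numpy LSTM cell: each video frame is assumed to have already been reduced to a feature vector (the CNN's job), and one LSTM cell is run over the frame sequence to produce a single summary vector. This is a minimal illustration under those assumptions, not the paper's model; the weights are random and all names are illustrative.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: gates computed from input x and previous hidden h."""
    z = W @ x + U @ h + b                 # stacked gate pre-activations
    H = h.size
    i = 1 / (1 + np.exp(-z[:H]))          # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))       # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))     # output gate
    g = np.tanh(z[3*H:])                  # candidate cell state
    c = f * c + i * g                     # updated cell state
    h = o * np.tanh(c)                    # updated hidden state
    return h, c

def run_sequence(frame_features, H=8, seed=0):
    """frame_features: (T, D) array, e.g. CNN embeddings of T video frames."""
    rng = np.random.default_rng(seed)
    D = frame_features.shape[1]
    W = rng.normal(0, 0.1, (4 * H, D))
    U = rng.normal(0, 0.1, (4 * H, H))
    b = np.zeros(4 * H)
    h, c = np.zeros(H), np.zeros(H)
    for x in frame_features:              # scan over the frame sequence
        h, c = lstm_step(x, h, c, W, U, b)
    return h                              # final hidden state summarises the clip
```

In a full recognizer the final hidden state would feed a softmax over the sign vocabulary; here it simply demonstrates the shapes involved.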
Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan
The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.
Cardin, Velia; Orfanidou, Eleni; Kästner, Lena; Rönnberg, Jerker; Woll, Bencie; Capek, Cheryl M; Rudner, Mary
The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine if brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer RTs and stronger activations in an action observation network in all participants and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.
Pa, Judy; Wilson, Stephen M; Pickell, Herbert; Bellugi, Ursula; Hickok, Gregory
Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory-motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.
Kristoffersen, Jette Hedegaard; Boye Niemela, Janne
The Danish Sign Language dictionary project aims at creating an electronic dictionary of the basic vocabulary of Danish Sign Language. One of many issues in compiling the dictionary has been to analyse the status of mouth patterns in Danish Sign Language and, consequently, to decide at which level...
Sign language test development is a relatively new field within sign linguistics, motivated by the practical need for assessment instruments to evaluate language development in different groups of learners (L1, L2). Due to the lack of research on the structure and acquisition of many sign languages, developing an assessment instrument poses…
Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.
Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…
Napier, Jemina; Major, George; Ferrara, Lindsay; Johnston, Trevor
This paper reviews a sign language planning project conducted in Australia with deaf Auslan users. The Medical Signbank project utilised a cooperative language planning process to engage with the Deaf community and sign language interpreters to develop an online interactive resource of health-related signs, in order to address a gap in the health…
Fuentes, Mariana; Massone, Maria Ignacia; Fernandez-Viader, Maria del Pilar; Makotrinsky, Alejandro; Pulgarin, Francisca
Numeral-incorporating roots in the numeral systems of Argentine Sign Language (LSA) and Catalan Sign Language (LSC), as well as the main features of the number systems of both languages, are described and compared. Informants discussed the use of numerals and roots in both languages (in most cases in natural contexts). Ten informants took part in…
Green, Lisa J.
How do children acquire African American English? How do they develop the specific language patterns of their communities? Drawing on spontaneous speech samples and data from structured elicitation tasks, this book explains the developmental trends in the children's language. It examines topics such as the development of tense/aspect marking,…
example, between a deaf person who can sign and an able person or a person with a different disability who cannot sign). METHODOLOGY A signing avatar is set up to work together with a chatterbot. The chatterbot is a natural language dialogue interface... are then offered in sign language as the replies are interpreted by a signing avatar, a living character that can reproduce human-like gestures and expressions. To make South African Sign Language (SASL) available digitally, computational models of the language...
Stamp, Rose; Schembri, Adam; Fenlon, Jordan; Rentelis, Ramas; Woll, Bencie; Cormier, Kearsy
This paper presents results from a corpus-based study investigating lexical variation in BSL. An earlier study investigating variation in BSL numeral signs found that younger signers were using a decreasing variety of regionally distinct variants, suggesting that levelling may be taking place. Here, we report findings from a larger investigation looking at regional lexical variants for colours, countries, numbers and UK placenames elicited as part of the BSL Corpus Project. Age, school location and language background were significant predictors of lexical variation, with younger signers using a more levelled variety. This change appears to be happening faster in particular sub-groups of the deaf community (e.g., signers from hearing families). Also, we find that for the names of some UK cities, signers from outside the region use a different sign than those who live in the region. PMID:24759673
Samir Abou El-Seoud
A handheld device, such as a cellular phone or a PDA, can be used in acquiring Sign Language (SL). The developed system uses graphic applications. The user uses the graphical system to view and to acquire knowledge about sign grammar and syntax based on the local vernacular particular to the country. This paper explores and exploits the possibility of the development of a mobile system to help deaf and other people to communicate and learn using handheld devices. The pedagogical assessment of the prototype application, which uses a recognition-based interface (e.g., images and videos), gave evidence that the mobile application is memorable and learnable. Additionally, considering primacy and recency effects in the interface design will improve memorability and learnability.
Korean deaf signers performed a number comparison task on pairs of Arabic digits. In their RT profiles, the expected magnitude effect was systematically modified by properties of number signs in Korean Sign Language in a culture-specific way (not observed in hearing and deaf Germans or hearing Chinese). We conclude that finger-based quantity representations are automatically activated even in simple tasks with symbolic input, although this may be irrelevant and even detrimental for task performance. These finger-based numerical representations are accessed in addition to another, more basic quantity system, which is evidenced by the magnitude effect. In sum, these results are inconsistent with models assuming only one single amodal representation of numerical quantity.
/detail.shtml?i=41 Eberius, Wolfram (2008): Multimodale Erwiterung Und Distribution Von Digital Talking Books. Germany: Technische universität Dresden. Fédération Internationale de Football Association (2008): Laws of the Game 2008/2009. Switzerland: FIFA... are further discussed that will influence the design of future DAISY standards. 2.1 Creation of Sign Language Content To create a full-text/full-audio and full-text/full-video DAISY test-book, the original content of “Laws of the Game 2008/2009” (FIFA...
This paper shows a method of teaching written language to deaf people using sign language as the language of instruction. Written texts in the target language are combined with sign language videos which provide the users with various modes of translation (words/phrases/sentences). As examples, two EU projects for English for the Deaf are presented which feature English texts and translations into the national sign languages of all the partner countries, plus signed grammar explanations and interactive exercises. Both courses are web-based; the programs may be accessed free of charge via the respective homepages (without any download or log-in).
Despite the current need for reliable and valid test instruments in different countries in order to monitor the sign language acquisition of deaf children, very few tests are commercially available that offer strong evidence for their psychometric properties. This mirrors the current state of affairs for many sign languages, where very little…
Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of the Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
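The property highlighted in this abstract, that polynomial classifiers need no iterative training, can be illustrated with the usual formulation: expand the features with second-order polynomial terms, then solve one least-squares problem per class in closed form. This is a generic sketch of that technique, not the authors' ArSL system; the toy data and all names are illustrative.

```python
import numpy as np

def poly_expand(X):
    """Second-order polynomial expansion: [1, x_i, x_i * x_j]."""
    n, d = X.shape
    feats = [np.ones((n, 1)), X]
    for i in range(d):
        for j in range(i, d):
            feats.append((X[:, i] * X[:, j])[:, None])
    return np.hstack(feats)

def train(X, y, n_classes):
    """One least-squares weight vector per class against 0/1 targets;
    a single closed-form solve, with no iterative training."""
    P = poly_expand(X)
    T = np.eye(n_classes)[y]              # one-hot targets
    W, *_ = np.linalg.lstsq(P, T, rcond=None)
    return W

def predict(W, X):
    """Class with the largest polynomial discriminant wins."""
    return np.argmax(poly_expand(X) @ W, axis=1)
```

Because training is a single linear solve, adding classes only adds columns to the target matrix, which is the scalability advantage the abstract mentions.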
Visser-Bochane, Margot I.; Gerrits, Ellen; van der Schans, Cees P.; Reijneveld, Sijmen A.; Luinge, Margreet R.
Background: Atypical speech and language development is one of the most common developmental difficulties in young children. However, which clinical signs characterize atypical speech-language development at what age is not clear. Aim: To achieve a national and valid consensus on clinical signs and red flags (i.e. most urgent clinical signs) for…
Guimarães, Cayley; Oliveira Machado, Milton César; Fernandes, Sueli F.
Deaf people use Sign Language (SL) for intellectual development, communications and other human activities that are mediated by language--such as the expression of complex and abstract thoughts and feelings; and for literature, culture and knowledge. The Brazilian Sign Language (Libras) is a complete linguistic system of visual-spatial manner,…
Lin, Christina Mien-Chun; Gerner de Garcia, Barbara; Chen-Pichler, Deborah
There are over 100 languages in China, including Chinese Sign Language. Given the large population and geographical dispersion of the country's deaf community, sign variation is to be expected. Language barriers due to lexical variation may exist for deaf college students in China, who often live outside their home regions. In presenting an…
Triyono, L.; Pratisto, E. H.; Bawono, S. A. T.; Purnomo, F. A.; Yudhanto, Y.; Raharjo, B.
This research focuses on the development of an OpenCV-based Android sign language translator application that relies on color differences; support vector machine learning is also used to predict the label. Results of the research showed that the fingertip-coordinate search method can recognize gestures made with the hand open, while gestures made with the hand clenched are recognized using the Hu Moments value method. The fingertip method is more resilient in gesture recognition, with a higher success rate of 95% under distance variations of 35 cm and 55 cm, light-intensity variations of approximately 90 lux and 100 lux, and a plain green background, compared with the Hu Moments method, which reached 40% under the same parameters. Against outdoor backgrounds the application still cannot be used, with only 6 trials succeeding and the rest failing.
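Hu moments, used here for the clenched-hand case, are combinations of normalised central moments of a silhouette that do not change under translation, scaling, or rotation. A minimal numpy sketch of the first invariant (eta20 + eta02) follows; it illustrates the standard definition, not the app's OpenCV code, and the toy mask is illustrative.

```python
import numpy as np

def hu_first(img):
    """First Hu moment invariant, eta20 + eta02, of a 2D intensity mask."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()                        # zeroth moment (mass/area)
    xbar = (xs * img).sum() / m00          # centroid -> translation invariance
    ybar = (ys * img).sum() / m00
    mu20 = (((xs - xbar) ** 2) * img).sum()   # central moments
    mu02 = (((ys - ybar) ** 2) * img).sum()
    # normalised central moments: eta_pq = mu_pq / m00**(1 + (p+q)/2)
    return (mu20 + mu02) / m00 ** 2        # sum is also rotation-invariant
```

In practice OpenCV exposes the full set of seven invariants via `cv2.HuMoments(cv2.moments(mask))`; the sketch above only unpacks the first one.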
Debevc, Matjaž; Milošević, Danijela; Kožuh, Ines
One important theme in captioning is whether the implementation of captions in individual sign language interpreter videos can positively affect viewers' comprehension when compared with sign language interpreter videos without captions. In our study, an experiment was conducted using four video clips with information about everyday events. Fifty-one deaf and hard of hearing sign language users alternately watched the sign language interpreter videos with, and without, captions. Afterwards, they answered ten questions. The results showed that the presence of captions positively affected their rates of comprehension, which increased by 24% among deaf viewers and 42% among hard of hearing viewers. The most obvious differences in comprehension between watching sign language interpreter videos with and without captions were found for the subjects of hiking and culture, where comprehension was higher when captions were used. The results led to suggestions for the consistent use of captions in sign language interpreter videos in various media.
Øhre, Beate; Saltnes, Hege; von Tetzchner, Stephen; Falkum, Erik
Background There is a need for psychiatric assessment instruments that enable reliable diagnoses in persons with hearing loss who have sign language as their primary language. The objective of this study was to assess the validity of the Norwegian Sign Language (NSL) version of the Mini International Neuropsychiatric Interview (MINI). Methods The MINI was translated into NSL. Forty-one signing patients consecutively referred to two specialised psychiatric units were assessed with a diagnos...
Witko, Joanne; Boyles, Pauline; Smiler, Kirsten; McKee, Rachel
The research described was undertaken as part of a Sub-Regional Disability Strategy 2017-2022 across the Wairarapa, Hutt Valley and Capital and Coast District Health Boards (DHBs). The aim was to investigate deaf New Zealand Sign Language (NZSL) users' quality of access to health services. Findings have formed the basis for developing a 'NZSL plan' for DHBs in the Wellington sub-region. Qualitative data was collected from 56 deaf participants and family members about their experiences of healthcare services via focus group, individual interviews and online survey, which were thematically analysed. Contextual perspective was gained from 57 healthcare professionals at five meetings. Two professionals were interviewed, and 65 staff responded to an online survey. A deaf steering group co-designed the framework and methods, and validated findings. Key issues reported across the health system include: inconsistent interpreter provision; lack of informed consent for treatment via communication in NZSL; limited access to general health information in NZSL and the reduced ability of deaf patients to understand and comply with treatment options. This problematic communication with NZSL users echoes international evidence and other documented local evidence for patients with limited English proficiency. Deaf NZSL users face multiple barriers to equitable healthcare, stemming from linguistic and educational factors and inaccessible service delivery. These need to be addressed through policy and training for healthcare personnel that enable effective systemic responses to NZSL users. Deaf participants emphasise that recognition of their identity as members of a language community is central to improving their healthcare experiences.
Yusa, Noriaki; Kim, Jungho; Koizumi, Masatoshi; Sugiura, Motoaki; Kawashima, Ryuta
Children naturally acquire a language in social contexts where they interact with their caregivers. Indeed, research shows that social interaction facilitates lexical and phonological development at the early stages of child language acquisition. It is not clear, however, whether the relationship between social interaction and learning applies to adult second language acquisition of syntactic rules. Does learning second language syntactic rules through social interactions with a native speaker or without such interactions impact behavior and the brain? The current study aims to answer this question. Adult Japanese participants learned a new foreign language, Japanese sign language (JSL), either through a native deaf signer or via DVDs. Neural correlates of acquiring new linguistic knowledge were investigated using functional magnetic resonance imaging (fMRI). The participants in each group were indistinguishable in terms of their behavioral data after the instruction. The fMRI data, however, revealed significant differences in the neural activities between two groups. Significant activations in the left inferior frontal gyrus (IFG) were found for the participants who learned JSL through interactions with the native signer. In contrast, no cortical activation change in the left IFG was found for the group who experienced the same visual input for the same duration via the DVD presentation. Given that the left IFG is involved in the syntactic processing of language, spoken or signed, learning through social interactions resulted in an fMRI signature typical of native speakers: activation of the left IFG. Thus, broadly speaking, availability of communicative interaction is necessary for second language acquisition and this results in observed changes in the brain.
Yang, Ruiduo; Sarkar, Sudeep; Loeding, Barbara
We consider two crucial problems in continuous sign language recognition from unaided video sequences. At the sentence level, we consider the movement epenthesis (me) problem and at the feature level, we consider the problem of hand segmentation and grouping. We construct a framework that can handle both of these problems based on an enhanced, nested version of the dynamic programming approach. To address movement epenthesis, a dynamic programming (DP) process employs a virtual me option that does not need explicit models. We call this the enhanced level building (eLB) algorithm. This formulation also allows the incorporation of grammar models. Nested within this eLB is another DP that handles the problem of selecting among multiple hand candidates. We demonstrate our ideas on four American Sign Language data sets with simple background, with the signer wearing short sleeves, with complex background, and across signers. We compared the performance with Conditional Random Fields (CRF) and Latent Dynamic-CRF-based approaches. The experiments show more than 40 percent improvement over CRF or LDCRF approaches in terms of the frame labeling rate. We show the flexibility of our approach when handling a changing context. We also find a 70 percent improvement in sign recognition rate over the unenhanced DP matching algorithm that does not accommodate the me effect.
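The virtual movement-epenthesis option can be illustrated with a toy level-building dynamic program. The sketch below is not the authors' eLB algorithm: it collapses signs to one-dimensional prototype "models" and gives the virtual ME label a flat per-frame cost, purely to show how a DP can absorb inter-sign transition frames without an explicit epenthesis model. All names and costs are illustrative assumptions.

```python
def level_building(frames, models, me_cost=0.6):
    """Toy level-building DP: segment a 1-D frame sequence into sign labels.
    A virtual 'ME' label with a flat per-frame cost absorbs movement-epenthesis
    frames between signs, so no explicit epenthesis model is needed."""
    n = len(frames)
    best = [(0.0, [])] + [(float('inf'), []) for _ in range(n)]
    labels = list(models) + ['ME']
    for j in range(1, n + 1):                      # end of current segment
        for i in range(j):                         # start of current segment
            for lab in labels:
                if lab == 'ME':
                    cost = me_cost * (j - i)       # flat cost per absorbed frame
                else:                              # distance of frames to the sign prototype
                    cost = sum(abs(f - models[lab]) for f in frames[i:j])
                cand = best[i][0] + cost
                if cand < best[j][0]:
                    best[j] = (cand, best[i][1] + [lab])
    return [lab for lab in best[n][1] if lab != 'ME']   # drop the virtual ME segments
```

On a toy sequence with a transition frame between two "signs", the ME label absorbs the transition: `level_building([0, 0, 1, 2, 2], {'A': 0, 'B': 2})` yields `['A', 'B']`.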
Humphries, Tom; Kushalnagar, Poorna; Mathur, Gaurav; Napoli, Donna Jo; Padden, Carol; Rathmann, Christian; Smith, Scott
There is no evidence that learning a natural human language is cognitively harmful to children. To the contrary, multilingualism has been argued to be beneficial to all. Nevertheless, many professionals advise the parents of deaf children that their children should not learn a sign language during their early years, despite strong evidence across many research disciplines that sign languages are natural human languages. Their recommendations are based on a combination of misperceptions about (1) the difficulty of learning a sign language, (2) the effects of bilingualism, and particularly bimodalism, (3) the bona fide status of languages that lack a written form, (4) the effects of a sign language on acquiring literacy, (5) the ability of technologies to address the needs of deaf children and (6) the effects that use of a sign language will have on family cohesion. We expose these misperceptions as based in prejudice and urge institutions involved in educating professionals concerned with the healthcare, raising and educating of deaf children to include appropriate information about first language acquisition and the importance of a sign language for deaf children. We further urge such professionals to advise the parents of deaf children properly, which means to strongly advise the introduction of a sign language as soon as hearing loss is detected. Published by the BMJ Publishing Group Limited.
Maarif, H. A.; Akmeliawati, R.; Gunawan, T. S.; Shafie, A. A.
A sign language synthesizer visualizes sign language movements generated from spoken language. Sign language (SL) is one of the means used by hearing- and speech-impaired (HSI) people to communicate with hearing people, but unfortunately the number of people, including HSI people, who are familiar with sign language is very limited. This makes communication between hearing people and HSI people difficult. Sign language consists not only of hand movements but also of facial expressions; the two elements complement each other. The hand movement conveys the meaning of each sign, while the facial expression conveys the signer's emotion. Generally, a sign language synthesizer recognizes the spoken language using speech recognition, handles the grammatical processing with a context-free grammar, and renders the result with a 3D synthesizer using a recorded avatar. This paper analyzes and compares existing techniques for developing a sign language synthesizer, leading to the IIUM Sign Language Synthesizer.
Elakkiya, R.; Selvamani, K.
Subunit segmentation and modelling in medical sign language is an important topic in linguistic-oriented and vision-based Sign Language Recognition (SLR). Many previous efforts focused on identifying functional subunits from the perspective of linguistic syllables, but extracting subunits from syllables is not feasible with real-world computer vision techniques. Moreover, present recognition systems are designed to detect signer-dependent actions only under restricted laboratory conditions. This research paper aims at solving these two important issues: (1) subunit extraction and (2) signer-independent action in visual sign language recognition. Subunit extraction involves the sequential and parallel breakdown of sign gestures without any prior knowledge of syllables or the number of subunits. A novel Bayesian Parallel Hidden Markov Model (BPaHMM) is introduced for subunit extraction, combining the features of manual and non-manual parameters to yield better results in classification and recognition of signs. Signer-independent operation aims at using a single web camera across different signer behaviour patterns and for cross-signer validation. Experimental results show that the proposed signer-independent subunit-level modelling for sign language classification and recognition improves on other existing work.
Sign language is a visual language used by deaf people. One difficulty of sign language recognition is that sign instances vary in both motion and shape in three-dimensional (3D) space. In this research, we use 3D depth information from hand motions, generated from Microsoft's Kinect sensor, and apply a hierarchical conditional random field (CRF) that recognizes hand signs from the hand motions. The proposed method uses a hierarchical CRF to detect candidate segments of signs using hand motions, and then a BoostMap embedding method to verify the hand shapes of the segmented signs. Experiments demonstrated that the proposed method could recognize signs from signed sentence data at a rate of 90.4%.
Lu, Aitao; Yu, Yanping; Niu, Jiaxin; Zhang, John X
The present study was carried out to investigate whether sign language structure plays a role in the processing of complex words (i.e., derivational and compound words), in particular, the delay of complex word reading in deaf adolescents. Chinese deaf adolescents were found to respond faster to derivational words than to compound words for one-sign-structure words, but showed comparable performance for two-sign-structure words. For both derivational and compound words, response latencies to one-sign-structure words were shorter than to two-sign-structure words. These results provide strong evidence that the structure of sign language affects written word processing in Chinese. Additionally, differences between derivational and compound words in the one-sign-structure condition indicate that Chinese deaf adolescents acquire print morphological awareness. The results also showed that delayed word reading was found in derivational words with two signs (DW-2), compound words with one sign (CW-1), and compound words with two signs (CW-2), but not in derivational words with one sign (DW-1), with the delay being maximum in DW-2, medium in CW-2, and minimum in CW-1, suggesting that the structure of sign language has an impact on the delayed processing of Chinese written words in deaf adolescents. These results provide insight into the mechanisms about how sign language structure affects written word processing and its delayed processing relative to their hearing peers of the same age.
Ben Jmaa, Ahmed; Mahdi, Walid; Ben Jemaa, Yousra; Ben Hamadou, Abdelmajid
We present in this paper a new approach to Arabic sign language (ArSL) alphabet recognition using hand gesture analysis. The analysis consists of extracting histogram of oriented gradients (HOG) features from a hand image and using them to train an SVM model, which is then used to recognize the ArSL alphabet in real time from hand gestures captured by a Microsoft Kinect camera. Our approach involves three steps: (i) hand detection and localization using a Microsoft Kinect camera, (ii) hand segmentation, and (iii) feature extraction and Arabic alphabet recognition. For each input image obtained from the depth sensor, we apply a method based on hand anatomy to segment the hand and eliminate erroneous pixels. This approach is invariant to scale, rotation, and translation of the hand. Experimental results show the effectiveness of the new approach: the proposed ArSL system recognizes the ArSL alphabet with an accuracy of 90.12%.
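As a rough illustration of the HOG step, the following NumPy sketch computes a miniature HOG descriptor (per-cell histograms of gradient orientations). This is an illustrative simplification, not the paper's pipeline; a real system would typically use `skimage.feature.hog` with block normalization and feed the resulting vectors to an SVM.

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Miniature HOG: per-cell histograms of gradient orientations,
    weighted by gradient magnitude, L2-normalized over the whole image."""
    gy, gx = np.gradient(img.astype(float))      # row- and column-direction gradients
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned orientation in [0, 180)
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            hist, _ = np.histogram(ang[i:i + cell, j:j + cell],
                                   bins=bins, range=(0, 180),
                                   weights=mag[i:i + cell, j:j + cell])
            feats.append(hist)
    v = np.concatenate(feats)
    n = np.linalg.norm(v)
    return v / n if n else v
```

A 16×16 image yields four 8×8 cells and a 36-dimensional descriptor; an SVM (e.g. `sklearn.svm.SVC`) would then be trained on such vectors, one class per ArSL letter.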
… and services. One such mechanism is by embedding animated Sign Language in Web pages. This paper analyses the effectiveness and appropriateness of using this approach by embedding South African Sign Language in the South African National Accessibility Portal...
Costello, B.; Fernández, J.; Landa, A.; Müller de Quadros, R.
This paper examines the concept of a native language user and looks at the different definitions of native signer within the field of sign language research. A description of the deaf signing population in the Basque Country shows that the figure of 5-10% typically cited for deaf individuals born
Slovene Sign Language (SZJ) has as yet received little attention from linguists. This article presents some basic facts about SZJ, its history, current status, and a description of the Slovene Sign Language Corpus and Pilot Grammar (SIGNOR) project, which compiled and annotated a representative corpus of SZJ. Finally, selected quantitative data…
Vinson, David; Perniss, Pamela; Fox, Neil; Vigliocco, Gabriella
Previous studies show that reading sentences about actions leads to specific motor activity associated with actually performing those actions. We investigate how sign language input may modulate motor activation, using British Sign Language (BSL) sentences, some of which explicitly encode direction of motion, versus written English, where motion…
Haug, Tobias; Herman, Rosalind; Woll, Bencie
This paper presents the features of an online test framework for a receptive skills test that has been adapted, based on a British template, into different sign languages. The online test includes features that meet the needs of the different sign language versions. Features such as usability of the test, automatic saving of scores, and score…
Thompson, Robin L.; Vinson, David P.; Vigliocco, Gabriella
Signed languages exploit the visual/gestural modality to create iconic expression across a wide range of basic conceptual structures in which the phonetic resources of the language are built up into an analogue of a mental image (Taub, 2001). Previously, we demonstrated a processing advantage when iconic properties of signs were made salient in a…
Strickland, Brent; Geraci, Carlo; Chemla, Emmanuel; Schlenker, Philippe; Kelepir, Meltem; Pfau, Roland
According to a theoretical tradition dating back to Aristotle, verbs can be classified into two broad categories. Telic verbs (e.g., "decide," "sell," "die") encode a logical endpoint, whereas atelic verbs (e.g., "think," "negotiate," "run") do not, and the denoted event could therefore logically continue indefinitely. Here we show that sign languages encode telicity in a seemingly universal way and moreover that even nonsigners lacking any prior experience with sign language understand these encodings. In experiments 1-5, nonsigning English speakers accurately distinguished between telic (e.g., "decide") and atelic (e.g., "think") signs from (the historically unrelated) Italian Sign Language, Sign Language of the Netherlands, and Turkish Sign Language. These results were not due to participants' inferring that the sign merely imitated the action in question. In experiment 6, we used pseudosigns to show that the presence of a salient visual boundary at the end of a gesture was sufficient to elicit telic interpretations, whereas repeated movement without salient boundaries elicited atelic interpretations. Experiments 7-10 confirmed that these visual cues were used by all of the sign languages studied here. Together, these results suggest that signers and nonsigners share universally accessible notions of telicity as well as universally accessible "mapping biases" between telicity and visual form.
Do Amaral, Wanessa Machado; de Martino, José Mario
Accessibility is a growing concern in computer science. Since virtual information is mostly presented visually, it may seem that access for deaf people is not an issue. However, for prelingually deaf individuals, those who have been deaf since before acquiring and formally learning a language, written information is often less accessible than information presented in sign. Moreover, for this community signing is the language of choice, and reading text in a spoken language is akin to using a foreign language. Sign language uses gestures and facial expressions and is widely used by deaf communities. Enabling efficient production of signed content in virtual environments requires written records of signs. Transcription systems have been developed to describe sign languages in written form, but these systems have limitations: since they were not originally designed with computer animation in mind, recognizing and reproducing signs in these systems is generally easy only for those who know the system deeply. The aim of this work is to develop a transcription system for providing signed content in virtual environments. To animate a virtual avatar, a transcription system must encode sufficiently explicit information, such as movement speed, sign concatenation, the sequence of holds and movements, and facial expressions, so that articulation approaches reality. Although many important studies of sign languages have been published, the transcription problem remains a challenge. A notation to describe, store, and play signed content in virtual environments thus offers a multidisciplinary study and research tool, which may help linguistic studies understand the structure and grammar of sign languages.
Halim, Zahid; Abbas, Ghulam
Sign language provides hearing- and speech-impaired individuals with an interface to communicate with other members of society. Unfortunately, sign language is not understood by most people. A gadget based on image processing and pattern recognition can therefore provide a vital aid for detecting and translating sign language into a vocal language. This work presents a system for detecting and understanding sign language gestures with a custom-built software tool and then translating each gesture into a vocal language. To recognize a particular gesture, the system employs a Dynamic Time Warping (DTW) algorithm, and an off-the-shelf software tool is employed for vocal language generation. Microsoft® Kinect is the primary tool used to capture the video stream of a user. The proposed method successfully detects gestures stored in the dictionary with an accuracy of 91%. The proposed system also has the ability to define and add custom-made gestures. In an experiment in which 10 individuals with impairments used the system to communicate with 5 people with no disability, 87% agreed that the system was useful.
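The DTW matching step can be sketched as follows. This is a generic textbook DTW over 1-D feature sequences with an illustrative nearest-template classifier on top, not the authors' implementation; a real system would compare multi-dimensional Kinect joint trajectories.

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences."""
    inf = float('inf')
    # D[i][j] = cost of the best alignment of a[:i] with b[:j]
    D = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: insertion, deletion, match
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[-1][-1]

def classify(query, templates):
    """Return the label of the dictionary template closest to the query under DTW."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))
```

Because DTW warps the time axis, a gesture performed slightly slower than its stored template (e.g. a repeated frame) still matches with near-zero cost, which is exactly why it suits variable-speed signing.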
Hänel-Faulhaber, Barbara; Skotara, Nils; Kügow, Monique; Salden, Uta; Bottari, Davide; Röder, Brigitte
The present study investigated the neural correlates of sign language processing of Deaf people who had learned German Sign Language (Deutsche Gebärdensprache, DGS) from their Deaf parents as their first language. Correct and incorrect signed sentences were presented sign by sign on a computer screen. At the end of each sentence the participants had to judge whether or not the sentence was an appropriate DGS sentence. Two types of violations were introduced: (1) semantically incorrect sentences containing a selectional restriction violation (implausible object); (2) morphosyntactically incorrect sentences containing a verb that was incorrectly inflected (i.e., incorrect direction of movement). Event-related brain potentials (ERPs) were recorded from 74 scalp electrodes. Semantic violations (implausible signs) elicited an N400 effect followed by a positivity. Sentences with a morphosyntactic violation (verb agreement violation) elicited a negativity followed by a broad centro-parietal positivity. ERP correlates of semantic and morphosyntactic aspects of DGS clearly differed from each other and showed a number of similarities with those observed in other signed and oral languages. These data suggest a similar functional organization of signed and oral languages despite the visuo-spatial modality of sign language.
Fuentes, Mariana; Tolchinsky, Liliana
Linguistic descriptions of sign languages are important to the recognition of their linguistic status. These languages are an essential part of the cultural heritage of the communities that create and use them and vital in the education of deaf children. They are also the reference point in language acquisition studies. Ours is exploratory…
Martino, Juan; Velasquez, Carlos; Vázquez-Bourgon, Javier; de Lucas, Enrique Marco; Gomez, Elsa
Modern sign languages used by deaf people are fully expressive, natural human languages that are perceived visually and produced manually. The literature contains little data concerning human brain organization in conditions of deficient sensory information such as deafness. A deaf-mute patient underwent surgery of a left temporoinsular low-grade glioma. The patient underwent awake surgery with intraoperative electrical stimulation mapping, allowing direct study of the cortical and subcortical organization of sign language. We found a similar distribution of language sites to what has been reported in mapping studies of patients with oral language, including 1) speech perception areas inducing anomias and alexias close to the auditory cortex (at the posterior portion of the superior temporal gyrus and supramarginal gyrus); 2) speech production areas inducing speech arrest (anarthria) at the ventral premotor cortex, close to the lip motor area and away from the hand motor area; and 3) subcortical stimulation-induced semantic paraphasias at the inferior fronto-occipital fasciculus at the temporal isthmus. The intraoperative setup for sign language mapping with intraoperative electrical stimulation in deaf-mute patients is similar to the setup described in patients with oral language. To elucidate the type of language errors, a sign language interpreter in close interaction with the neuropsychologist is necessary. Sign language is perceived visually and produced manually; however, this case revealed a cross-modal recruitment of auditory and orofacial motor areas.
Memiş, Abbas; Albayrak, Songül
This paper presents a sign language recognition system that uses spatio-temporal features on RGB video images and depth maps for dynamic gestures of Turkish Sign Language (TSL). The proposed system uses motion differences and an accumulation approach for temporal gesture analysis. The motion accumulation method, an effective method for temporal-domain analysis of gestures, produces an accumulated motion image by combining the differences of successive video frames. A 2D Discrete Cosine Transform (DCT) is then applied to the accumulated motion images, transforming the temporal-domain features into the spatial domain. These processes are performed on RGB images and depth maps separately. The DCT coefficients that represent sign gestures are picked up via zigzag scanning, and feature vectors are generated. To recognize sign gestures, a K-Nearest Neighbor classifier with Manhattan distance is used. Performance of the proposed sign language recognition system is evaluated on a sign database containing 1002 isolated dynamic signs belonging to 111 TSL words in three different categories. The proposed system achieves promising success rates.
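The accumulate-then-transform idea can be sketched in NumPy. The following is a schematic reconstruction, not the authors' code: the orthonormal DCT and a simple anti-diagonal zigzag scan are standard building blocks, while the frame sizes and coefficient counts here are illustrative; the K-NN stage with Manhattan distance would operate on the resulting vectors.

```python
import numpy as np

def accumulate_motion(frames):
    """Accumulated motion image: sum of absolute differences of successive frames."""
    return sum(np.abs(b - a) for a, b in zip(frames, frames[1:]))

def dct2(x):
    """Orthonormal 2-D DCT-II via the 1-D DCT basis matrix (square images)."""
    n = x.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)          # DC row scaling for orthonormality
    return C @ x @ C.T

def zigzag(coeffs, count):
    """Scan coefficients by anti-diagonal (low frequencies first), keep `count`."""
    h, w = coeffs.shape
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda p: (p[0] + p[1], p[1] if (p[0] + p[1]) % 2 else p[0]))
    return np.array([coeffs[i, j] for i, j in order[:count]])
```

Low-index zigzag coefficients capture the coarse spatial layout of accumulated motion, so a short feature vector suffices for nearest-neighbor classification.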
Ortega, G.; Morgan, G.
There is growing interest in learners' cognitive capacities to process a second language (L2) at first exposure to the target language. Evidence suggests that L2 learners are capable of processing novel words by exploiting phonological information from their first language (L1). Hearing adult
Kanazawa, Yuji; Nakamura, Kimihiro; Ishii, Toru; Aso, Toshihiko; Yamazaki, Hiroshi; Omori, Koichi
Sign language is an essential medium for everyday social interaction for deaf people and plays a critical role in verbal learning. In particular, language development in those people should heavily rely on the verbal short-term memory (STM) via sign language. Most previous studies compared neural activations during signed language processing in deaf signers and those during spoken language processing in hearing speakers. For sign language users, it thus remains unclear how visuospatial inputs are converted into the verbal STM operating in the left-hemisphere language network. Using functional magnetic resonance imaging, the present study investigated neural activation while bilinguals of spoken and signed language were engaged in a sequence memory span task. On each trial, participants viewed a nonsense syllable sequence presented either as written letters or as fingerspelling (4-7 syllables in length) and then held the syllable sequence for 12 s. Behavioral analysis revealed that participants relied on phonological memory while holding verbal information regardless of the type of input modality. At the neural level, this maintenance stage broadly activated the left-hemisphere language network, including the inferior frontal gyrus, supplementary motor area, superior temporal gyrus and inferior parietal lobule, for both letter and fingerspelling conditions. Interestingly, while most participants reported that they relied on phonological memory during maintenance, direct comparisons between letters and fingers revealed strikingly different patterns of neural activation during the same period. Namely, the effortful maintenance of fingerspelling inputs relative to letter inputs activated the left superior parietal lobule and dorsal premotor area, i.e., brain regions known to play a role in visuomotor analysis of hand/arm movements. These findings suggest that the dorsal visuomotor neural system subserves verbal learning via sign language by relaying gestural inputs to
This article presents a language experience and self-assessment of proficiency questionnaire for hearing teachers who use Brazilian Sign Language and Portuguese in their teaching practice. By focusing on hearing teachers who work in Deaf education contexts, this questionnaire is presented as a tool that may complement the assessment of the linguistic skills of hearing teachers. The proposal takes into account factors important in bilingualism studies, such as knowing the participant's family, professional, and social background (KAUFMANN, 2010). This work uses as models the following questionnaires: LEAP-Q (MARIAN; BLUMENFELD; KAUSHANSKAYA, 2007), SLSCO – Sign Language Skills Classroom Observation (REEVES et al., 2000) and the Language Attitude Questionnaire (KAUFMANN, 2010), taking into consideration the different kinds of exposure to Brazilian Sign Language. The questionnaire is designed for bilingual bimodal hearing teachers who work in bilingual schools for the Deaf or in specialized educational departments that assist deaf students.
Hosemann, Jana; Herrmann, Annika; Steinbach, Markus; Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias
Models of language processing in the human brain often emphasize the prediction of upcoming input-for example in order to explain the rapidity of language understanding. However, the precise mechanisms of prediction are still poorly understood. Forward models, which draw upon the language production system to set up expectations during comprehension, provide a promising approach in this regard. Here, we present an event-related potential (ERP) study on German Sign Language (DGS) which tested the hypotheses of a forward model perspective on prediction. Sign languages involve relatively long transition phases between one sign and the next, which should be anticipated as part of a forward model-based prediction even though they are semantically empty. Native speakers of DGS watched videos of naturally signed DGS sentences which either ended with an expected or a (semantically) unexpected sign. Unexpected signs engendered a biphasic N400-late positivity pattern. Crucially, N400 onset preceded critical sign onset and was thus clearly elicited by properties of the transition phase. The comprehension system thereby clearly anticipated modality-specific information about the realization of the predicted semantic item. These results provide strong converging support for the application of forward models in language comprehension.
Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David
Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity
Werngren-Elgström, Monica; Brandt, Ase; Iwarsson, Susanne
The purpose of this study was to describe the everyday activities and social contacts among older deaf sign language users, and to investigate relationships between these phenomena and the health and well-being within this group. The study population comprised deaf sign language users, 65 years or older, in Sweden. Data collection was based on interviews in sign language, including open-ended questions covering everyday activities and social contacts as well as self-rated instruments measuring aspects of health and subjective well-being. The results demonstrated that the group of participants … aspects of health and subjective well-being and the frequency of social contacts with family/relatives or visiting the deaf club and meeting friends. It is concluded that the variety of activities at the deaf clubs is important for the subjective well-being of older deaf sign language users. Further…
Zatloukal, Petr; Bernas, Martin; Dvořák, Lukáš
Sign language on television provides deaf viewers with information that they cannot get from the audio content. If the sign language interpreter is transmitted over an independent data stream, the aim is to ensure sufficient intelligibility and subjective image quality of the interpreter at a minimum bit rate. This work deals with ROI-based video compression of a Czech sign language interpreter, implemented in the x264 open source library. The results of this approach are verified in subjective tests with deaf viewers, which examine the intelligibility of sign language expressions containing minimal pairs at different levels of compression and various resolutions of the image with the interpreter, and evaluate the subjective quality of the final image for a good viewing experience.
Haug, T.; Bontempo, K.; Leeson, L.; Napier, J.; Nicodemus, B.; Van den Bogaerde, B.; Vermeerbergen, M.
In this paper, we report interview data from 14 Deaf leaders across seven countries (Australia, Belgium, Ireland, the Netherlands, Switzerland, the United Kingdom, and the United States) regarding their perspectives on signed language interpreters. Using a semistructured survey questionnaire, seven
, particularly sign language users, in HIV-prevention programmes. Keywords: communication, disability, disability studies, hearing impairment, qualitative research, scoping study. African Journal of AIDS Research 2010, 9(3): 307–313 ...
of a digital medium and an existing body of descriptive research on the language, ... ing lexemes and word class in a polysynthetic language, deriving usage ..... higher education, white-collar occupations, the arts, media, and political advo- cacy. ..... and Niemalä explain that if mouth patterns are treated as a formational ele-.
Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł
In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system, called SJM (system językowo-migowy), preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we compare, using fMRI, the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs, compared to either SJM or PJM without CCs, recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged the left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.
The article presents an introductory analysis of a research topic relevant to the Latvian deaf community: the development of a Latvian Sign Language recognition system. More specifically, data preprocessing methods are discussed and several approaches are shown, with a focus on systems based on artificial neural networks, which are among the most successful solutions for the sign language recognition task.
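The kind of preprocessing such systems apply before the neural network can be sketched as follows: crop the hand region to its bounding box, resize to a fixed grid, and scale intensities to [0, 1]. This is a generic illustration, not the paper's pipeline; the 8x8 target size and the zero threshold are assumptions.

```python
# Minimal preprocessing sketch: bounding-box crop, nearest-neighbour resize,
# and intensity normalisation, producing a fixed-length feature vector.

def preprocess(image, size=8, threshold=0):
    """image: 2D list of grayscale values. Returns a flat feature vector."""
    # bounding box of pixels above threshold
    rows = [r for r, row in enumerate(image) if any(v > threshold for v in row)]
    cols = [c for c in range(len(image[0]))
            if any(row[c] > threshold for row in image)]
    r0, r1 = rows[0], rows[-1] + 1
    c0, c1 = cols[0], cols[-1] + 1
    crop = [row[c0:c1] for row in image[r0:r1]]
    h, w = len(crop), len(crop[0])
    peak = max(max(row) for row in crop)
    # nearest-neighbour resize to size x size, normalised to [0, 1]
    return [crop[r * h // size][c * w // size] / peak
            for r in range(size) for c in range(size)]

vec = preprocess([[0, 0, 0],
                  [0, 9, 3],
                  [0, 3, 9]])  # 64-element vector, values in [0, 1]
```

The fixed-length output is what makes a fully connected network applicable regardless of the original hand position and image size.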
In other words, language ideologies associated with languages with … matters such as ethnic identity, power and prestige, solidarity, distance and social … To provide for variation, the American informants also vary significantly in age (from 32 …
Colwell, Cynthia; Memmott, Jenny; Meeker-Miller, Anne
The purpose of this study was to determine the efficacy of using music and/or sign language to promote early communication in infants and toddlers (6-20 months) and to enhance parent-child interactions. Three groups used for this study were pairs of participants (care-giver(s) and child) assigned to each group: 1) Music Alone 2) Sign Language…
Atkinson, J.; Marshall, J.; Woll, B.; Thacker, A.
Recent imaging (e.g., MacSweeney et al., 2002) and lesion (Hickok, Love-Geffen, & Klima, 2002) studies suggest that sign language comprehension depends primarily on left hemisphere structures. However, this may not be true of all aspects of comprehension. For example, there is evidence that the processing of topographic space in sign may be…
Perniss, Pamela; Özyürek, Asli; Morgan, Gary
For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems. Copyright © 2015 Cognitive Science Society, Inc.
Rong, Xue Lan, Ed.; Endo, Russell, Ed.
Asian American Education--Asian American Identities, Racial Issues, and Languages presents groundbreaking research that critically challenges the invisibility, stereotyping, and common misunderstandings of Asian Americans by disrupting "customary" discourse and disputing "familiar" knowledge. The chapters in this anthology…
van den Bogaerde, B.; de Lange, R.; Nicodemus, B.; Metzger, M.
In healthcare, the accuracy of interpretation is the most critical component of safe and effective communication between providers and patients in medical settings characterized by language and cultural barriers. Although medical education should prepare healthcare providers for common issues they
De Meulder, Maartje
This article describes and analyses the pathway to the British Sign Language (Scotland) Bill and the strategies used to reach it. Data collection has been done by means of interviews with key players, analysis of official documents, and participant observation. The article discusses the bill in relation to the Gaelic Language (Scotland) Act 2005…
Khokhlova A. Yu.
The article provides an overview of foreign psychological publications concerning sign language as a means of communication in deaf people. It addresses the question of sign language's impact on cognitive development, on efficient and positive interaction with parents, and on increases in academic achievement in deaf children.
Hermans, D.; Knoors, H.E.T.; Verhoeven, L.T.W.
In this article, we will describe the development of an assessment instrument for Sign Language of the Netherlands (SLN) for deaf children in bilingual education programs. The assessment instrument consists of nine computerized tests in which the receptive and expressive language skills of deaf
Corina, David P.; Lawyer, Laurel A.; Cates, Deborah
Studies of deaf individuals who are users of signed languages have provided profound insight into the neural representation of human language. Case studies of deaf signers who have incurred left- and right-hemisphere damage have shown that left-hemisphere resources are a necessary component of sign language processing. These data suggest that, despite frank differences in the input and output modality of language, core left perisylvian regions universally serve linguistic function. Neuroimaging studies of deaf signers have generally provided support for this claim. However, more fine-tuned studies of linguistic processing in deaf signers are beginning to show evidence of important differences in the representation of signed and spoken languages. In this paper, we provide a critical review of this literature and present compelling evidence for language-specific cortical representations in deaf signers. These data lend support to the claim that the neural representation of language may show substantive cross-linguistic differences. We discuss the theoretical implications of these findings with respect to an emerging understanding of the neurobiology of language. PMID:23293624
The aim of this thesis is to describe Web accessibility in the state administration of the Federal Republic of Germany in relation to the socio-demographic group of deaf sign language users who did not have the opportunity to gain proper knowledge of the written form of the German language. The Deaf community's demand for information in an accessible form, as grounded in legal documents, is presented in relation to the theory of translation, along with how translating from written texts into sign language works in practice…
Grove, Nicola; Woll, Bencie
Manual signing is one of the most widely used approaches to support the communication and language skills of children and adults who have intellectual or developmental disabilities, and problems with communication in spoken language. A recent series of papers reporting findings from this population raises critical issues for professionals in the assessment of multimodal language skills of key word signers. Approaches to assessment will differ depending on whether key word signing (KWS) is viewed as discrete from, or related to, natural sign languages. Two available assessments from these different perspectives are compared. Procedures appropriate to the assessment of sign language production are recommended as a valuable addition to the clinician's toolkit. Sign and speech need to be viewed as multimodal, complementary communicative endeavours, rather than as polarities. Whilst narrative has been shown to be a fruitful context for eliciting language samples, assessments for adult users should be designed to suit the strengths, needs and values of adult signers with intellectual disabilities, using materials that are compatible with their life course stage rather than those designed for young children. Copyright © 2017 Elsevier Ltd. All rights reserved.
Languages are composed of a conventionalized system of parts that allows speakers and signers to compose an infinite number of form-meaning mappings through phonological and morphological combinations. This level of linguistic organization distinguishes language from other communicative acts such as gestures. In contrast to signs, gestures are made up of meaning units that are mostly holistic. Children exposed to signed and spoken languages from early in life develop grammatical structure at similar rates and in similar patterns. This is interesting, because signed languages are perceived and articulated in very different ways from their spoken counterparts, with many signs displaying surface resemblances to gestures. The acquisition of forms and meanings in child signers and talkers might thus have been a different process. Yet in one sense both groups face a similar problem: 'how do I make a language with combinatorial structure?' In this paper I argue that first language development itself enables this to happen, and by broadly similar mechanisms across modalities. Combinatorial structure is the outcome of phonological simplification and of productivity in using verb morphology by children in sign and speech.
Mean Foong, Oi; Low, Tang Jung; La, Wai Wan
The process of learning and understanding sign language may be cumbersome to some, so this paper proposes a solution to this problem by providing a voice (English language) to sign language translation system using speech and image processing techniques. Speech processing, which includes speech recognition, is the study of recognizing the words being spoken, regardless of who the speaker is. This project uses template-based recognition as the main approach, in which the V2S system first needs to be trained with speech patterns based on some generic spectral parameter set. These spectral parameter sets are then stored as templates in a database. The system performs the recognition process by matching the parameter set of the input speech against the stored templates, and finally displays the sign language in video format. Empirical results show that the system has an 80.3% recognition rate.
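Template-based recognition of this kind is classically scored with dynamic time warping (DTW), which aligns an utterance to each stored template despite differences in speaking rate. The sketch below assumes 1-D feature sequences for brevity (real systems compare spectral vectors per frame); the word templates are made-up data.

```python
# DTW-based template matching: the recognised word is the template with the
# smallest cumulative alignment cost against the input feature sequence.

def dtw_distance(a, b):
    """Smallest cumulative |a[i]-b[j]| cost aligning sequence a to b."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

def recognise(utterance, templates):
    """templates: {word: feature sequence}. Returns the best-matching word."""
    return min(templates, key=lambda w: dtw_distance(utterance, templates[w]))

templates = {"hello": [1, 3, 5, 3], "bye": [5, 4, 1, 0]}
word = recognise([1, 1, 3, 5, 5, 3], templates)  # a time-stretched "hello"
```

Once the best-matching word is identified, the system would look up and play the corresponding sign language video.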
Kocab, Annemarie; Pyers, Jennie; Senghas, Ann
Even the simplest narratives combine multiple strands of information, integrating different characters and their actions by expressing multiple perspectives of events. We examined the emergence of referential shift devices, which indicate changes among these perspectives, in Nicaraguan Sign Language (NSL). Sign languages, like spoken languages, mark referential shift grammatically with a shift in deictic perspective. In addition, sign languages can mark the shift with a point or a movement of the body to a specified spatial location in the three-dimensional space in front of the signer, capitalizing on the spatial affordances of the manual modality. We asked whether the use of space to mark referential shift emerges early in a new sign language by comparing the first two age cohorts of deaf signers of NSL. Eight first-cohort signers and 10 second-cohort signers watched video vignettes and described them in NSL. Narratives were coded for lexical (use of words) and spatial (use of signing space) devices. Although the cohorts did not differ significantly in the number of perspectives represented, second-cohort signers used referential shift devices to explicitly mark a shift in perspective in more of their narratives. Furthermore, while there was no significant difference between cohorts in the use of non-spatial, lexical devices, there was a difference in spatial devices, with second-cohort signers using them in significantly more of their narratives. This suggests that spatial devices have only recently increased as systematic markers of referential shift. Spatial referential shift devices may have emerged more slowly because they depend on the establishment of fundamental spatial conventions in the language. While the modality of sign languages can ultimately engender the syntactic use of three-dimensional space, we propose that a language must first develop systematic spatial distinctions before harnessing space for grammatical functions.
Janke, Vikki; Marshall, Chloë R
An ongoing issue of interest in second language research concerns what transfers from a speaker's first language to their second. For learners of a sign language, gesture is a potential substrate for transfer. Our study provides a novel test of gestural production by eliciting silent gesture from novices in a controlled environment. We focus on spatial relationships, which in sign languages are represented in a very iconic way using the hands, and which one might therefore predict to be easy for adult learners to acquire. However, a previous study by Marshall and Morgan (2015) revealed that this was only partly the case: in a task that required them to express the relative locations of objects, hearing adult learners of British Sign Language (BSL) could represent objects' locations and orientations correctly, but had difficulty selecting the correct handshapes to represent the objects themselves. If hearing adults are indeed drawing upon their gestural resources when learning sign languages, then their difficulties may have stemmed from their having in manual gesture only a limited repertoire of handshapes to draw upon, or, alternatively, from having too broad a repertoire. If the first hypothesis is correct, the challenge for learners is to extend their handshape repertoire, but if the second is correct, the challenge is instead to narrow down to the handshapes appropriate for that particular sign language. 30 sign-naïve hearing adults were tested on Marshall and Morgan's task. All used some handshapes that were different from those used by native BSL signers and learners, and the set of handshapes used by the group as a whole was larger than that employed by native signers and learners. Our findings suggest that a key challenge when learning to express locative relations might be reducing from a very large set of gestural resources, rather than supplementing a restricted one, in order to converge on the conventionalized classifier system that forms part of the
Clark, M. Diane; Hauser, Peter C.; Miller, Paul; Kargin, Tevhide; Rathmann, Christian; Guldenoglu, Birkan; Kubus, Okan; Spurgeon, Erin; Israel, Erica
Researchers have used various theories to explain deaf individuals' reading skills, including the dual route reading theory, the orthographic depth theory, and the early language access theory. This study tested 4 groups of children--hearing with dyslexia, hearing without dyslexia, deaf early signers, and deaf late signers (N = 857)--from 4…
Nielsen, Diane Corcoran; Luetke, Barbara; McLean, Meigan; Stryker, Deborah
Research suggests that English-language proficiency is critical if students who are deaf or hard of hearing (D/HH) are to read as their hearing peers. One explanation for the traditionally reported reading achievement plateau when students are D/HH is the inability to hear insalient English morphology. Signing Exact English can provide visual access to these features. The authors investigated the English morphological and syntactic abilities and reading achievement of elementary and middle school students at a school using simultaneously spoken and signed Standard American English facilitated by intentional listening, speech, and language strategies. A developmental trend (and no plateau) in language and reading achievement was detected; most participants demonstrated average or above-average English. Morphological awareness was prerequisite to high test scores; speech was not significantly correlated with achievement; language proficiency, measured by the Clinical Evaluation of Language Fundamentals-4 (Semel, Wiig, & Secord, 2003), predicted reading achievement.
Fikes, Robert Jr.
A large number of black scholars have pursued advanced degrees in the German language, history, and culture. Describes the history of African American interest in the German language and culture, highlighting various black scholars who have studied German over the years. Presents data on African Americans in German graduate programs and examines…
Stellenbosch Papers in Linguistics, Vol 30 (1996).
Hall, Wyatte C
A long-standing belief is that sign language interferes with spoken language development in deaf children, despite a chronic lack of evidence supporting this belief. This deserves discussion as poor life outcomes continue to be seen in the deaf population. This commentary synthesizes research outcomes with signing and non-signing children and highlights fully accessible language as a protective factor for healthy development. Brain changes associated with language deprivation may be misrepresented as sign language interfering with spoken language outcomes of cochlear implants. This may lead to professionals and organizations advocating for preventing sign language exposure before implantation and spreading misinformation. The existence of a time-sensitive language acquisition window means a strong possibility of permanent brain changes when spoken language is not fully accessible to the deaf child and sign language exposure is delayed, as is often standard practice. There is no empirical evidence for the harm of sign language exposure, but there is some evidence for its benefits, and there is growing evidence that lack of language access has negative implications. These include cognitive delays, mental health difficulties, lower quality of life, higher trauma, and limited health literacy. Claims of cochlear implant- and spoken language-only approaches being more effective than sign language-inclusive approaches are not empirically supported. Cochlear implants are an unreliable standalone first-language intervention for deaf children. Priorities of deaf child development should focus on healthy growth of all developmental domains through a fully accessible first-language foundation such as sign language, rather than on auditory deprivation and speech skills.
A. A. Karpov
We present a conceptual model, architecture, and software of a multimodal system for audio-visual speech and sign language synthesis from input text. The main components of the developed multimodal synthesis system (signing avatar) are: an automatic text processor for input text analysis; a simulation 3D model of a human head; a computer text-to-speech synthesizer; a system for audio-visual speech synthesis; a simulation 3D model of human hands and upper body; and a multimodal user interface integrating all the components for generation of audio, visual, and signed speech. The proposed system performs automatic translation of input textual information into speech (audio information) and gestures (video information), fuses the information, and outputs it in the form of multimedia. A user can input any grammatically correct text in Russian or Czech; the text processor analyzes it to detect sentences, words, and characters. This textual information is then converted into symbols of a sign language notation. We apply the international Hamburg Notation System (HamNoSys), which describes the main differential features of each manual sign: hand shape, hand orientation, place, and type of movement. On this basis the 3D signing avatar displays the elements of the sign language. The virtual 3D model of the human head and upper body was created using the VRML virtual reality modeling language, and it is controlled by software based on the OpenGL graphics library. The developed multimodal synthesis system is universal, since it is oriented toward both regular users and disabled people (in particular, the hard-of-hearing and visually impaired), and it serves for multimedia output (by audio and visual modalities) of input textual information.
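The front end of such a pipeline (tokenize text, look up each word's manual-sign features, fall back to fingerspelling for unknown words) can be sketched as follows. The lexicon entries are placeholders, not real HamNoSys notation strings.

```python
# Toy sketch of a text-to-sign front end: the lexicon maps words to the
# manual-sign features HamNoSys encodes (hand shape, orientation, movement).
# All entries here are hypothetical.

LEXICON = {
    "hello": {"handshape": "flat", "orientation": "palm-out", "movement": "arc"},
    "world": {"handshape": "fist", "orientation": "palm-down", "movement": "circle"},
}

def text_to_sign_script(text):
    """Tokenise text and look up each word's manual-sign features.

    Unknown words fall back to fingerspelling, letter by letter.
    """
    script = []
    for word in text.lower().split():
        entry = LEXICON.get(word)
        if entry is None:
            entry = {"fingerspell": list(word)}
        script.append((word, entry))
    return script

script = text_to_sign_script("Hello there")
```

In the described system, the resulting per-word feature records would drive the 3D avatar's hand and body animation, while the same text feeds the audio-visual speech synthesizer in parallel.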
Liu, Lanfang; Yan, Xin; Liu, Jin; Xia, Mingrui; Lu, Chunming; Emmorey, Karen; Chu, Mingyuan; Ding, Guosheng
Signed languages are natural human languages using the visual-motor modality. Previous neuroimaging studies based on univariate activation analysis show that a widely overlapping cortical network is recruited regardless of whether the sign language is comprehended (for signers) or not (for non-signers). Here we move beyond previous studies by examining whether the functional connectivity profiles and the underlying organizational structure of the overlapping neural network differ between signers and non-signers when watching sign language. Using graph theoretical analysis (GTA) and fMRI, we compared the large-scale functional network organization of hearing signers and non-signers during the observation of sentences in Chinese Sign Language. We found that signed sentences elicited highly similar cortical activations in the two groups of participants, with slightly larger responses within the left frontal and left temporal gyrus in signers than in non-signers. Crucially, further GTA revealed substantial group differences in the topologies of this activation network. Globally, the network engaged by signers showed higher local efficiency (t(24) = 2.379, p = 0.026), small-worldness (t(24) = 2.604, p = 0.016) and modularity (t(24) = 3.513, p = 0.002), and exhibited a different modular structure, compared to the network engaged by non-signers. Locally, the left ventral pars opercularis served as a network hub in the signer group but not in the non-signer group. These findings suggest that, despite overlap in cortical activation, the neural substrates underlying sign language comprehension are distinguishable at the network level from those for the processing of gestural action. Copyright © 2017 Elsevier B.V. All rights reserved.
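The graph-theoretic measures behind such comparisons are easy to compute on a toy graph. As a hedged illustration (not the authors' pipeline), the sketch below computes two ingredients of small-worldness, the mean local clustering coefficient and the characteristic path length, for an undirected graph given as an adjacency dictionary.

```python
# Two basic graph metrics used in small-world analyses, computed from
# an adjacency dict {node: set of neighbours}.
from collections import deque

def clustering(adj):
    """Mean local clustering coefficient over all nodes."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # degree-0/1 nodes contribute 0
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def char_path_length(adj):
    """Mean shortest-path length over all connected node pairs (via BFS)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

# toy 4-node graph: a triangle (0-1-2) plus a pendant node (3)
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
```

A small-world network combines high clustering with short path lengths; small-worldness indices compare both quantities against random-graph baselines.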
Östling, Robert; Börstell, Carl; Courtaux, Servane
We use automatic processing of 120,000 sign videos in 31 different sign languages to show a cross-linguistic pattern for two types of iconic form–meaning relationships in the visual modality. First, we demonstrate that the degree of inherent plurality of concepts, based on individual ratings by non-signers, strongly correlates with the number of hands used in the sign forms encoding the same concepts across sign languages. Second, we show that certain concepts are iconically articulated around specific parts of the body, as predicted by the associational intuitions by non-signers. The implications of our results are both theoretical and methodological. With regard to theoretical implications, we corroborate previous research by demonstrating and quantifying, using a much larger material than previously available, the iconic nature of languages in the visual modality. As for the methodological implications, we show how automatic methods are, in fact, useful for performing large-scale analysis of sign language data, to a high level of accuracy, as indicated by our manual error analysis.
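The first result is a correlation between non-signers' plurality ratings of concepts and the number of hands used in the corresponding signs across languages. A Pearson correlation over (rating, hand-count) pairs captures the idea; the numbers below are made-up illustrative data, not the paper's.

```python
# Pearson correlation between concept plurality ratings and mean number of
# hands per sign, computed from scratch on hypothetical data.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

plurality_rating = [1.2, 1.5, 4.1, 4.6, 2.0]  # hypothetical non-signer ratings
mean_hands = [1.0, 1.1, 1.8, 1.9, 1.2]        # hypothetical hands-per-sign means
r = pearson(plurality_rating, mean_hands)     # strong positive correlation
```

A value of r near +1 would mirror the reported pattern: the more inherently plural a concept, the more likely its sign is two-handed across languages.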
Tomasuolo, Elena; Valeri, Giovanni; Di Renzo, Alessio; Pasqualetti, Patrizio; Volterra, Virginia
The present study examined whether full access to sign language as a medium for instruction could influence performance in Theory of Mind (ToM) tasks. Three groups of Italian participants (age range: 6-14 years) participated in the study: Two groups of deaf signing children and one group of hearing-speaking children. The two groups of deaf children differed only in their school environment: One group attended a school with a teaching assistant (TA; Sign Language is offered only by the TA to a single deaf child), and the other group attended a bilingual program (Italian Sign Language and Italian). Linguistic abilities and understanding of false belief were assessed using similar materials and procedures in spoken Italian with hearing children and in Italian Sign Language with deaf children. Deaf children attending the bilingual school performed significantly better than deaf children attending school with the TA in tasks assessing lexical comprehension and ToM, whereas the performance of hearing children was in between that of the two deaf groups. As for lexical production, deaf children attending the bilingual school performed significantly better than the two other groups. No significant differences were found between early and late signers or between children with deaf and hearing parents.
Vargas, Lorena P; Barba, Leiner; Torres, C O; Mattos, L
This work presents an image pattern recognition system using a neural network for the identification of sign language for deaf people. The system stores several images that show specific symbols in this language, which are used to train a multilayer neural network with a backpropagation algorithm. Initially, the images are processed to adapt them and to improve the discriminating performance of the network; this process includes filtering, reduction, and noise elimination algorithms as well as edge detection. The system is evaluated using signs whose representation does not include movement.
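The abstract mentions edge detection among the preprocessing steps without naming an operator; a Sobel filter is a common choice, so the sketch below uses one as an assumption, on plain 2D lists for self-containment.

```python
# Sobel gradient magnitude: a standard edge-detection step that highlights
# hand contours before features are fed to a neural network.

SX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
SY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient kernel

def sobel_magnitude(img):
    """Gradient magnitude of a 2D grayscale list; border pixels left at 0."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# a vertical intensity step produces a strong vertical edge response
edges = sobel_magnitude([[0, 0, 10, 10]] * 4)
```

Edge maps like this reduce sensitivity to absolute brightness, which helps a classifier discriminate hand shapes rather than lighting.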
Luiz Daniel Rodrigues Dinarte
This article aims, drawing on sign language translation research and engaging with contemporary theories of "deconstruction" (DERRIDA, 2004; DERRIDA & ROUDINESCO, 2004; ARROJO, 1993), to reflect on some aspects of the definition of the role and duties of translators and interpreters. We conceive that deconstruction does not consist in a method to be applied to linguistic and social phenomena, but in a set of political strategies arising from a speech community that translates texts, and that thereby performs, in the translational task, an act of reading that inserts sign language into academic linguistic multiplicity.
Øhre, Beate; Saltnes, Hege; von Tetzchner, Stephen; Falkum, Erik
There is a need for psychiatric assessment instruments that enable reliable diagnoses in persons with hearing loss who have sign language as their primary language. The objective of this study was to assess the validity of the Norwegian Sign Language (NSL) version of the Mini International Neuropsychiatric Interview (MINI). The MINI was translated into NSL. Forty-one signing patients consecutively referred to two specialised psychiatric units were assessed with a diagnostic interview by clinical experts and with the MINI. Inter-rater reliability was assessed with Cohen's kappa and "observed agreement". There was 65% agreement between MINI diagnoses and clinical expert diagnoses. Kappa values indicated fair to moderate agreement, and observed agreement was above 76% for all diagnoses. The MINI diagnosed more co-morbid conditions than did the clinical expert interview (mean diagnoses: 1.9 versus 1.2). Kappa values indicated moderate to substantial agreement, and "observed agreement" was above 88%. The NSL version performs similarly to other MINI versions and demonstrates adequate reliability and validity as a diagnostic instrument for assessing mental disorders in persons who have sign language as their primary and preferred language. PMID:24886297
Silvia Teresinha Frizzarini
There is little research offering deeper reflection on the study of algebra with deaf students. In order to validate and disseminate educational activities in that context, this article aims to highlight the prior knowledge of deaf students fluent in Brazilian Sign Language regarding the algebraic language used in high school. The theoretical framework used was Duval's theory, with analysis of the changes, by treatment and conversion, between different registers of semiotic representation, in particular for inequalities. The methodology was a diagnostic evaluation carried out with deaf students, all fluent in Brazilian Sign Language, at a special school located in the north of Paraná State. We emphasize the need to work in both directions of conversion, in different languages, especially when the starting register is the graphic one. The conclusion reached was that algebraic representation should not be separated from other registers, since sign language must perform not only the function of communication but also the functions of objectification and treatment, which are fundamental to cognitive development.
Solís-V., J.-Francisco; Toxqui-Quitl, Carina; Martínez-Martínez, David; H.-G., Margarita
This work presents a framework designed for Mexican Sign Language (MSL) recognition. A data set of 24 static signs from MSL was recorded, each in 5 different versions; this MSL dataset was captured with a digital camera under incoherent lighting conditions. Digital image processing was used to segment the hand gestures; a uniform background was selected to avoid the use of gloved hands or special markers. Feature extraction was performed by calculating normalized geometric moments of grayscale signs; an artificial neural network then performs the recognition, evaluated with 10-fold cross-validation in Weka. The best result achieved a 95.83% recognition rate.
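The feature-extraction step can be sketched as follows: normalized central moments of a grayscale image, which are invariant to translation and scale. This is a generic moments computation, not the authors' exact feature set; the tiny test image is illustrative only.

```python
# Normalized central (geometric) moments eta_pq of a grayscale image,
# computed from raw moments and the intensity centroid.

def raw_moment(img, p, q):
    """Raw moment m_pq = sum over pixels of x^p * y^q * intensity."""
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(img) for x, v in enumerate(row))

def normalized_moments(img, orders=((2, 0), (0, 2), (1, 1))):
    """Return {(p, q): eta_pq} for the requested orders."""
    m00 = raw_moment(img, 0, 0)
    cx = raw_moment(img, 1, 0) / m00  # intensity centroid
    cy = raw_moment(img, 0, 1) / m00
    etas = {}
    for p, q in orders:
        mu = sum(((x - cx) ** p) * ((y - cy) ** q) * v
                 for y, row in enumerate(img) for x, v in enumerate(row))
        etas[(p, q)] = mu / (m00 ** (1 + (p + q) / 2.0))  # scale normalisation
    return etas

etas = normalized_moments([[1, 1],
                           [1, 1]])  # symmetric image: eta_11 vanishes
```

Vectors of such moments (or Hu's invariant combinations of them) make compact, position- and size-independent inputs for the neural network classifier.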
Skachkova Irina Ivanovna
The article continues a series of studies of the theoretical aspects of language policy in a multinational state, using the U.S. as an example. The study of language policy in highly developed countries can contribute considerably to solving the language and national problems of states that began democratic transformation only recently. Some politicians and scientists are again raising the question of recognizing English as the official language, despite the fact that English is the official language de facto and this status is not threatened. Using statistical methods and the analysis of collected data and documentary sources, the author examines the classification of statements by U.S. researchers on the need for a state language policy in the U.S., the history of debates and legal disputes over language policy and the state language, and different points of view on why the founding fathers did not secure the official status of English in the Constitution. The author also discusses the differences between assimilationist and multicultural models of the state. In conclusion, the author notes that minority groups are now realizing the value of their languages and making great efforts to preserve them. The status of the English language is currently not threatened, so the desire of many scientists and politicians to legalize the official status of English is most likely due to the affirmation of English as a national symbol.
Fenlon, Jordan; Schembri, Adam; Rentelis, Ramas; Cormier, Kearsy
This paper investigates phonological variation in British Sign Language (BSL) signs produced with a ‘1’ hand configuration in citation form. Multivariate analyses of 2084 tokens reveal that handshape variation in these signs is constrained by linguistic factors (e.g., the preceding and following phonological environment, grammatical category, indexicality, lexical frequency). The only significant social factor was region. For the subset of signs where orientation was also investigated, only grammatical function was important (the surrounding phonological environment and social factors were not significant). The implications for an understanding of pointing signs in signed languages are discussed. PMID: 23805018
Lu, Jenny; Jones, Anna; Morgan, Gary
There is debate about how input variation influences child language. Most deaf children are exposed to a sign language from their non-fluent hearing parents and experience a delay in exposure to accessible language. A small number of children receive language input from their deaf parents who are fluent signers. Thus it is possible to document the…
Linguistic ideologies that are left unquestioned and unexplored, especially as reflected and produced in marginalized language communities, can contribute to inequality made real in decisions about languages and the people who use them. One of the primary bodies of knowledge guiding international language policy is the International Organization…
Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.
Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…
Tiago Hermano Breunig
When inquiring into the sign “?”, Flusser postulates that meaning is “one of the main problems of present-day thought.” Starting from this sign, Flusser differentiates meaning from sense, which he defines as “what it means”. Thus the problem of meaning converges with the problem of thought itself, since, according to Flusser, all thought starts from a tautology, i.e., from what “means nothing”. If the understanding of meaning implicates the musical aspects of language, as the sign “?” does, then music, according to Flusser, falls “into the same abyss of tautology” as it goes beyond the limits of language. Flusser believes that the discussion of the limits of language contributes to the problem of the meaning of music, and confesses that among all existential signs the “?” is the one that best articulates the situation in which we find ourselves. It is in this sense, in this “Stimmung”, as Flusser says about the meaning of the sign “?”, that this paper aims to reflect, starting from the problem of meaning, on the relationship between music and the poetry contemporary with Flusser.
Dean, Robyn K.; Pollard, Robert Q., Jr.
This article uses the framework of demand-control theory to examine the occupation of sign language interpreting. It discusses the environmental, interpersonal, and intrapersonal demands that impinge on the interpreter's decision latitude and notes the prevalence of cumulative trauma disorders, turnover, and burnout in the interpreting profession.…
Mpuang, Kerileng D.; Mukhopadhyay, Sourav; Malatsi, Nelly
This descriptive phenomenological study investigates teachers' experiences of using sign language for learners who are deaf in the primary schools in Botswana. Eight in-service teachers who have had more than ten years of teaching deaf or hard of hearing (DHH) learners were purposively selected for this study. Data were collected using multiple…
Fang, Yuxing; Chen, Quanjing; Lingnau, Angelika; Han, Zaizhu; Bi, Yanchao
The observation of other people's actions recruits a network of areas including the inferior frontal gyrus (IFG), the inferior parietal lobule (IPL), and posterior middle temporal gyrus (pMTG). These regions have been shown to be activated through both visual and auditory inputs. Intriguingly, previous studies found no engagement of IFG and IPL for deaf participants during non-linguistic action observation, leading to the proposal that auditory experience or sign language usage might shape the functionality of these areas. To understand which variables induce plastic changes in areas recruited during the processing of other people's actions, we examined the effects of tasks (action understanding and passive viewing) and effectors (arm actions vs. leg actions), as well as sign language experience in a group of 12 congenitally deaf signers and 13 hearing participants. In Experiment 1, we found a stronger activation during an action recognition task in comparison to a low-level visual control task in IFG, IPL and pMTG in both deaf signers and hearing individuals, but no effect of auditory or sign language experience. In Experiment 2, we replicated the results of the first experiment using a passive viewing task. Together, our results provide robust evidence demonstrating that the response obtained in IFG, IPL, and pMTG during action recognition and passive viewing is not affected by auditory or sign language experience, adding further support for the supra-modal nature of these regions.
Mohd Hanafi Mohd Yasin
This research concerns the readiness of typical students to communicate using sign language in a Hearing Impairment Integration Programme. Sixty typical students from a Special Education Integration Programme at a secondary school in Malacca were chosen as respondents. The instrument was a questionnaire consisting of four parts: student demography (Part A), student knowledge (Part B), student ability to communicate (Part C), and student interest in communicating (Part D). The questionnaire was adapted from Asnul Dahar and Rabiah's study 'The Readiness of Students in Following Vocational Subjects at Jerantut District, Rural Secondary School in Pahang'. Descriptive analysis was used to analyze the data, and mean scores were used to determine the level of respondents' perception of each question. The findings showed a positive attitude of typical students towards communication using sign language. Typical students appeared interested in communicating using sign language and were willing to attend a sign language class if offered.
MacKinnon, Gregory; Soutar, Iris
The Jamaican Association for the Deaf, in their responsibilities to oversee education for individuals who are deaf in Jamaica, has demonstrated an urgent need for a dictionary that assists students, educators, and parents with the practical use of "Jamaican Sign Language." While paper versions of a preliminary resource have been explored…
This study compared the effects of Picture Exchange Communication System (PECS) and sign language training on the acquisition of mands (requests for preferred items) of students with autism. The study also examined the differential effects of each modality on students' acquisition of vocal behavior. Participants were two elementary school students…
Karpouzis, K.; Caridakis, G.; Fotinea, S.-E.; Efthimiou, E.
In this paper, we present how creation and dynamic synthesis of linguistic resources of Greek Sign Language (GSL) may serve to support development and provide content to an educational multitask platform for the teaching of GSL in early elementary school classes. The presented system utilizes standard virtual character (VC) animation technologies…
Dalawis, Rando C.; Olayao, Kenneth Deniel R.; Ramos, Evan Geoffrey I.; Samonte, Mary Jane C.
A different approach to sign language recognition of static and dynamic hand movements was developed in this study using a normalized correlation algorithm. The goal of this research was to translate fingerspelling sign language into text using MATLAB and a Microsoft Kinect. Digital input images captured by the Kinect device are matched against template samples stored in a database. This Human Computer Interaction (HCI) prototype was developed to help people with communication disabilities express their thoughts with ease. Frame segmentation and feature extraction were used to give meaning to the captured images. Sequential and random testing was used to test both static and dynamic fingerspelling gestures. The researchers discuss some factors they encountered that caused misclassification of signs.
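As a hedged sketch of the template-matching step, the following computes a normalized correlation coefficient between a captured frame and stored templates and picks the best match. The toy templates, labels, and sizes are invented for illustration; the actual system works on Kinect imagery in MATLAB.

```python
import numpy as np

def ncc(image, template):
    """Normalized correlation coefficient between two equal-size arrays."""
    a = image - image.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def classify(frame, templates):
    """Return the label of the template best correlated with the frame."""
    return max(templates, key=lambda label: ncc(frame, templates[label]))

# Toy 4x4 'fingerspelling' templates
templates = {
    "A": np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0]], float),
    "B": np.array([[1, 1, 1, 1], [1, 0, 0, 1], [1, 0, 0, 1], [1, 1, 1, 1]], float),
}
frame = templates["A"] + 0.1  # uniform brightness shift
print(classify(frame, templates))  # → A
```

Because both frame and template are mean-subtracted and scale-normalized, the match score is unaffected by uniform brightness or contrast changes, which is the main appeal of normalized correlation over raw template differencing.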
Woodward, James; Hoa, Nguyen Thi
This paper discusses how the Nippon Foundation-funded project "Opening University Education to Deaf People in Viet Nam through Sign Language Analysis, Teaching, and Interpretation," also known as the Dong Nai Deaf Education Project, has been implemented through sign language studies from 2000 through 2012. This project has provided deaf…
Among American Indian Pueblo tribes, community-based language revitalisation initiatives have been established in response to a growing language shift towards English. This has been most prominent among school age children, prompting some tribes to extend tribal language programmes into local public schools. For centuries, the transmission of…
The article considers the question of the existence of the Croatian literary language in semiotic space, i.e. in the system of culture. To support the justification of the very term Croatian language, and thus the thesis that such a language exists, the argumentation is directed towards theoretical investigation in the semiotic field. The author attempts to show that the discussions in post-Yugoslav linguistics concern a problem that is not, conventionally speaking, 'ontological' but 'epistemological'. Thus the important question is not whether the Croatian language, or any other language, e.g. Montenegrin, exists, but rather: what does it mean for a literary language to exist or not exist?
Bonvillian, John D.; And Others
The relationship between sign language rehearsal and written free recall was examined by having deaf college students rehearse the sign language equivalents of printed English words. Studies of both immediate and delayed memory suggested that word recall increased as a function of total rehearsal frequency and frequency of appearance in rehearsal…
Nelson, Lauri H.; White, Karl R.; Grewe, Jennifer
The development of proficient communication skills in infants and toddlers is an important component to child development. A popular trend gaining national media attention is teaching sign language to babies with normal hearing whose parents also have normal hearing. Thirty-three websites were identified that advocate sign language for hearing…
Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu
Sign language recognition (SLR) can provide a helpful tool for communication between the deaf and the external world. This paper proposed a component-based, vocabulary-extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of these five components. Specifically, the proposed SLR framework consisted of two major parts. The first part obtained the component-based form of sign gestures and established the code table of the target sign gesture set using data from a reference subject. In the second part, designed for new users, component classifiers were trained on a training set suggested by the reference subject, and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study, and recognition experiments with training sets of different sizes were carried out on a target gesture set consisting of 110 frequently used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third of the gestures of the target gesture set) suggested by two reference subjects, average recognition accuracies of (82.6 ± 13.2)% and (79.7 ± 13.4)% were obtained for the 110 words, respectively, and the average recognition accuracy climbed to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50 to 60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.
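The code-matching idea in the abstract above can be sketched as follows. Each sign is represented as a five-component code and an unknown gesture's predicted code is looked up in the code table; the component values, table entries, and the most-components-agree fallback are illustrative assumptions, not the paper's actual codes.

```python
# Each sign word is a code of five components:
# (hand shape, axis, orientation, rotation, trajectory).
code_table = {
    ("flat", "x", "palm-down", "none", "line"):   "GOOD",
    ("fist", "y", "palm-in",   "twist", "arc"):   "WHY",
    ("flat", "y", "palm-up",   "none", "circle"): "PLEASE",
}

def match_sign(predicted_code, table):
    """Code matching: exact hit if possible, otherwise the word whose
    code agrees on the most components (a plausible fallback, assumed
    here rather than taken from the paper)."""
    if predicted_code in table:
        return table[predicted_code]
    def agreement(code):
        return sum(p == c for p, c in zip(predicted_code, code))
    return table[max(table, key=agreement)]

# Component classifiers (trained on sEMG/ACC/GYRO features) would
# produce a code like this for an unknown gesture:
predicted = ("flat", "x", "palm-down", "none", "line")
print(match_sign(predicted, code_table))  # → GOOD
```

The appeal of this decomposition is that classifiers only need training data covering each component's values, not every word, which is why the vocabulary can grow without retraining on every new sign.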
André Nogueira Xavier
According to Xavier (2006), some signs in Brazilian Sign Language (Libras) are typically produced with one hand, while others are made with both hands. However, recent studies document the production with both hands of signs that usually use only one hand, and vice versa (XAVIER, 2011; XAVIER, 2013; BARBOSA, 2013). This study discusses 27 Libras signs that are typically made with one hand and that, when articulated with both hands, change their meanings. The data discussed here, although originally collected from observation of spontaneous signing by different Libras users, were elicited from two deaf participants in separate sessions. After being shown the two forms of the selected signs (made with one and with two hands), the participants were asked to create examples of use for each sign. The results show that the doubling of the hands, at least for the same sign in some cases, may occur due to different factors (such as plurality, aspect and intensity).
Knapp, Heather Patterson; Corina, David P.
Language is proposed to have developed atop the human analog of the macaque mirror neuron system for action perception and production [Arbib M.A. 2005. From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics (with commentaries and author's response). "Behavioral and Brain Sciences, 28", 105-167; Arbib…
Crestani, Anelise Henrich; Moraes, Anaelena Bragança de; Souza, Ana Paula Ramos de
To analyze the results of the validation of enunciative signs of language acquisition built for children aged 3 to 12 months. The signs were built based on mechanisms of language acquisition in an enunciative perspective and on clinical experience with language disorders. The signs were submitted to judgments of clarity and relevance by a sample of six experts, PhDs in linguistics with knowledge of psycholinguistics and the language clinic. In the reliability validation, two judges/evaluators helped apply the instruments to videos of 20% of the total sample of mother-infant dyads using the inter-evaluator method. The internal consistency method was applied to the total sample, which consisted of 94 mother-infant dyads for the contents of Phase 1 (3 to 6 months) and 61 mother-infant dyads for the contents of Phase 2 (7 to 12 months). The data were collected through analysis of mother-infant interaction based on filming of the dyads and application of the parameters to be validated according to the child's age. Data were organized in a spreadsheet and then converted to computer applications for statistical analysis. The judgments of clarity/relevance indicated no modifications to be made to the instruments. The reliability test showed almost perfect agreement between judges (0.8 ≤ Kappa ≤ 1.0); only item 2 of Phase 1 showed substantial agreement (0.6 ≤ Kappa ≤ 0.79). The internal consistency for Phase 1 had alpha = 0.84, and for Phase 2, alpha = 0.74. This demonstrates the reliability of the instruments. The results suggest adequacy of the content validity of the instruments created for both age groups, demonstrating the relevance of the content of the enunciative signs of language acquisition.
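The inter-judge agreement statistic reported above, Cohen's kappa, can be computed as in this minimal sketch; the example ratings are invented and merely land in the "substantial agreement" band the abstract mentions.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two judges' categorical ratings:
    observed agreement corrected for chance agreement."""
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # observed
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)            # expected by chance
    return (po - pe) / (1 - pe)

# Two judges rating 10 dyad videos as sign present (1) / absent (0)
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
print(round(cohens_kappa(a, b), 2))  # → 0.74, i.e. substantial agreement
```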
The paper discusses ongoing research on the effects of a signing avatar's modeling/rendering features on the perception of sign language animation. It reports a recent study that aimed to determine whether a character's visual style has an effect on how signing animated characters are perceived by viewers. The stimuli were two polygonal characters presenting two different visual styles: stylized and realistic. Each character signed four sentences. Forty-seven participants with experience in American Sign Language (ASL) viewed the animated signing clips in random order via a web survey. They (1) identified the signed sentences (if recognizable), (2) rated their legibility, and (3) rated the appeal of the signing avatar. Findings show that while a character's visual style does not have an effect on subjects' perceived legibility of the signs or on sign recognition, it does have an effect on subjects' interest in the character. The stylized signing avatar was perceived as more appealing than the realistic one.
In both vocal and sign languages, we can distinguish word-, sentence-, and discourse-level integration in terms of hierarchical processes that integrate various elements into higher-level constructs. In the present study, we used magnetic resonance imaging and voxel-based morphometry to test three language tasks in Japanese Sign Language (JSL): word-level (Word), sentence-level (Sent), and discourse-level (Disc) decision tasks. We analyzed cortical activity and gray matter volumes of Deaf signers, and clarified three major points. First, we found that the activated regions in the frontal language areas gradually expanded along the dorso-ventral axis, corresponding to the difference in linguistic units across the three tasks. Moreover, the activations in each region of the frontal language areas were incrementally modulated with the level of linguistic integration. These dual mechanisms of the frontal language areas may reflect a basic organizational principle of hierarchically integrating linguistic information. Secondly, activations in the lateral premotor cortex and inferior frontal gyrus were left-lateralized. Direct comparisons among the language tasks exhibited more focal activation in these regions, suggesting their functional localization. Thirdly, we found significantly positive correlations between individual task performances and gray matter volumes in localized regions, even when the ages of acquisition of JSL and Japanese were factored out. More specifically, correlations with performance on the Word and Sent tasks were found in the left precentral/postcentral gyrus and insula, respectively, while correlations with the Disc task were found in the left ventral inferior frontal gyrus and precuneus. The unification of functional and anatomical studies would thus be fruitful for understanding human language systems from the aspects of both universality and individuality.
Kastner, Itamar; Meir, Irit; Sandler, Wendy; Dachkovsky, Svetlana
This paper introduces data from Kafr Qasem Sign Language (KQSL), an as-yet undescribed sign language, and identifies the earliest indications of embedding in this young language. Using semantic and prosodic criteria, we identify predicates that form a constituent with a noun, functionally modifying it. We analyze these structures as instances of embedded predicates, exhibiting what can be regarded as very early stages in the development of subordinate constructions, and argue that these structures may bear directly on questions about the development of embedding and subordination in language in general. Deutscher (2009) argues persuasively that nominalization of a verb is the first step—and the crucial step—toward syntactic embedding. It has also been suggested that prosodic marking may precede syntactic marking of embedding (Mithun, 2009). However, the relevant data from the stage at which embedding first emerges have not previously been available. KQSL might be the missing piece of the puzzle: a language in which a noun can be modified by an additional predicate, forming a proposition within a proposition, sustained entirely by prosodic means. PMID:24917837
Ashraf, Md Izhar; Sinha, Sitabhra
Language, which allows complex ideas to be communicated through symbolic sequences, is a characteristic feature of our species and manifested in a multitude of forms. Using large written corpora for many different languages and scripts, we show that the occurrence probability distributions of signs at the left and right ends of words have a distinct heterogeneous nature. Characterizing this asymmetry using quantitative inequality measures, viz. information entropy and the Gini index, we show that the beginning of a word is less restrictive in sign usage than the end. This property is not simply attributable to the use of common affixes as it is seen even when only word roots are considered. We use the existence of this asymmetry to infer the direction of writing in undeciphered inscriptions that agrees with the archaeological evidence. Unlike traditional investigations of phonotactic constraints which focus on language-specific patterns, our study reveals a property valid across languages and writing systems. As both language and writing are unique aspects of our species, this universal signature may reflect an innate feature of the human cognitive phenomenon.
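The two inequality measures used in the study, information entropy and the Gini index, can be computed for sign-frequency distributions as in this illustrative sketch (the toy word list is invented and not from the study's corpora, so no directional claim is made about it):

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of a frequency distribution."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

def gini(counts):
    """Gini index of a frequency distribution (0 = perfectly even)."""
    total = sum(counts)
    n = len(counts)
    cum = sum((i + 1) * c for i, c in enumerate(sorted(counts)))
    return (2 * cum) / (n * total) - (n + 1) / n

words = ["sign", "symbol", "sequence", "species", "script", "study",
         "text", "form", "word", "end"]
first = Counter(w[0] for w in words).values()   # word-initial signs
last = Counter(w[-1] for w in words).values()   # word-final signs
print(entropy(first), entropy(last))
print(gini(first), gini(last))
```

On large corpora the paper reports higher entropy (and lower inequality) at word beginnings than at word ends; comparing the two printed pairs per corpus is exactly how that asymmetry would be quantified.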
BADEA Florina; PETRESCU Ligia
The paper presents general notions about graphical language used in engineering, the international standards used for representing objects and also the most important software applications used in Computer Aided Design for the development of products in engineering.
This podcast is based on the January 2017 CDC Vital Signs report. Diabetes is the leading cause of kidney failure and Native Americans have a greater chance of having diabetes than any other racial group in the U.S. Learn how to manage your diabetes to delay or prevent kidney failure.
Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.
Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age-onset of first language acquisition and the quality and quantity of…
Izzo, Herbert J.
Drawing on the analogy between the linguistic Romanization of Europe and the Hispanization of America, this paper attempts to investigate the validity of the so-called substratum theory to account for the development and diversification of the Romance languages. Phonetic peculiarities of Spanish in America are analyzed, and it is concluded that…
Kontra, Edit H.; Csizer, Kata
The aim of this study is to point out the relationship between foreign language learning motivation and sign language use among hearing impaired Hungarians. In the article we concentrate on two main issues: first, to what extent hearing impaired people are motivated to learn foreign languages in a European context; second, to what extent sign…
This paper introduces a new Chinese Sign Language recognition (CSLR) system and a method for real-time face and hand tracking used in the system. In the method, an improved agent algorithm is used to extract the face and hand regions and track them. A Kalman filter is introduced to predict the position and the search rectangle, and adaptive modeling of the target color is designed to counteract the effect of illumination.
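A constant-velocity Kalman filter of the kind mentioned above, predicting the next search position from past hand detections, might be sketched as follows. The state model, noise covariances, and measurement setup are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Constant-velocity model: state = [x, y, vx, vy]
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)   # only position is measured
Q = np.eye(4) * 0.01                  # process noise (assumed)
R = np.eye(2) * 1.0                   # measurement noise (assumed)

x = np.zeros(4)          # initial state
P = np.eye(4) * 100.0    # initial uncertainty

def predict():
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q
    return x[:2]         # predicted centre of the next search rectangle

def update(z):
    global x, P
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P

# Feed a hand moving diagonally one pixel per frame
for t in range(20):
    predict()
    update(np.array([t, t], float))
print(predict())  # next-frame search position, close to [20, 20]
```

Searching only the predicted rectangle instead of the whole frame is what makes this kind of tracking feasible in real time.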
Background and Aim: Learning and memory are two high-level cognitive functions in humans that are influenced by hearing loss. In our study, the mini-mental state examination (MMSE) and the Rey auditory-verbal learning test (RAVLT) were conducted to study cognitive status and lexical learning and memory in deaf adults using sign language. Methods: This cross-sectional comparative study was conducted on 30 available congenitally deaf adults using Persian sign language and 46 normal-hearing adults, aged 19 to 27 years and of both sexes, with a minimum education of a diploma. After the mini-mental state examination, the Rey auditory-verbal learning test was administered by computer to evaluate lexical learning and memory with visual presentation. Results: Mean scores of the mini-mental state examination and the Rey auditory-verbal learning test in congenitally deaf adults were significantly lower than in normal individuals on all scores (p=0.018) except two parts of the Rey test. A significant correlation between the results of the two tests was found only in the normal group (p=0.043). Gender had no effect on test results. Conclusion: Cognitive status and lexical memory and learning in congenitally deaf individuals are weaker than in normal-hearing subjects. It seems that using sign language as the main means of communication in deaf people causes poor lexical memory and learning.
Manoranjan, M D; Robinson, J A
Deaf sign language transmitted by video requires a temporal resolution of 8 to 10 frames/s for effective communication. Conventional videoconferencing applications, when operated over low bandwidth telephone lines, provide very low temporal resolution of pictures, of the order of less than a frame per second, resulting in jerky movement of objects. This paper presents a practical solution for sign language communication, offering adequate temporal resolution of images using moving binary sketches or cartoons, implemented on standard personal computer hardware with low-cost cameras and communicating over telephone lines. To extract cartoon points an efficient feature extraction algorithm adaptive to the global statistics of the image is proposed. To improve the subjective quality of the binary images, irreversible preprocessing techniques, such as isolated point removal and predictive filtering, are used. A simple, efficient and fast recursive temporal prefiltering scheme, using histograms of successive frames, reduces the additive and multiplicative noise from low-cost cameras. An efficient three-dimensional (3-D) compression scheme codes the binary sketches. Subjective tests performed on the system confirm that it can be used for sign language communication over telephone lines.
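The isolated-point-removal preprocessing mentioned in the abstract above can be sketched as a simple 8-neighbour test on the binary cartoon. This is a generic illustration of the idea, not the paper's exact filter.

```python
import numpy as np

def remove_isolated_points(cartoon):
    """Delete 'on' pixels with no 8-connected 'on' neighbour: a simple
    irreversible clean-up step for binary sketch images."""
    padded = np.pad(cartoon, 1)
    h, w = cartoon.shape
    # Count 'on' neighbours of every pixel by summing shifted copies
    neighbours = sum(
        padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return cartoon & (neighbours > 0)

sketch = np.array([[0, 0, 0, 0, 1],
                   [0, 1, 1, 0, 0],
                   [0, 1, 0, 0, 0],
                   [0, 0, 0, 0, 0]], dtype=np.uint8)
clean = remove_isolated_points(sketch)  # the lone pixel at (0, 4) is removed
```

Removing such specks before the 3-D compression stage improves subjective quality and reduces the bitrate, since isolated noise pixels are expensive to code and carry no sign information.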
Carolina Hessel Silveira
The paper, which provides partial results of a master's dissertation, seeks to contribute to the Sign Language curriculum in deaf schooling. We begin from the importance of sign languages for deaf people's development and from the fact that a large proportion of deaf children have hearing parents, which emphasises the significance of teaching LIBRAS (Brazilian Sign Language) in schools for the deaf. We should also consider the importance of this study in building deaf identities and strengthening deaf culture. The theoretical basis comes from the so-called Deaf Studies and from experts in curriculum theory. The main objective of this study has been to analyse the LIBRAS curricula at work in schools for the deaf in Rio Grande do Sul, Brazil. The curriculum analysis has shown a degree of diversity: in some curricula, content from one year is repeated in the next with no articulation; in others, one finds concern for issues of deaf identity and culture, but some include contents related not to LIBRAS or deaf culture but to deaf schooling in general. By pointing out positive and negative aspects, the analysis may help in discussions about difficulties, progress and problems in LIBRAS teacher education for deaf students.
Wijayanti Nurul Khotimah
Sign language recognition helps people with normal hearing communicate effectively with the deaf and hearing-impaired. According to a survey conducted by the Multi-Center Study in Southeast Asia, Indonesia ranks in the top four in the number of people with hearing disability (4.6%), so sign language recognition is important there. Some research has been conducted in this field: many types of neural network have been used to recognize various sign languages, but their performance still needs improvement. This work focuses on the ASL (Alphabet Sign Language) subset of SIBI (Sign System of Indonesian Language), which uses one hand and 26 gestures. Thirty-four features were extracted using a Leap Motion controller. A new method, the Rule Based-Backpropagation Genetic Algorithm Neural Network (RB-BPGANN), a combination of rules with a backpropagation genetic algorithm neural network, was then used to recognize these signs. In experiments, the proposed application recognizes sign language with up to 93.8% accuracy. It performs very well on large multiclass problems and can mitigate the overfitting problem in neural network algorithms.
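The RB-BPGANN described in the abstract combines hand-crafted rules with a genetically tuned backpropagation network. As a rough illustration of the backpropagation component alone, here is a minimal one-hidden-layer classifier sized to the abstract's 34 Leap Motion features and 26 gesture classes. The hidden size, learning rate, and training loop are assumptions for the sketch, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
N_FEATURES, N_HIDDEN, N_CLASSES = 34, 16, 26  # 34 Leap Motion features, 26 gestures

# Small random initial weights, as in a plain backpropagation network.
W1 = rng.normal(0, 0.1, (N_FEATURES, N_HIDDEN))
W2 = rng.normal(0, 0.1, (N_HIDDEN, N_CLASSES))

def forward(x):
    """Hidden tanh layer followed by a softmax over the 26 classes."""
    h = np.tanh(x @ W1)
    logits = h @ W2
    e = np.exp(logits - logits.max(-1, keepdims=True))
    return h, e / e.sum(-1, keepdims=True)

def train_step(x, y, lr=0.1):
    """One backpropagation update on a single (features, label) pair."""
    global W1, W2
    h, p = forward(x)
    grad_logits = p.copy()
    grad_logits[y] -= 1.0                       # dLoss/dlogits for cross-entropy
    grad_h = (W2 @ grad_logits) * (1 - h**2)    # backprop through tanh
    W2 -= lr * np.outer(h, grad_logits)
    W1 -= lr * np.outer(x, grad_h)
    return -np.log(p[y])                        # cross-entropy loss

x = rng.normal(size=N_FEATURES)                 # a mock feature vector
losses = [train_step(x, 3) for _ in range(50)]
print(losses[0] > losses[-1])                   # loss falls on the memorized sample
```

A genetic algorithm, as in the paper's hybrid, would additionally search over initial weights or hyperparameters; that outer loop is omitted here.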
Language can act as ideology in two possible ways: 1) as a major source and embodiment of a group's world view, sanctioning certain forms of behavior and interpretation; and 2) as a symbol of group identity, capable of virtually commanding group action. (Author)
Bruno Rosario Candelier
The footprint of the Bible, in its intellectual and aesthetic expression, is manifested in the creation of poetry and fiction. Religious and mystical poetry, and the use of biblical language through the recreation of characters, themes or motifs inspired by the sacred text, are a tribute to the Holy Book and a creative vein of literature inspired by this paradigmatic work of our culture. Biblical language, which channels profound teachings and revealed truths through diverse literary figures, has been a fruitful means of creation. Besides intuition and inspiration, into poetic language flow the signals of revelation that synthesize the perception of consciousness, the metaphysical slope of the existing, and the effluvia of Transcendence. In its realization intervenes the creative power of poetry, which formalizes the word in images, myths and concepts. Numerous poetic creations carry formal, conceptual and spiritual reminiscences of the Holy Book. The trace of the Bible in literature, culture and spiritual awareness is prolific. The word that creates and elevates is a melting pot of aesthetic feeling and spirituality. In fact, the Gospel contains the inspiring principle of Christian mystical literature. By focusing on biblical language in poetic creation, we can appreciate its literary formulas and compositional resources. There is a wisdom and a stylistics inherent in biblical language, which manifests itself in a biblical tone, a biblical imagery and a biblical technique that the language arts formalize in various forms of creation. Knowledge drawn from the biblical heritage is reflected in judgments, prophetic visions, parables, allegories, parallelisms and other resources that have passed into the lyrical flow. Biblical language embodies a repertoire of proverbs, hymns, prayers, metaphors and other expressive resources. In the biblical text we find various literary forms that have fueled the substance of poetic creation, as
This paper explores theatrical interpreting for Deaf spectators, a specialism that both blurs the separation between translation and interpreting and replaces these potentials with a paradigm in which the translator's body is central to the production of the target text. Meaningful written translations of dramatic texts into sign language are not currently possible. For Deaf people to access Shakespeare or Molière in their own language usually means attending a sign language interpreted performance, a typically disappointing experience that fails to provide accessibility or to fulfil the potential of a dynamically equivalent theatrical translation. I argue that when such interpreting events fail, significant contributory factors are the challenges involved in producing such a target text and the insufficient embodiment of that text. The second of these factors suggests that the existing conference and community models of interpreting are insufficient for describing theatrical interpreting. I propose that a model drawn from Theatre Studies, namely psychophysical acting, might be more effective for conceptualising theatrical interpreting. I also draw on theories from neurological research into the Mirror Neuron System to suggest that a highly visual and physical approach to performance (be that by actors or interpreters) is more effective in building a strong actor-spectator interaction than a performance in which meaning is conveyed by spoken words. Arguably this difference in language impact between signed and spoken is irrelevant to hearing audiences attending spoken language plays, but I suggest that for all theatre translators the implications are significant: it is not enough to create a literary translation as the target text; it is also essential to produce a text that suggests physicality. The aim should be the creation of a text which demands full expression through the body, the best picture of the human soul and the fundamental medium
This article aims to broaden the discussion on verbal-visual utterances, reflecting upon theoretical assumptions of the Bakhtin Circle that can reinforce the argument that the utterances of a language in a visual-gestural modality convey plastic-pictorial and spatial values of signs also through non-manual markers (NMMs). This research highlights the difference between affective expressions, which are paralinguistic communications that may complement an utterance, and verbal-visual grammatical markers, which are linguistic because they are part of the architecture of the phonological, morphological, syntactic-semantic and discursive levels of a particular language. These markers are described taking Brazilian Sign Language (Libras) as a starting point, thereby including this language in discussions of verbal-visual discourse, and the article argues for the need to investigate such discourse also in the linguistic analysis of oral-auditory modality languages, drawing on Translinguistics as an area of knowledge that analyzes discourse with a focus on the verbal-visual markers used by subjects in their utterance acts.
Ramesh Mahadev Kagalkar
A great deal of research on sign language and gesture recognition has been done over the past three decades. This has led to a gradual transition from isolated to continuous, and from static to dynamic, gesture recognition for operations on a restricted vocabulary. At present, human-machine interactive systems facilitate communication between deaf and hearing-impaired people in real-world situations. To improve recognition accuracy, many researchers have deployed methods such as HMMs, artificial neural networks, and the Kinect platform, and effective algorithms for segmentation, classification, pattern matching and recognition have evolved. The main purpose of this paper is to analyze these methods and to compare them effectively, which will enable the reader to arrive at an optimal solution. This creates both challenges and opportunities for sign language recognition research.
Perniss, Pamela; Lu, Jenny C.; Morgan, Gary; Vigliocco, Gabriella
Most research on the mechanisms underlying referential mapping has assumed that learning occurs in ostensive contexts, where label and referent co-occur, and that form and meaning are linked by arbitrary convention alone. In the present study, we focus on "iconicity" in language, that is, resemblance relationships between form and…
Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane
There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
Nino, Miguel A.
The Hispanic-American, because he or she is bilingual and bicultural, could play an important role in the future economic development of the United States. Declines in steel, automotive, and electronics industries due to foreign competition and market saturation have caused industrial displacement and unemployment. The Maquiladora or Twin Plant…
Chaveiro, Neuma; Duarte, Soraya Bianca Reis; Freitas, Adriana Ribeiro de; Barbosa, Maria Alves; Porto, Celmo Celeno; Fleck, Marcelo Pio de Almeida
To construct versions of the WHOQOL-BREF and WHOQOL-DIS instruments in Brazilian Sign Language to evaluate the Brazilian deaf population's quality of life. The methodology proposed by the World Health Organization (WHOQOL-BREF and WHOQOL-DIS) was used to construct instruments adapted to the deaf community using Brazilian Sign Language (Libras). The research for constructing the instrument took place in 13 phases: 1) creating the QUALITY OF LIFE sign; 2) developing the answer scales in Libras; 3) translation by a bilingual group; 4) synthesized version; 5) first back translation; 6) production of the version in Libras to be provided to the focal groups; 7) carrying out the focal groups; 8) review by a monolingual group; 9) revision by the bilingual group; 10) semantic/syntactic analysis and second back translation; 11) re-evaluation of the back translation by the bilingual group; 12) recording the version into the software; 13) developing the WHOQOL-BREF and WHOQOL-DIS software in Libras. Characteristics peculiar to the culture of the deaf population indicated the necessity of adapting the application methodology of focal groups composed of deaf people. The writing conventions of sign languages have not yet been consolidated, leading to difficulties in graphically registering the translation phases. Linguistic structures that caused major problems in translation were those that included idiomatic Portuguese expressions, for many of which there are no equivalent concepts between Portuguese and Libras. In the end, it was possible to create the WHOQOL-BREF and WHOQOL-DIS software in Libras. The WHOQOL-BREF and the WHOQOL-DIS in Libras will allow the deaf to express themselves about their quality of life in an autonomous way, making it possible to investigate these issues more accurately.
Courtin, C.; Herve, P. -Y.; Petit, L.; Zago, L.; Vigneau, M.; Beaucousin, V.; Jobard, G.; Mazoyer, B.; Mellet, E.; Tzourio-Mazoyer, N.
"Highly iconic" structures in sign language enable a narrator to act, switch characters, describe objects, or report actions in four dimensions. This group of linguistic structures has no real spoken-language equivalent. Topographical descriptions are also achieved in a sign-language-specific manner via the use of signing space and…
Dr. Paweł Rutkowski is head of the Section for Sign Linguistics at the University of Warsaw. He is a general linguist and a specialist in the syntax of natural languages, carrying out research on Polish Sign Language (polski język migowy, PJM). He has been awarded a number of prizes, grants and scholarships by institutions such as the Foundation for Polish Science, the Polish Ministry of Science and Higher Education, the National Science Centre, Poland, the Polish-U.S. Fulbright Commission, the Kosciuszko Foundation and DAAD. Dr. Rutkowski leads the team developing the Corpus of Polish Sign Language and the Corpus-based Dictionary of Polish Sign Language, the first dictionary of this language prepared in compliance with modern lexicographical standards. The dictionary is an open-access publication, available freely at http://www.slownikpjm.uw.edu.pl/en/. This interview took place at eLex 2017, a biennial conference on electronic lexicography, where Dr. Rutkowski was awarded the Adam Kilgarriff Prize and gave a keynote address entitled "Sign language as a challenge to electronic lexicography: The Corpus-based Dictionary of Polish Sign Language and beyond". The interview was conducted by Dr. Victoria Nyst from Leiden University, Faculty of Humanities, and Dr. Iztok Kosem from the University of Ljubljana, Faculty of Arts.
Practices of other-initiated repair deal with problems of hearing or understanding what another person has said in the fast-moving turn-by-turn flow of conversation. As such, other-initiated repair plays a fundamental role in the maintenance of intersubjectivity in social interaction. This study identifies and analyses a special type of other-initiated repair used in turn-by-turn conversation in a sign language: Argentine Sign Language (Lengua de Señas Argentina, or LSA). We describe a type of response termed a 'freeze-look', which occurs when a person has just been asked a direct question: instead of answering the question in the next turn position, the person holds still while looking directly at the questioner. In these cases it is clear that the person is aware of having just been addressed and is not otherwise accounting for their delay in responding (e.g., by displaying a 'thinking' face or hesitation). We find that this behavior functions as a way for an addressee to initiate repair by the person who asked the question. The 'freeze-look' results in the questioner 're-doing' their action of asking a question, for example by repeating or rephrasing it. Thus we argue that the 'freeze-look' is a practice for other-initiation of repair. In addition, we argue that it is an 'off-record' practice, contrasting with known on-record practices such as saying 'Huh?' or equivalents. The findings aim to contribute to research on human understanding in everyday turn-by-turn conversation by looking at an understudied sign language, with possible implications for our understanding of visual bodily communication in spoken languages as well.
Arias, Graciela; Friberg, Jennifer
The purpose of this study was to identify current practices of school-based speech-language pathologists (SLPs) in the United States for bilingual language assessment and compare them to American Speech-Language-Hearing Association (ASHA) best practice guidelines and mandates of the Individuals with Disabilities Education Act (IDEA, 2004). The study was modeled to replicate portions of Caesar and Kohler's (2007) study and expanded to include a nationally representative sample. A total of 166 respondents completed an electronic survey. Results indicated that the majority of respondents have performed bilingual language assessments. Furthermore, the most frequently used informal and standardized assessments were identified. SLPs identified supports and barriers to assessment, as well as their perceptions of graduate preparation. The findings of this study demonstrated that although SLPs have become more compliant with ASHA and IDEA guidelines, there is room for improvement in terms of adequate training in bilingual language assessment.
Two astronauts associated with the joint U.S.-USSR Apollo Soyuz Test Project (ASTP) receive instruction in the Russian language during ASTP activity at JSC. They are Robert F. Overmyer, a member of the support team of the American ASTP crew, who is seated at left; and Vance D. Brand (in center), the command module pilot of the American ASTP prime crew. The instructor is Anatoli Forestanko.
This paper describes a TeleTandem language exchange project between English speaking Spanish students at Georgia College, USA, and Spanish speaking English students at Universidad de Concepción, Chile. The aim of the project was to promote linguistic skills and intercultural competence through a TeleTandem exchange. Students used Skype and Google Hangouts for monthly synchronous conversations where they interviewed each other and exchanged information about their lives and cultures. The project also involved a monthly blog and a video report in which students talked about their TeleTandem partner and reflected on their learning experience.
Cormier, Kearsy; Schembri, Adam; Vinson, David; Orfanidou, Eleni
Age of acquisition (AoA) effects have been used to support the notion of a critical period for first language acquisition. In this study, we examine AoA effects in deaf British Sign Language (BSL) users via a grammaticality judgment task. When English reading performance and nonverbal IQ are factored out, results show that accuracy of grammaticality judgement decreases as AoA increases, until around age 8, thus showing the unique effect of AoA on grammatical judgement in early learners. No such effects were found in those who acquired BSL after age 8. These late learners appear to have first language proficiency in English instead, which may have been used to scaffold learning of BSL as a second language later in life. Copyright © 2012 Elsevier B.V. All rights reserved.
Prior studies investigating cortical processing in Deaf signers suggest that life-long experience with sign language and/or auditory deprivation may alter the brain's anatomical structure and the function of brain regions typically recruited for auditory processing (Emmorey et al., 2010; Pénicaud et al., 2013, inter alia). We report the first investigation of the task-negative network in Deaf signers and its functional connectivity, that is, the temporal correlations among spatially remote neurophysiological events. We show that Deaf signers manifest increased functional connectivity between the posterior cingulate/precuneus and the left medial temporal gyrus (MTG), and also between the inferior parietal lobe and the medial temporal gyrus in the right hemisphere, areas that have been found to show functional recruitment specifically during sign language processing. These findings suggest that the organization of the brain at the level of inter-network connectivity is likely affected by experience with processing visual language, although sensory deprivation could be another source of the difference. We hypothesize that connectivity alterations in the task-negative network reflect predictive/automatized processing of the visual signal.
This paper investigates the interplay of constructed action and the clause in Finnish Sign Language (FinSL). Constructed action is a form of gestural enactment in which signers use their hands, face and other parts of the body to represent the actions, thoughts or feelings of someone they are referring to in the discourse. With the help of frequencies calculated from corpus data, the article first shows that when FinSL signers are narrating a story, there are differences in how they use constructed action. The paper then argues that there are also differences in the prototypical structure, linkage type and non-manual activity of clauses, depending on the presence or absence of constructed action. Finally, taking the view that gesturality is an integral part of language, the paper discusses the nature of syntax in sign languages and proposes a conceptualization in which syntax is seen as a set of norms distributed on a continuum between a categorial-conventional end and a gradient-unconventional end.
Corina, David P; Knapp, Heather Patterson
In the quest to further understand the neural underpinning of human communication, researchers have turned to studies of naturally occurring signed languages used in Deaf communities. The comparison of the commonalities and differences between spoken and signed languages provides an opportunity to determine core neural systems responsible for linguistic communication independent of the modality in which a language is expressed. The present article examines such studies, and in addition asks what we can learn about human languages by contrasting formal visual-gestural linguistic systems (signed languages) with more general human action perception. To understand visual language perception, it is important to distinguish the demands of general human motion processing from the highly task-dependent demands associated with extracting linguistic meaning from arbitrary, conventionalized gestures. This endeavor is particularly important because theorists have suggested close homologies between perception and production of actions and functions of human language and social communication. We review recent behavioral, functional imaging, and neuropsychological studies that explore dissociations between the processing of human actions and signed languages. These data suggest incomplete overlap between the mirror-neuron systems proposed to mediate human action and language.
This podcast is based on the January 2017 CDC Vital Signs report. Diabetes is the leading cause of kidney failure and Native Americans have a greater chance of having diabetes than any other racial group in the U.S. Learn how to manage your diabetes to delay or prevent kidney failure. Created: 1/10/2017 by National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP). Date Released: 1/10/2017.
This article explains the use of the origins of American English and the dictionary to teach multiculturalism to elementary school students. It suggests classroom activities that help students explore the cultural roots behind words and appreciate the ways words have been created. Esperanto and the development of an international language are also…
Alex Giovanny Barreto
The article presents reflections on a translation-practice methodological approach to sign language interpreter education focused on communicative competence. The experience of implementing the translation-practice approach began in several workshops of the Association of Translators and Interpreters of Sign Language of Colombia (ANISCOL) and has now been formalized in a bachelor of education degree project in signed languages, developed within the research group UMBRAL of the National Open and Distance University of Colombia (UNAD). The didactic proposal focuses on Gile's effort model, specifically the production and listening efforts. A critique of translation competence is presented. Minifiction is a literary genre with multiple semiotic and philosophical translation possibilities; these literary texts have elements with great potential for rendering the visual, gestural and spatial depictions of Colombian sign language, which is useful for interpreter training and education. Through a sign language translation of El Dinosaurio, the article concludes with an outline of, and reflections on, the pedagogical and didactic potential of minifiction and depictions in the design of training activities for sign language interpreters.
This paper analyses the use and quality of the evaluative language produced by a bilingual child in a story-telling situation. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish Sign Language (FinSL) and spoken Finnish. He was born deaf but received a cochlear implant at the age of five. The data consist of a spoken and a signed version of "The Frog Story". The analysis shows that evaluative devices and expressions differ in the spoken and signed stories told by the child. In his Finnish story he uses mostly lexical devices, comments on a character and the character's actions as well as quoted speech, occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.
Vilic, Adnan; Petersen, John Asger; Hoppe, Karsten
This paper presents a data-driven approach to graphically presenting text-based patient journals while still maintaining all textual information. The system first creates a timeline representation of a patient's physiological condition during an admission, assessed by electronically monitoring vital signs and combining these into Early Warning Scores (EWS). Hereafter, techniques from Natural Language Processing (NLP) are applied to the existing patient journal to extract all entries. Finally, the two methods are combined into an interactive timeline featuring the ability to see drastic changes in the patient's health, thereby enabling staff to see where in the journal critical events have taken place.
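Early Warning Scores of the kind described above aggregate how far each vital sign lies outside its normal range. The toy implementation below illustrates the idea only; the band thresholds are hypothetical and do not reproduce the clinical chart (e.g. NEWS2) that such a system would actually use.

```python
def early_warning_score(heart_rate, resp_rate, temp_c, sys_bp, spo2):
    """Toy Early Warning Score: each vital sign contributes 0-3 points,
    more the further it lies from a normal range. Thresholds here are
    illustrative only; real EWS charts differ in detail.
    """
    def band(value, bands):
        # bands: list of (low, high, points); first matching band wins.
        for low, high, points in bands:
            if low <= value <= high:
                return points
        return 3  # far outside every listed band

    score = 0
    score += band(heart_rate, [(51, 90, 0), (41, 50, 1), (91, 110, 1), (111, 130, 2)])
    score += band(resp_rate, [(12, 20, 0), (9, 11, 1), (21, 24, 2)])
    score += band(temp_c, [(36.1, 38.0, 0), (35.1, 36.0, 1), (38.1, 39.0, 1)])
    score += band(sys_bp, [(111, 219, 0), (101, 110, 1), (91, 100, 2)])
    score += band(spo2, [(96, 100, 0), (94, 95, 1), (92, 93, 2)])
    return score

print(early_warning_score(75, 16, 36.8, 120, 98))  # healthy adult → 0
```

Plotting this score over the timestamps of successive vital-sign measurements gives exactly the kind of timeline the paper aligns with journal entries.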
Ryumin, D.; Karpov, A. A.
In this article, we propose a new method for the parametric representation of the human lips region. The functional diagram of the method is described, and implementation details with an explanation of its key stages and features are given. The results of automatic detection of the regions of interest are illustrated. The processing speed of the method on several computers of differing performance is reported. This universal method allows applying a parametrical representation of the speaker's lips to the tasks of biometrics, computer vision, machine learning, and automatic recognition of faces, elements of sign languages, and audio-visual speech, including lip-reading.
Li, Qiang; Xia, Shuang; Zhao, Fei; Qi, Ji
The purpose of this study was to assess functional changes in the cerebral cortex in people with different sign language experience and hearing status whilst observing and imitating Chinese Sign Language (CSL) using functional magnetic resonance imaging (fMRI). 50 participants took part in the study, and were divided into four groups according to their hearing status and experience of using sign language: prelingual deafness signer group (PDS), normal hearing non-signer group (HnS), native signer group with normal hearing (HNS), and acquired signer group with normal hearing (HLS). fMRI images were scanned from all subjects when they performed block-designed tasks that involved observing and imitating sign language stimuli. Nine activation areas were found in response to undertaking either observation or imitation CSL tasks and three activated areas were found only when undertaking the imitation task. Of those, the PDS group had significantly greater activation areas in terms of the cluster size of the activated voxels in the bilateral superior parietal lobule, cuneate lobe and lingual gyrus in response to undertaking either the observation or the imitation CSL task than the HnS, HNS and HLS groups. The PDS group also showed significantly greater activation in the bilateral inferior frontal gyrus which was also found in the HNS or the HLS groups but not in the HnS group. This indicates that deaf signers have better sign language proficiency, because they engage more actively with the phonetic and semantic elements. In addition, the activations of the bilateral superior temporal gyrus and inferior parietal lobule were only found in the PDS group and HNS group, and not in the other two groups, which indicates that the area for sign language processing appears to be sensitive to the age of language acquisition. After reading this article, readers will be able to: discuss the relationship between sign language and its neural mechanisms. Copyright © 2014 Elsevier Inc
... DEPARTMENT OF EDUCATION Native American and Alaska Native Children in School Program; Office of English Language Acquisition, Language Enhancement, and Academic Achievement for Limited English Proficient Students; Overview Information; Native American and Alaska Native Children in School Program...
Kim, Kyung-Won; Lee, Mi-So; Soon, Bo-Ram; Ryu, Mun-Ho; Kim, Je-Nam
Communication between people with normal hearing and people with hearing impairment is difficult. Recently, a variety of studies on sign language recognition have benefited from developments in information technology. This study presents a sign language recognition system using a data glove composed of 3-axis accelerometers, magnetometers, and gyroscopes. Data obtained by the glove are transmitted to a host application (implemented as a Windows program on a PC), then converted into angle data, and the angle information is displayed on the host application and verified by rendering three-dimensional models to the display. An experiment was performed with five subjects, three female and two male, in which a performance set comprising the numbers one to nine was repeated five times. The system achieves a 99.26% movement detection rate and an approximately 98% recognition rate for each finger's state. The proposed system is expected to be more portable and useful when the algorithm is applied to smartphone applications for use in situations such as emergencies.
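The conversion from raw accelerometer readings to angle data mentioned above can be illustrated with the standard static-tilt formula, which recovers pitch and roll from the gravity vector. This is a generic sketch, not the paper's algorithm: the glove in the study also fuses gyroscope and magnetometer data, which this omits.

```python
import math

def tilt_angles(ax, ay, az):
    """Convert a 3-axis accelerometer reading (gravity vector, in g)
    into pitch and roll angles in degrees. Static-tilt formula only:
    valid when the sensor is not accelerating.
    """
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# A finger segment lying flat (gravity entirely on the z axis)
# gives pitch = roll = 0; pointing the x axis straight down
# gives a -90 degree pitch.
print(tilt_angles(0.0, 0.0, 1.0))
print(tilt_angles(1.0, 0.0, 0.0))
```

In a full glove pipeline, each sensor segment's angles would be fed to the 3-D hand model for display, as the abstract describes.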
Two English/Spanish bilingual glossaries define words and phrases found on traffic signs. The first is an extensive alphabetical checklist of sign messages, listed in English with translations in Spanish. Some basic traffic and speed limit rules are included. The second volume, in Spanish-to-English form, is a pocket version designed for American…
Gutierrez-Sigut, Eva; Daws, Richard; Payne, Heather; Blott, Jonathan; Marshall, Chloë; MacSweeney, Mairéad
Neuroimaging studies suggest greater involvement of the left parietal lobe in sign language compared to speech production. This stronger activation might be linked to the specific demands of sign encoding and proprioceptive monitoring. In Experiment 1 we investigate hemispheric lateralization during sign and speech generation in hearing native users of English and British Sign Language (BSL). Participants exhibited stronger lateralization during BSL than English production. In Experiment 2 we investigated whether this increased lateralization index could be due exclusively to the higher motoric demands of sign production. Sign-naïve participants performed a phonological fluency task in English and a non-sign repetition task. Participants were left lateralized in the phonological fluency task, but there was no consistent pattern of lateralization for non-sign repetition in these hearing non-signers. The current data demonstrate stronger left hemisphere lateralization for producing signs than speech, which was not primarily driven by motoric articulatory demands. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
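The lateralization index referred to in this abstract is conventionally computed as LI = (L - R) / (L + R) over activation measures from homologous regions of the two hemispheres; the abstract does not spell out its exact formula, so the sketch below shows only the conventional definition.

```python
def lateralization_index(left_activation, right_activation):
    """Conventional lateralization index: LI = (L - R) / (L + R).

    Values near +1 indicate strong left-hemisphere dominance, values
    near -1 right-hemisphere dominance, and values near 0 bilateral
    activation. Inputs are non-negative activation measures (e.g.
    suprathreshold voxel counts) from homologous regions.
    """
    return (left_activation - right_activation) / (left_activation + right_activation)

print(lateralization_index(80.0, 20.0))  # → 0.6 (left lateralized)
```

Under this definition, "stronger lateralization during BSL than English production" means the BSL task yielded LI values farther above zero.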
Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John
Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their…
Yenny Rodríguez Hernández
This paper reports the results of an exploratory study whose purpose was to identify and characterize the metaphors in a sample of five videos in Colombian Sign Language (LSC, from its Spanish initials). The data were analyzed using theoretical contributions from Lakoff and Johnson's (1980) theories of cognitive metaphor and image schemata, and from Wilcox (2000) and Taub (2001) on double mapping in sign language. The results present a frequency analysis of the image schemata and metaphors found in the metaphorical expressions of five autobiographical narratives by congenitally deaf adults. The study concludes that sign language has cognitive metaphors that let deaf people map from a concrete domain to an abstract one in order to build concepts.
Young, Alys; Oram, Rosemary; Dodds, Claire; Nassimi-Green, Catherine; Belk, Rachel; Rogers, Katherine; Davies, Linda; Lovell, Karina
Internationally, few clinical trials have involved Deaf people who use a signed language and none have involved BSL (British Sign Language) users. Appropriate terminology in BSL for key concepts in clinical trials that are relevant to recruitment and participant information materials, to support informed consent, does not exist. Barriers to conceptual understanding of trial participation and sources of misunderstanding relevant to the Deaf community are undocumented. A qualitative, community participatory exploration of trial terminology including conceptual understanding of 'randomisation', 'trial', 'informed choice' and 'consent' was facilitated in BSL involving 19 participants in five focus groups. Data were video-recorded and analysed in the source language (BSL) using a phenomenological approach. Six necessary conditions for developing trial information to support comprehension were identified. These included: developing appropriate expressions and terminology from a community basis, rather than testing out previously derived translations from a different language; paying attention to language-specific features which support the best means of expression (in the case of BSL, expectations of specificity, verb directionality, and handshape); bilingual influences on comprehension; deliberate orientation of information to avoid misunderstanding, not just to promote accessibility; sensitivity to barriers to discussion about intelligibility of information that are cultural and social in origin, rather than linguistic; and the importance of using contemporary language-in-use, rather than jargon-free or plain language, to support meaningful understanding. The study reinforces the ethical imperative to ensure trial participants who are Deaf are provided with optimum resources to understand the implications of participation and to make an informed choice. Results are relevant to the development of trial information in other signed languages as well as in spoken/written languages when
This article presents a modular activity on the neurobiology of sign language that engages undergraduate students in reading and analyzing the primary functional magnetic resonance imaging (fMRI) literature. Drawing on a seed empirical article and subsequently published critique and rebuttal, students are introduced to a scientific debate concerning the functional significance of right-hemisphere recruitment observed in some fMRI studies of sign language processing. The activity requires minimal background knowledge and is not designed to provide students with a specific conclusion regarding the debate. Instead, the activity and set of articles allow students to consider key issues in experimental design and analysis of the primary literature, including critical thinking regarding the cognitive subtractions used in blocked-design fMRI studies, as well as possible confounds in comparing results across different experimental tasks. By presenting articles representing different perspectives, each cogently argued by leading scientists, the readings and activity also model the type of debate and dialogue critical to science, but often invisible to undergraduate science students. Student self-report data indicate that undergraduates find the readings interesting and that the activity enhances their ability to read and interpret primary fMRI articles, including evaluating research design and considering alternate explanations of study results. As a stand-alone activity completed primarily in one 60-minute class block, the activity can be easily incorporated into existing courses, providing students with an introduction both to the analysis of empirical fMRI articles and to the role of debate and critique in the field of neuroscience.
A significant population of Mexican people are deaf. This disorder restricts their social interaction skills with people who do not have it, and vice versa. In this paper we present our advances towards the development of a Mexican Speech-to-Sign-Language translator to assist hearing people in interacting with deaf people. The proposed design methodology considers limited resources for (1) the development of the Mexican Automatic Speech Recogniser (ASR) system, which is the main module in the translator, and (2) the Mexican Sign Language (MSL) vocabulary available to represent the decoded speech. Speech-to-MSL translation was accomplished with an accuracy level over 97% for test speakers different from those selected for ASR training.
Fengler, Ineke; Delfau, Pia-Céline; Röder, Brigitte
It is yet unclear whether congenitally deaf cochlear implant (CD CI) users' visual and multisensory emotion perception is influenced by their history in sign language acquisition. We hypothesized that early-signing CD CI users, relative to late-signing CD CI users and hearing, non-signing controls, show better facial expression recognition and…
Caridakis, G; Karpouzis, K; Drosopoulos, A; Kollias, S
Modeling and recognizing spatiotemporal, as opposed to static input, is a challenging task since it incorporates input dynamics as part of the problem. The vast majority of existing methods tackle the problem as an extension of the static counterpart, using dynamics, such as input derivatives, at feature level and adopting artificial intelligence and machine learning techniques originally designed for solving problems that do not specifically address the temporal aspect. The proposed approach deals with the temporal and spatial aspects of the spatiotemporal domain in a discriminative as well as coupling manner. Self Organizing Maps (SOM) model the spatial aspect of the problem, and Markov models capture its temporal counterpart. Incorporation of adjacency, both in training and classification, enhances the overall architecture with robustness and adaptability. The proposed scheme is validated both theoretically, through an error propagation study, and experimentally, on the recognition of individual signs performed by different native Greek Sign Language users. Results illustrate the architecture's superiority when compared to Hidden Markov Model techniques and variations, both in terms of classification performance and computational cost. Copyright © 2012 Elsevier Ltd. All rights reserved.
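The coupling of spatial and temporal models described in this abstract can be sketched as nearest-prototype quantization followed by Markov-chain scoring. Everything below (prototypes, transition matrices, trajectories) is invented toy data, not the paper's trained models:

```python
import math

# SOM nodes stand in for a trained map; each 2-D frame of a gesture is
# quantized to its nearest prototype, and the resulting symbol sequence
# is scored against per-class Markov transition matrices.
PROTOTYPES = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def quantize(trajectory):
    """Map each 2-D frame to the index of its nearest SOM prototype."""
    def nearest(pt):
        return min(range(len(PROTOTYPES)),
                   key=lambda i: math.dist(pt, PROTOTYPES[i]))
    return [nearest(p) for p in trajectory]

def log_likelihood(symbols, transitions, smoothing=1e-6):
    """Score a symbol sequence under one class's Markov transition matrix."""
    ll = 0.0
    for prev, cur in zip(symbols, symbols[1:]):
        ll += math.log(transitions[prev][cur] + smoothing)
    return ll

def classify(trajectory, class_models):
    """Pick the class whose transition model best explains the trajectory."""
    symbols = quantize(trajectory)
    return max(class_models, key=lambda c: log_likelihood(symbols, class_models[c]))
```

A sign is then assigned to whichever class's transition model gives its quantized symbol sequence the highest log-likelihood.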
Colin, C; Zuinen, T; Bayard, C; Leybaert, J
Sign languages (SL), like oral languages (OL), organize elementary, meaningless units into meaningful semantic units. Our aim was to compare, at the behavioral and neurophysiological levels, the processing of the location parameter in French Belgian SL to that of the rhyme in oral French. Ten hearing and 10 profoundly deaf adults performed a rhyme judgment task in OL and a similarity judgment on location in SL. Stimuli were pairs of pictures. As regards OL, deaf subjects' performances, although above chance level, were significantly lower than those of hearing subjects, suggesting that a metaphonological analysis is possible for deaf people but rests on phonological representations that are less precise than in hearing people. As regards SL, deaf subjects' scores indicated that a metaphonological judgment may be performed on location. The contingent negative variation (CNV) evoked by the first picture of a pair was similar in hearing subjects in OL and in deaf subjects in OL and SL. However, an N400 evoked by the second picture of the non-rhyming pairs was evidenced only in hearing subjects in OL. The absence of an N400 in deaf subjects may be interpreted as the failure to associate two words according to their rhyme in OL or to their location in SL. Although deaf participants can perform metaphonological judgments in OL, they differ from hearing participants both behaviorally and in ERPs. Judgment of location in SL is possible for deaf signers but, contrary to rhyme judgment in hearing participants, does not elicit any N400. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Alexander, C. J.; Martin, M.; Grant, G.
Many Native American communities recognize that the retention of their language, and the need to make the language relevant to the technological age we live in, represents one of their largest and most urgent challenges. Almost 70 percent of Navajos speak their tribal language in the home, and 25 percent do not know English very well. In contrast, only 30 percent of Native Americans as a whole speak their own tribal language in the home. For the Cherokee and the Chippewa, less than 10 percent speak the native language in the home. And for the Navajo, the number of first graders who solely speak English is almost four times higher than it was in 1970. The U.S. Rosetta Project is the NASA contribution to the International Rosetta Mission. The Rosetta stone is the inspiration for the mission’s name. As outlined by the European Space Agency, Rosetta is expected to provide the keys to the primordial solar system the way the original Rosetta Stone provided a key to ancient language. The concept of ancient language as a key provides a theme for this NASA project’s outreach to Native American communities anxious for ways to enhance and improve the numbers of native speakers. In this talk we will present a concept for building on native language as it relates to STEM concepts. In 2009, a student from the Dine Nation interpreted 28 NASA terms for his senior project at Chinle High School in Chinle, AZ. These terms included such words as space telescope, weather satellite, space suit, and the planets including Neptune and Uranus. This work represents a foundation for continued work between NASA and the Navajo Nation. Following approval by the tribal elders, the U.S. Rosetta project would host the newly translated Navajo words on a web-site, and provide translation into both Navajo and English. A clickable map would allow the user to move through all the words, see Native artwork related to the word, and hear audio translation. Extension to very remote teachers in the
Tucker, G. Richard
The American phenomenon of pervasive monolingualism is considered, and potential implications of the North American Free Trade Agreement are described. Five second-language learning/teaching areas are projected: language for specific purposes; obligatory language study; exchange programs; technological advances; and information resources.…
Holmer, Emil; Heimann, Mikael; Rudner, Mary
Children with good phonological awareness (PA) are often good word readers. Here, we asked whether Swedish deaf and hard-of-hearing (DHH) children who are more aware of the phonology of Swedish Sign Language, a language with no orthography, are better at reading words in Swedish. We developed the Cross-modal Phonological Awareness Test (C-PhAT) that can be used to assess PA in both Swedish Sign Language (C-PhAT-SSL) and Swedish (C-PhAT-Swed), and investigated how C-PhAT performance was related to word reading as well as linguistic and cognitive skills. We validated C-PhAT-Swed and administered C-PhAT-Swed and C-PhAT-SSL to DHH children who attended Swedish deaf schools with a bilingual curriculum and were at an early stage of reading. C-PhAT-SSL correlated significantly with word reading for DHH children. They performed poorly on C-PhAT-Swed and their scores did not correlate significantly either with C-PhAT-SSL or word reading, although they did correlate significantly with cognitive measures. These results provide preliminary evidence that DHH children with good sign language PA are better at reading words and show that measures of spoken language PA in DHH children may be confounded by individual differences in cognitive skills. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Angeles, Bianca C.
Filipinos are one of the biggest minority populations in California, yet there are limited opportunities to learn the Filipino language in public schools. Further, schools are not able to nurture students’ heritage languages because of increased emphasis on English-only proficiency. The availability of heritage language classes at the university level – while scarce – therefore becomes an important space for Filipino American students to (re)learn and (re)discover their language and identity....
The Myo Armband has become an immersive technology to help deaf people communicate with each other. A problem with the Myo sensor is its unstable clock rate, which causes data of different lengths for the same period, even for the same gesture. This research proposes the moment invariant method to extract features from the Myo sensor data. The method reduces the amount of data and produces data of uniform length. The system is user-dependent, in keeping with the characteristics of the Myo Armband. The testing process was performed using the alphabet A to Z in SIBI, the Indonesian Sign Language, with static and dynamic finger movements. There are 26 classes of alphabet signs and 10 variants in each class. We use min-max normalization to guarantee the range of the data and the K-Nearest Neighbor method to classify the dataset. Performance analysis with leave-one-out validation produced an accuracy of 82.31%. A more advanced classification method would be required to improve detection performance.
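The pipeline this abstract describes (moment-based features, min-max normalization, K-Nearest Neighbor) can be sketched as follows; the moment orders, bounds, and data are illustrative assumptions, not the study's actual features:

```python
# Moment-based features turn variable-length sensor readings into
# fixed-length vectors, which are min-max normalized and classified
# with K-Nearest Neighbor. All data here are toy values.

def moment_features(signal, orders=(2, 3, 4)):
    """Reduce a variable-length 1-D signal to fixed-length central moments."""
    n = len(signal)
    mean = sum(signal) / n
    return [sum((x - mean) ** p for x in signal) / n for p in orders]

def min_max_normalize(vec, lo, hi):
    """Scale each feature into [0, 1] given per-feature training bounds."""
    return [(v - l) / (h - l) if h > l else 0.0
            for v, l, h in zip(vec, lo, hi)]

def knn_predict(query, dataset, k=3):
    """dataset: list of (feature_vector, label); majority vote of k nearest."""
    ranked = sorted(dataset,
                    key=lambda item: sum((a - b) ** 2
                                         for a, b in zip(query, item[0])))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)
```

Note that `moment_features` yields the same vector length regardless of how many samples the sensor delivered, which is exactly the property needed to compensate for the unstable clock rate.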
Keywords: taboo, types of taboo words, types of anger expression, Crank 2. In communication, people use many kinds of language. Sometimes people use formal and informal language in communication. Many people communicate with inappropriate language, such as speaking with friends or family using impolite language, for example taboo language. As we know, taboo language is an unfavourable language in society, but it has become a common phenomenon today. People are free to speak what they want; there are many p...
This article responds to two long-standing dilemmas that limit the effectiveness of language education for students who speak and write in African American Language (AAL): (1) the gap between theory and research on AAL and classroom practice, and (2) the need for critical language pedagogies. This article presents the effectiveness of a critical…
John Mathew Martin Poothullil
Universal Design in Media as a strategy to achieve accessibility in digital television started in Spain in 1997 with the digitalization of satellite platforms (MuTra, 2006). In India, a conscious effort toward a strategy for accessible media formats in digital television is yet to be made. Advertising in India is a billion-dollar industry (Adam Smith, 2008), and digital television provides a majority of the space for it. This study investigated the effects of advertisements in accessible format, through the use of captioning and Indian Sign Language (ISL), on hearing and deaf people. Deaf (the capital 'D' is used for the culturally Deaf) and hearing viewers watched two short recent advertisements with and without accessibility formats in a randomized order. Their reactions were recorded on a questionnaire developed for the purpose of the study. Eighty-four persons participated in this study, of which 42 were deaf. Analysis of the data showed that there was a difference in the effects of accessible and non-accessible formats of advertisement on Deaf and hearing viewers. The study showed that accessible formats increased comprehension of the message of the advertisement, and the use of ISL helped deaf persons to understand concepts better. While captioning increased the perception of hearing persons, correlating with listening to and understanding the concept of the advertisement, deaf persons correlated watching the ISL interpreter with understanding the concept of the advertisement. Placement of the ISL interpreter on the screen and the color of the fonts used for captioning were also covered in the study. However, the placement of the ISL interpreter, the color of fonts on the screen, and their correlation with comprehension of the advertisement by hearing and deaf persons did not show significant results in the study.
Zulu, Tryphine; Heap, Marion; Sinanovic, Edina
The World Health Organisation estimates disabling hearing loss to be around 5.3%, while a study of hearing impairment and auditory pathology in Limpopo, South Africa found a prevalence of nearly 9%. Although Sign Language Interpreters (SLIs) improve the communication challenges in health care, they are unaffordable for many signing Deaf people and people with disabling hearing loss. On the other hand, there are no legal provisions in place to ensure the provision of SLIs in the health sector in most countries including South Africa. To advocate for funding of such initiatives, reliable cost estimates are essential and such data are scarce. To bridge this gap, this study estimated the costs of providing such a service within a South African District health service based on estimates obtained from a pilot project that initiated the first South African Sign Language Interpreter (SASLI) service in health care. The ingredients method was used to calculate the unit cost per SASLI-assisted visit from a provider perspective. The unit costs per SASLI-assisted visit were then used in estimating the costs of scaling up this service to the District Health Services. The average annual SASLI utilisation rate per person was calculated on Stata v.12 using the project's registry from 2008-2013. Sensitivity analyses were carried out to determine the effect of changing the discount rate and personnel costs. Average Sign Language Interpreter services' utilisation rates increased from 1.66 to 3.58 per person per year, with a median of 2 visits, from 2008-2013. The cost per visit was US$189.38 in 2013, whilst the estimated costs of scaling up this service ranged from US$14.2 million to US$76.5 million in the Cape Metropole District. These cost estimates represented 2.3%-12.2% of the budget for the Western Cape District Health Services for 2013. In the presence of Sign Language Interpreters, Deaf Sign language users utilise health care service to a similar extent as the hearing population
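The ingredients-method arithmetic behind such estimates can be sketched as a unit cost built from annual input costs and then multiplied out to district scale. All figures in the example are invented placeholders, not the study's data:

```python
def unit_cost_per_visit(annual_ingredient_costs, visits_per_year):
    """Sum the annual ingredient costs (personnel, transport, ...) and
    divide by annual visit volume to get a cost per assisted visit."""
    return sum(annual_ingredient_costs.values()) / visits_per_year

def scale_up_cost(unit_cost, population, visits_per_person_per_year):
    """Estimated annual cost of offering the service to a whole district."""
    return unit_cost * population * visits_per_person_per_year

# Invented example: placeholder ingredient costs, scaled to 10,000 Deaf
# signers averaging 2 interpreter-assisted visits per year.
ingredients = {"personnel": 150_000.0, "transport": 12_000.0, "training": 8_000.0}
per_visit = unit_cost_per_visit(ingredients, 1_000)   # 170.0 per visit
district = scale_up_cost(per_visit, 10_000, 2)        # 3,400,000.0 per year
```

The sensitivity analyses mentioned in the abstract amount to re-running this arithmetic while varying individual ingredient costs (e.g. personnel) or the discount rate.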
Tiwana, Ravneet Kaur
It has been claimed that there is only one language in the United States, the English language, because America is not a "polyglot boardinghouse ..." (Portes and Rumbaut 196). The fact is that America has always been a multilingual society, even though this mythical notion of a monolingual American identity reflecting American loyalty…
Henning, Marcus A; Krägeloh, Christian U; Sameshima, Shizue; Shepherd, Daniel; Shepherd, Gregory; Billington, Rex
This paper aims to: (1) explore usage and accessibility of sign language interpreters, (2) appraise the levels of quality of life (QOL) of deaf adults residing in New Zealand, and (3) consider the impact of access to and usage of sign language interpreters on QOL. Sixty-eight deaf adults living in New Zealand participated in this study. Two questionnaires were employed: a 12-item instrument about access and use of New Zealand sign language interpreters and the abbreviated version of the World Health Organization Quality of Life questionnaire (WHOQOL-BREF). The results showed that 39% of this sample felt that they were unable to adequately access interpreting services. Moreover, this group scored significantly lower than a comparable hearing sample on all four WHOQOL-BREF domains. Finally, the findings revealed that access to good quality interpreters was associated with access to health services, transport issues, engagement in leisure activities, gaining more information, mobility and living in a healthy environment. These findings have consequences for policy makers and agencies interested in ensuring that there is an equitable distribution of essential services for all groups within New Zealand, which inevitably has an impact on the health of the individual.
Su, Ruiliang; Chen, Xiang; Cao, Shuai; Zhang, Xu
Sign language recognition (SLR) has been widely used for communication amongst the hearing-impaired and non-verbal community. This paper proposes an accurate and robust SLR framework using an improved decision tree as the base classifier of random forests. This framework was used to recognize Chinese sign language subwords using recordings from a pair of portable devices worn on both arms consisting of accelerometers (ACC) and surface electromyography (sEMG) sensors. The experimental results demonstrated the validity of the proposed random forest-based method for recognition of Chinese sign language (CSL) subwords. With the proposed method, 98.25% average accuracy was obtained for the classification of a list of 121 frequently used CSL subwords. Moreover, the random forests method demonstrated a superior performance in resisting the impact of bad training samples. When the proportion of bad samples in the training set reached 50%, the recognition error rate of the random forest-based method was only 10.67%, while that of a single decision tree adopted in our previous work was almost 27.5%. Our study offers a practical way of realizing a robust and wearable EMG-ACC-based SLR system.
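The robustness to mislabeled training samples reported in this abstract comes from the bagging principle underlying random forests: many weak learners trained on bootstrap resamples outvote corrupted samples. A minimal sketch, with a 1-D threshold stump standing in for the paper's decision trees and toy data in place of the ACC/sEMG features:

```python
import random
from collections import Counter

def train_stump(samples):
    """Fit a 1-D threshold classifier minimizing training error.
    samples: list of (x, label) pairs; labels are 0 or 1."""
    best = None
    for thresh, _ in samples:
        below = [lab for x, lab in samples if x <= thresh]
        above = [lab for x, lab in samples if x > thresh]
        lab_b = Counter(below).most_common(1)[0][0] if below else 0
        lab_a = Counter(above).most_common(1)[0][0] if above else 1
        err = sum(1 for x, lab in samples
                  if (lab_b if x <= thresh else lab_a) != lab)
        if best is None or err < best[0]:
            best = (err, thresh, lab_b, lab_a)
    _, t, lb, la = best
    return lambda x: lb if x <= t else la

def train_forest(samples, n_trees=25, rng=None):
    """Bagging: train each stump on a bootstrap resample, vote at predict time."""
    rng = rng or random.Random(0)
    trees = []
    for _ in range(n_trees):
        boot = [rng.choice(samples) for _ in samples]
        trees.append(train_stump(boot))
    def predict(x):
        votes = Counter(tree(x) for tree in trees)
        return votes.most_common(1)[0][0]
    return predict
```

Because each bad sample only appears in some bootstrap resamples, its influence is diluted across the ensemble, which is the behavior the paper reports when half the training set is corrupted.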
Vanessa Regina de Oliveira Martins
This paper aims to discuss the new profile of sign language translators/interpreters that is taking shape in Brazil since the implementation of policies stimulating the training of these professionals. We qualitatively analyzed answers to a semi-open questionnaire given by undergraduate students from a BA course in translation and interpretation in Brazilian Sign Language/Portuguese. Our results show that those who seek out this area are no longer, as in the past, those who have some relation with the deaf community and/or need some kind of certification for their activity as a sign language interpreter. Rather, the students' choice of the course in question had to do with their score in a unified profession selection system (SISU). This contrasts with the sign language interpreter profile of the 1980s, 1990s, and 2000s. As Brazilian Sign Language has become more popular, people seeking a university degree have started to see sign language translation/interpreting as an interesting option for their career. We therefore discuss the need to provide students who cannot sign with the necessary pedagogical means to learn the language, which will promote the accessibility of Brazilian deaf communities.
Woolfe, Tyron; Herman, Rosalind; Roy, Penny; Woll, Bencie
There is a dearth of assessments of sign language development in young deaf children. This study gathered age-related scores from a sample of deaf native signing children using an adapted version of the MacArthur-Bates CDI (Fenson et al., 1994). Parental reports on children's receptive and expressive signing were collected longitudinally on 29 deaf native British Sign Language (BSL) users, aged 8-36 months, yielding 146 datasets. A smooth upward growth curve was obtained for early vocabulary development and percentile scores were derived. In the main, receptive scores were in advance of expressive scores. No gender bias was observed. Correlational analysis identified factors associated with vocabulary development, including parental education and mothers' training in BSL. Individual children's profiles showed a range of development and some evidence of a growth spurt. Clinical and research issues relating to the measure are discussed. The study has developed a valid, reliable measure of vocabulary development in BSL. Further research is needed to investigate the relationship between vocabulary acquisition in native and non-native signers.
Senegal adopted French as the country's sole official language at the time of independence in 1960, since when the language has been used in administration and other formal domains. Similarly, French is employed throughout the formal education system as the language of instruction. Since the 1990s, however, government has mounted an ambitious…
This study examined the characteristics of Chilean deaf people's metaphoric language and its impact on linguistic comprehension. The relevance of this question lies in the scarcity of research conducted, particularly at the national level. A qualitative study was developed, based on analysis of videos of Chilean deaf people's spontaneous sign language. A list of conceptual and non-conceptual metaphors in Chilean Sign Language was compiled, and comprehension of these metaphors was then evaluated in a group of deaf subjects educated using sign language communication. The results identify the existence of metaphors specific to Deaf culture. These metaphors are coherent with the particular experiences of deaf subjects and do not necessarily agree with spoken language.
Newman, Michael; Patino-Santos, Adriana; Trenchs-Parera, Mireia
This study explores the connections between language policy implementation in three Barcelona-area secondary schools and the language attitudes and behaviors of Spanish-speaking Latin American newcomers. Data were collected through interviews and ethnographic participant observation, documenting indexes of different forms of language socialization…
Bebko, James M.
Review of literature on indicators of the effectiveness of language intervention programs for autistic children showed that mitigation in echolalia was a critical characteristic, as it implied that the prerequisites for language were accessible through speech. Children whose speech ranged from mutism to unmitigated echolalia had a more negative…
Lucas, Ceil; Valli, Clayton
Reports on one aspect of an ongoing study of language contact in the American deaf community. The ultimate goal of the study is a linguistic description of contact signing and a reexamination of claims that it is a pidgin. Patterns of language use are reviewed and the role of demographic information in judgments is examined. (29 references) (GLR)
Eveline Boers-Visker; Annemiek Hammer; Dr. Beppie van den Bogaerde
Introduction: The CEFR offers a framework for language teaching, learning and assessment for L2 learners. Importantly, the CEFR draws on a learner’s communicative language competence rather than linguistic competence (e.g. vocabulary, grammar). As such, the implementation of the CEFR in our four
Malaia, Evie; Wilbur, Ronnie B; Milkovic, Marina
Sign language users recruit physical properties of visual motion to convey linguistic information. Research on American Sign Language (ASL) indicates that signers systematically use kinematic features (e.g., velocity, deceleration) of dominant hand motion for distinguishing specific semantic properties of verb classes in production (Malaia & Wilbur, 2012a) and process these distinctions as part of the phonological structure of these verb classes in comprehension (Malaia, Ranaweera, Wilbur, & Talavage, 2012). These studies are driven by the event visibility hypothesis of Wilbur (2003), who proposed that such use of kinematic features should be universal to sign languages (SLs) through the grammaticalization of physics and geometry for linguistic purposes. In a prior motion capture study, Malaia and Wilbur (2012a) lent support to the event visibility hypothesis in ASL, but there have been no quantitative data from other SLs to test the generalization to other languages. The authors investigated the kinematic parameters of predicates in Croatian Sign Language (Hrvatskom Znakovnom Jeziku [HZJ]). Kinematic features of verb signs were affected both by the event structure of the predicate (semantics) and phrase position within the sentence (prosody). The data demonstrate that kinematic features of motion in HZJ verb signs are recruited to convey morphological and prosodic information. This is the first crosslinguistic motion capture confirmation that specific kinematic properties of articulator motion are grammaticalized in other SLs to express linguistic features.
An investigation of interpretive graphics was conducted in 2005 at two mid-sized AZA-accredited zoos: Lowry Park Zoo, Tampa, Florida, and Knoxville Zoo, Knoxville, Tennessee. The Lowry Park Zoo study investigated signs at a red-tailed hawk and sandhill crane exhibit. Combination signs and wordless signs were more effective than no signs, or signs with words only, in helping visitors see animals and in increasing holding time and number of engagements. A second study, at Knoxville Zoo, tested combination and wordless signs in a children's zoo, investigating 31 signs at a 3.5-acre exhibit. Comparisons of visitors seeing the animals or using interactive exhibit elements, holding time, and engagement activities showed that wordless signs were more effective than combination signs. Differences in gender ratio, age, group size, and other demographics were not significant. Visit motivation differed between zoos: visitors at Lowry Park Zoo more often articulated their reason for a visit as wanting to see animals, whereas visitors at Knoxville Zoo most often said they wanted to spend time with family and friends. Differences in potential for naturalist intelligence were probably related to local practices rather than to innate differences. The number of communities in Florida that regulate pet ownership and provide lawn service could account for the lower number of people who have pets and plants. At both institutions, behaviors supported educational theories. The importance of signs as advance organizers was shown when signs were removed at the bird exhibit at Lowry Park Zoo, with fewer visitors seeing the animals. Social interaction was noted at both zoos, with intra- and inter-group conversations observed. If naturalist intelligence is necessary to see animals, visitors run a continuum: some are unable to see animals even with signs and assistance from other visitors; others see animals with little difficulty. The importance of honing naturalist…
van Berkel-van Hoof, Lian; Hermans, Daan; Knoors, Harry; Verhoeven, Ludo
Augmentative signs may facilitate word learning in children with vocabulary difficulties, for example, children who are Deaf/Hard of Hearing (DHH) and children with Specific Language Impairment (SLI). Despite the fact that augmentative signs may aid second language learning in populations with a typical language development, empirical evidence in favor of this claim is lacking. We aim to investigate whether augmentative signs facilitate word learning for DHH children, children with SLI, and typically developing (TD) children. Whereas previous studies taught children new labels for familiar objects, the present study taught new labels for new objects. In our word learning experiment children were presented with pictures of imaginary creatures and pseudo words. Half of the words were accompanied by an augmentative pseudo sign. The children were tested for their receptive word knowledge. The DHH children benefitted significantly from augmentative signs, but the children with SLI and TD age-matched peers did not score significantly different on words from either the sign or no-sign condition. These results suggest that using Sign-Supported speech in classrooms of bimodal bilingual DHH children may support their spoken language development. The difference between earlier research findings and the present results may be caused by a difference in methodology. Copyright © 2016 Elsevier Ltd. All rights reserved.
British Sign Language (BSL) signers use a variety of structures, such as constructed action (CA), depicting constructions (DCs), or lexical verbs, to represent action and other verbal meanings. This study examines the use of these verbal predicate structures and their gestural counterparts, both separately and simultaneously, in narratives by deaf children with various levels of exposure to BSL (ages 5;1 to 7;5) and deaf adult native BSL signers. Results reveal that all groups used the same types of predicative structures, including children with minimal BSL exposure. However, adults used CA, DCs, and/or lexical signs simultaneously more frequently than children. These results suggest that simultaneous use of CA with lexical and depicting predicates is more complex than the use of these predicate structures alone and thus may take deaf children more time to master. PMID:23670881
Over the past 30 years linguists have been witnessing the birth and evolution of a language, Idioma de Señas de Nicaragua (ISN), in Nicaragua, and have documented the syntax and grammar of this new language. Research is only beginning to emerge on the implications of ISN for the education of deaf/hard of hearing children in Nicaragua.…
Campbell, Ruth; Capek, Cheryl M; Gazarian, Karine; MacSweeney, Mairéad; Woll, Bencie; David, Anthony S; McGuire, Philip K; Brammer, Michael J
In this study, the first to explore the cortical correlates of signed language (SL) processing under point-light display conditions, the observer identified either a signer or a lexical sign from a display in which different signers were seen producing a number of different individual signs. Many of the regions activated by point-light under these conditions replicated those previously reported for full-image displays, including regions within the inferior temporal cortex that are specialised for face and body-part identification, although such body parts were invisible in the display. Right frontal regions were also recruited, a pattern not usually seen in full-image SL processing. This activation may reflect the recruitment of information about person identity from the reduced display. A direct comparison of identify-signer and identify-sign conditions showed that these tasks relied to different extents on posterior inferior regions. Signer identification elicited greater activation than sign identification in (bilateral) inferior temporal gyri (BA 37/19), fusiform gyri (BA 37), middle and posterior portions of the middle temporal gyri (BAs 37 and 19), and superior temporal gyri (BA 22 and 42). Right inferior frontal cortex was a further focus of differential activation (signer>sign). These findings suggest that the neural systems supporting point-light displays for the processing of SL rely on a cortical network including areas of the inferior temporal cortex specialized for face and body identification. While this might be predicted from other studies of whole-body point-light actions (Vaina, Solomon, Chowdhury, Sinha, & Belliveau, 2001), it is not predicted from the perspective of spoken language processing, where voice characteristics and speech content recruit distinct cortical regions (Stevens, 2004) in addition to a common network. In this respect, our findings contrast with studies of voice/speech recognition (Von Kriegstein, Kleinschmidt, Sterzer…
Discusses the history of problems encountered by Hispanic writers, who can enrich the English language and American literature and instill pride in the written words of Spanish and its many dialects by embracing two different philosophies, two different languages, and two different cultures. (AN)
Qi, Cathy H.; Kaiser, Ann P.; Marley, Scott C.; Milan, Stephanie
The purposes of the study were to determine (a) the ability of two spontaneous language measures, mean length of utterance in morphemes (MLU-m) and number of different words (NDW), to identify African American preschool children at low and high levels of language ability; (b) whether child chronological age was related to the performance of either…
Garrity, April W.; Oetting, Janna B.
Purpose: To examine 3 forms ("am," "is," "are") of auxiliary BE production by African American English (AAE)-speaking children with and without specific language impairment (SLI). Method: Thirty AAE speakers participated: 10 six-year-olds with SLI, 10 age-matched controls, and 10 language-matched controls. BE production was examined through…
Craig, Holly K.; Washington, Julie A.; Thompson, Connie A.
Reference profiles for characterizing the language abilities of elementary-grade African American students are important for assessment and instructional planning. H. K. Craig and J. A. Washington (2002) reported performance for 100 typically developing preschoolers and kindergartners on 5 traditional language measures: mean length of…
In "True to the Language Game", Keith Gilyard, one of the major African American figures to emerge in language and cultural studies, makes his most seminal work available in one volume. This collection of new and previously published essays contains Gilyard's most relevant scholarly contributions to deliberations about linguistic diversity,…
Hamel, Rainer Enrique; Álvarez López, Elisa; Carvalhal, Tatiana Pereira
This article starts with an overview of the sociolinguistic situation in Latin America as a context for language policy and planning (LPP) decisions in the academic field. Then it gives a brief overview of the language policy challenges faced by universities to cope with neoliberal internationalisation. A conceptualisation of the domain as a…
Courtin, C; Hervé, P-Y; Petit, L; Zago, L; Vigneau, M; Beaucousin, V; Jobard, G; Mazoyer, B; Mellet, E; Tzourio-Mazoyer, N
"Highly iconic" structures in Sign Language enable a narrator to act, switch characters, describe objects, or report actions in four dimensions. This group of linguistic structures has no real spoken-language equivalent. Topographical descriptions are also achieved in a sign-language-specific manner via the use of signing-space and spatial-classifier signs. We used functional magnetic resonance imaging (fMRI) to compare the neural correlates of topographic discourse and highly iconic structures in French Sign Language (LSF) in six hearing native signers, children of deaf adults (CODAs), and six LSF-naïve monolinguals. LSF materials consisted of videos of a lecture excerpt signed without spatially organized discourse or highly iconic structures (Lect LSF), a tale signed using highly iconic structures (Tale LSF), and a topographical description using a diagrammatic format and spatial-classifier signs (Topo LSF). We also presented texts in spoken French (Lect French, Tale French, Topo French) to all participants. With both languages, the Topo texts activated several different regions that are involved in mental navigation and spatial working memory. No specific correlate of LSF spatial discourse was evidenced. The same regions were more activated during Tale LSF than Lect LSF in CODAs, but not in monolinguals, in line with the presence of signing-space structure in both conditions. Motion processing areas and parts of the fusiform gyrus and precuneus were more active during Tale LSF in CODAs; no such effect was observed with French or in LSF-naïve monolinguals. These effects may be associated with perspective-taking and acting during personal transfers. © 2010 Elsevier Inc. All rights reserved.
Liu, Hsiu Tan; Squires, Bonita; Liu, Chun Jung
We can gain a better understanding of short-term memory processes by studying different language codes and modalities. Three experiments were conducted to investigate: (a) Taiwanese Sign Language (TSL) digit spans in Chinese/TSL hearing bilinguals (n = 32); (b) American Sign Language (ASL) digit spans in English/ASL hearing bilinguals (n = 15);…
Jansson, Karin, Ed.
A project in Sweden focuses on the early linguistic development of preschool deaf children in families where the parents are also deaf. The School for the Deaf in Sweden is involved with describing the Swedish language as it appears to a deaf learner, a description to be used as a basis for teacher training and inservice in the teaching of the…
Stempler, Amy F.; Polger, Mark Aaron
Signage represents more than directions or policies; it is informational and promotional, and it sets the tone of the environment. To be effective, signage must be consistent, concise, and free of jargon and punitive language. An efficient assessment of signage should include a complete inventory of existing signage, including an analysis of the types…
Lazutina, Tatyana V.; Pupysheva, Irina N.; Shcherbinin, Mikhail N.; Baksheev, Vladimir N.; Patrakova, Galina V.
This article examines art in its semiotic aspect. The aim of the research is to identify the specificity of the language of architecture as a special form of symbolic art, understood as the process of granting symbolic value to aesthetic phenomena, a process conditioned by the cultural and historical context and allowing the transmission of the values represented at the level of…
items from the other, there is very often a symbolic value in switching to another language. Thus ... matters such as ethnic identity, power and prestige, solidarity, distance and social ... Also, a higher level of education often coincides with more.
Pearson, Barbara Zurer; Conner, Tracy; Jackson, Janice E
Language difference among speakers of African American English (AAE) has often been considered language deficit, based on a lack of understanding about the AAE variety. Following Labov (1972), Wolfram (1969), Green (2002, 2011), and others, we define AAE as a complex rule-governed linguistic system and briefly discuss language structures that it shares with general American English (GAE) and others that are unique to AAE. We suggest ways in which mistaken ideas about the language variety add to children's difficulties in learning the mainstream dialect and, in effect, deny them the benefits of their educational programs. We propose that a linguistically informed approach that highlights correspondences between AAE and the mainstream dialect and trains students and teachers to understand language varieties at a metalinguistic level creates environments that support the academic achievement of AAE-speaking students. Finally, we present 3 program types that are recommended for helping students achieve the skills they need to be successful in multiple linguistic environments.
Cripps, Jody H.; Cooper, Sheryl B.; Supalla, Samuel J.; Evitts, Paul M.
Deaf individuals who use American Sign Language (ASL) are rarely the focus of professionals in speech-language pathology. Although society is widely thought of in terms of those who speak, this norm is not all-inclusive. Many signing individuals exhibit disorders in signed language and need treatment much like their speaking peers. Although there…
Tang, Hao; Shimizu, Robin; Chen, Moon S
The authors documented California's tobacco control initiatives for Asian Americans and the current tobacco use status among Asian subgroups and provide a discussion of the challenges ahead. The California Tobacco Control Program has employed a comprehensive approach to decrease tobacco use in Asian Americans, including ethnic-specific media campaigns, culturally competent interventions, and technical assistance and training networks. Surveillance of tobacco use among Asian Americans and the interpretation of the results have always been a challenge. Data from the 2001 California Health Interview Survey (CHIS) were analyzed to provide smoking prevalence estimates for all Asian Americans and Asian-American subgroups, including Korean, Filipino, Japanese, South Asian, Chinese, and Vietnamese. Current smoking prevalence was analyzed by gender and by English proficiency level. Cigarette smoking prevalence among Asian males in general was almost three times that among Asian females. Korean and Vietnamese males had higher cigarette smoking prevalence rates than males in other subgroups. Although Asian females in general had low smoking prevalence rates, significant differences were found among Asian subgroups, from 1.1% (Vietnamese) to 12.7% (Japanese). Asian men who had high English proficiency were less likely to be smokers than men with lower English proficiency. Asian women with high English proficiency were more likely to be smokers than women with lower English proficiency. Smoking prevalence rates among Asian Americans in California differed significantly on the basis of ethnicity, gender, and English proficiency. English proficiency seemed to have the effect of reducing smoking prevalence rates among Asian males but had just the opposite effect among Asian females. Cancer 2005. (c) 2005 American Cancer Society.
Marshall, Chloë R; Hobsbaum, Angela
Children who are learning English as an Additional Language (EAL) may start school with smaller vocabularies than their monolingual peers. Given the links between vocabulary and academic achievement, it is important to evaluate interventions that are designed to support vocabulary learning in this group of children. To evaluate an intervention, namely Sign-Supported English (SSE), which uses conventionalized manual gestures alongside spoken words to support the learning of English vocabulary by children with EAL. Specifically, the paper investigates whether SSE has a positive impact on Reception class children's vocabulary development over and above English-only input, as measured over a 6-month period. A total of 104 children aged 4-5 years were recruited from two neighbouring schools in a borough of Outer London. A subset of 66 had EAL. In one school, the teachers used SSE, and in the other school they did not. Pupils in each school were tested at two time points (the beginning of terms 1 and 3) using three different assessments of vocabulary. Classroom-based observations of the teachers' and pupils' manual communication were also carried out. Results of the vocabulary assessments revealed that using SSE had no effect on how well children with EAL learnt English vocabulary: EAL pupils from the SSE school did not learn more words than EAL pupils at the comparison school. SSE was used in almost half of the teachers' observations in the SSE school, while spontaneous gestures were used with similar frequency by teachers in the comparison school. There are alternative explanations for the results. The first is that the use of signs alongside spoken English does not help EAL children of this age to learn words. Alternatively, SSE does have an effect, but we were unable to detect it because (1) teachers in the comparison school used very rich natural gesture and/or (2) teachers in the SSE school did not know enough BSL and this inhibited their use of spontaneous gesture
The text deals with the signs in Czech Sign Language for the basic colours (white, black, red, green, yellow, blue, brown and grey) from a diachronic point of view. On the basis of historical written descriptions of these signs from 1834–1907, the motivation of the signs is analysed (the signs were derived from a typical object of the particular colour), as well as their slow lexicalization and form (especially the components of the signs: the place of articulation, handshape and movement). At the same time, the historical signs are compared to the current signs, and the text provides an analysis of trends in changes to the phonological/morphological structure of the signs (changes in place of articulation, moving down from the centre of the face to its periphery; shortening of the movement; changing the shape of the hand, etc.). In addition, the text examines the possible relationship of these signs to the signs for colours in Austrian, German and French Sign Language (the languages that had been used in deaf education at the end of the 18th and 19th centuries), according to preserved records. For the historical signs, motivation and form were compared, along with a detailed look at the contemporary signs. This is the first examination of Czech Sign Language from an etymological point of view.
Odom, Erika C.; Vernon-Feagans, Lynne; Crouter, Ann C.
In this study, observed maternal positive engagement and perception of work-family spillover were examined as mediators of the association between maternal nonstandard work schedules and children's expressive language outcomes in 231 African American families living in rural households. Mothers reported their work schedules when their child was 24 months of age, and children's expressive language development was assessed during a picture book task at 24 months and with a standardized assessment…
Cheng, Juan; Chen, Xun; Liu, Aiping; Peng, Hu
Sign language recognition (SLR) is an important communication tool between the deaf and the external world. It is highly necessary to develop a worldwide continuous and large-vocabulary-scale SLR system for practical usage. In this paper, we propose a novel phonology- and radical-coded Chinese SLR framework to demonstrate the feasibility of continuous SLR using accelerometer (ACC) and surface electromyography (sEMG) sensors. The continuous Chinese characters, consisting of coded sign gestures, are first segmented into active segments using EMG signals by means of a moving average algorithm. Then, features of each component are extracted from both ACC and sEMG signals of active segments (i.e., palm orientation represented by the mean and variance of ACC signals, hand movement represented by the fixed-point ACC sequence, and hand shape represented by both the mean absolute value (MAV) and autoregressive model coefficients (ARs)). Afterwards, palm orientation is first classified, distinguishing "Palm Downward" sign gestures from "Palm Inward" ones. Only the "Palm Inward" gestures are sent for further hand movement and hand shape recognition by the dynamic time warping (DTW) algorithm and hidden Markov models (HMM), respectively. Finally, component recognition results are integrated to identify one certain coded gesture. Experimental results demonstrate that the proposed SLR framework with a vocabulary scale of 223 characters can achieve an averaged recognition accuracy of 96.01% ± 0.83% for coded gesture recognition tasks and 92.73% ± 1.47% for character recognition tasks. Besides, it demonstrates that sEMG signals are rather consistent for a given hand shape, independent of hand movements. Hence, the number of training samples will not be significantly increased when the vocabulary scale increases, since not only the number of the completely new proposed coded gestures is constant and limited, but also the transition movement which connects successive signs needs no…
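The DTW matching step in frameworks like the one above can be illustrated with a minimal sketch. This is not the authors' implementation; the function name and the use of a single 1-D trajectory (rather than multi-axis ACC sequences) are simplifying assumptions:

```python
def dtw_distance(a, b):
    """Classic dynamic time warping (DTW) distance between two 1-D sequences.

    A toy stand-in for matching a hand-movement trajectory against a sign
    template; a real SLR system would align multi-axis sensor streams.
    """
    INF = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = minimal accumulated cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # skip a frame in a
                                 D[i][j - 1],      # skip a frame in b
                                 D[i - 1][j - 1])  # align frames
    return D[n][m]
```

Because DTW warps the time axis, the same movement produced faster or slower than its template still aligns at low cost, which is what makes it attractive for handling variation in signing rate.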
Gonzales, Angela A; Garroutte, Eva; Ton, Thanh G N; Goldberg, Jack; Buchwald, Dedra
American Indians have one of the lowest colorectal cancer (CRC) screening rates for any racial/ethnic group in the U.S., yet reasons for their low screening participation are poorly understood. We examine whether tribal language use is associated with knowledge and use of CRC screening in a community-based sample of American Indians. Using logistic regression to estimate the association between tribal language use and CRC test knowledge and receipt we found participants speaking primarily English were no more aware of CRC screening tests than those speaking primarily a tribal language (OR = 1.16 [0.29, 4.63]). Participants who spoke only a tribal language at home (OR = 1.09 [0.30, 4.00]) and those who spoke both a tribal language and English (OR = 1.74 [0.62, 4.88]) also showed comparable odds of receipt of CRC screening. Study findings failed to support the concept that use of a tribal language is a barrier to CRC screening among American Indians.
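The effect sizes reported above (e.g., OR = 1.16 [0.29, 4.63]) are odds ratios with 95% confidence intervals. As a generic illustration of how such figures arise from a 2x2 table, the sketch below computes an unadjusted odds ratio with a Wald interval; the study itself used logistic regression with covariate adjustment, so this is not its actual model:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

An interval that spans 1.0, as all the intervals quoted above do, is consistent with the study's conclusion that tribal language use showed no detectable association with screening.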
The present study was designed to compare the effectiveness and efficiency of two discrete trial teaching procedures for teaching receptive language skills to children with autism. While verbal instructions were delivered alone during the first procedure, all verbal instructions were combined with simple gestures and/or signs during the second…
Petitto, Laura Ann; Holowka, Siobhan
Examines whether early simultaneous bilingual language exposure causes children to be language delayed or confused. Cites research suggesting normal and parallel linguistic development occurs in each language in young children and young children's dual language developments are similar to monolingual language acquisition. Research on simultaneous…
This study focused on presenting the fieldwork findings derived from studying North-American missionaries' relational dynamics with the Japanese people, and the strategies that impacted their language-culture learning. This study also focused on applying the fieldwork findings towards the creation of a coaching model designed to help missionaries…
Hashimoto, Kumi; Lee, Jin Sook
This article documents the heritage-language (HL) literacy practices of three Japanese American families residing in a predominantly Anglo and Latino community. Through interviews and observations, this study investigates Japanese children's HL-literacy practices, parental attitudes toward HL literacy, and challenges in HL-literacy development in…
Ek, Lucila D.
This article examines the transnationalism of a Pentecostal Guatemalan-American young woman who is a second-generation immigrant. Amalia traveled to Guatemala from when she was six months old until her sophomore year in college. These visits to Guatemala have helped her maintain her Guatemalan language, culture, and identity in the larger Southern…
Deutsch, Diana; Dooley, Kevin; Henthorn, Trevor; Head, Brian
Absolute pitch (AP), the ability to name a musical note in the absence of a reference note, is extremely rare in the U.S. and Europe, and its genesis is unclear. The prevalence of AP was examined among students in an American music conservatory as a function of age of onset of musical training, ethnicity, and fluency in speaking a tone language. Among those of East Asian ethnicity, the performance level on a test of AP was significantly higher for those who spoke a tone language very fluently compared with those who spoke a tone language fairly fluently, and also compared with those who were not fluent in speaking a tone language. The performance level of this last group did not differ significantly from that of Caucasian students who spoke only a nontone language. Early onset of musical training was associated with enhanced performance, but this did not interact with the effect of language. Further analyses showed that the results could not be explained by country of early music education. The findings support the hypothesis that the acquisition of AP by tone language speakers involves the same process as the acquisition of a second tone language.
Liang, Wenchi; Wang, Judy; Chen, Mei-Yuh; Feng, Shibao; Yi, Bin; Mandelblatt, Jeanne S
Mammography screening rates among Chinese American women have been reported to be low. This study examines whether and how culture views and language ability influence mammography adherence in this mostly immigrant population. Asymptomatic Chinese American women (n = 466) aged 50 and older, recruited from the Washington, D.C. area, completed a telephone interview. Regular mammography was defined as having two mammograms at age-appropriate recommended intervals. Cultural views were assessed by 30 items, and language ability measured women's ability in reading, writing, speaking, and listening to English. After controlling for risk perception, worry, physician recommendation, family encouragement, and access barriers, women holding a more Chinese/Eastern cultural view were significantly less likely to have had regular mammograms than those having a Western cultural view. English ability was positively associated with mammography adherence. The authors' results imply that culturally sensitive and language-appropriate educational interventions are likely to improve mammography adherence in this population.
Alexander, C. J.; Angrum, A.; Martin, M.; Ali, N.; Kingfisher, J.; Treuer, A.; Grant, G.; Ciotti, J.
In the Western tradition, words and vocabulary encapsulate much of how knowledge enters the public discourse and is passed from one generation to the next. Much of Native American knowledge is passed along in an oral tradition. Chants and ceremonies contain context and long-baseline data on the environment (geology, climate, and astronomy) that may even surpass the lifespan of a single individual. For Native American students and researchers, the concept of 'modern research and science education' may be wrapped up in the conundrum of assimilation and loss of cultural identification and traditional way of life. That conundrum is also associated with the lack of language and vocabulary with which to discuss 'modern research.' Native Americans emphasize the need to know themselves and their own culture when teaching their students. Many Native American communities recognize that the retention of their language, and the need to make the language relevant to the technological age we live in, represents one of their largest and most urgent challenges. One strategy for making science education relevant to Native American learners is identifying appropriate terms that cross the cultural divide. More than just words and vocabulary, the thought processes and word/concept relationships can be quite different in the native cultures. The U.S. Rosetta Project has worked to identify words associated with Western 'STEM' concepts in three Native American communities: Navajo, Hawaiian, and Ojibwe. The U.S. Rosetta Project is NASA's contribution to the International Rosetta Mission. The Rosetta stone, inspiration for the mission's name, is expected to provide the keys to the primordial solar system the way the original Rosetta Stone provided a key to ancient language. Steps taken so far include identification and presentation of online astronomy, geology, and physical science vocabulary terms in the native language, and identification of teachers and classrooms, often in…
Smith, Shana; Bellon-Harn, Monica L
The purpose of this exploratory study is to examine rates of auxiliary is and are across dialect patterns produced by African American English with specific language impairment (AAE-SLI) children following language treatment. The following research question is asked: Do AAE-SLI children exhibit rates of auxiliary is and are across dialect patterns consistent with previous reports of typically developing children and adult AAE speakers? A pre-/post-test design was used to identify patterns in which auxiliary is and are were produced at significant levels. Individual performance was included to examine variable rates of use across patterns. Group and individual results suggest children used auxiliary is and are in dialect patterns at rates consistent with typically developing child and adult AAE speakers. We conclude that rates of use may contribute to evidence-based guidelines for morphological intervention with AAE-SLI children.
Alexander Graham Bell is often portrayed as either hero or villain of deaf individuals and the Deaf community. His writings, however, indicate that he was neither, and was not as clearly definite in his beliefs about language as is often supposed. The following two articles, reprinted from The Educator (1898), Vol. V, pp. 3?4 and pp. 38?44,…
Weimer, Amy A; Gasquoine, Philip G
Belief reasoning and emotion understanding were measured among 102 Mexican American bilingual children ranging from 4 to 7 years old. All children were tested in English and Spanish after ensuring minimum comprehension in each language. Belief reasoning was assessed using 2 false and 1 true belief tasks. Emotion understanding was measured using subtests from the Test for Emotion Comprehension. The influence of family background variables of yearly income, parental education level, and number of siblings on combined Spanish and English vocabulary, belief reasoning, and emotion understanding was assessed by regression analyses. Age and emotion understanding predicted belief reasoning. Vocabulary and belief reasoning predicted emotion understanding. When the sample was divided into language-dominant and balanced bilingual groups on the basis of language proficiency difference scores, there were no significant differences on belief reasoning or emotion understanding. Language groups were demographically similar with regard to child age, parental educational level, and family income. Results suggest Mexican American language-dominant and balanced bilinguals develop belief reasoning and emotion understanding similarly.
Examines the concept of linguistic legitimacy (and illegitimacy) using three specific cases--Black English, American Sign Language, and Esperanto. The paper argues that legitimacy is grounded more on personal, political, and ideological biases than on linguistic criteria. (SM)
Awad, George M.
This dissertation describes new techniques that can be used in a sign language recognition (SLR) system and, more generally, in human gesture systems. An SLR system consists of three main components: a skin detector, a tracker, and a recognizer. The skin detector is responsible for segmenting skin objects, such as the face and hands, from video frames. The tracker keeps track of the hand location (more specifically, its bounding box) and detects any occlusions that might occur between skin objects. ...
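The first two pipeline stages described above can be illustrated with a minimal sketch. This is not the dissertation's method: the rule-based RGB skin threshold and the synthetic frame below are illustrative assumptions, standing in for the trained color models and real video a full system would use.

```python
import numpy as np

def detect_skin(frame_rgb):
    """Classify pixels as skin with a classic rule-based RGB threshold:
    R > 95, G > 40, B > 20, R > G, R > B, and channel spread > 15.
    A toy stand-in for the skin-detector component."""
    r = frame_rgb[..., 0].astype(int)
    g = frame_rgb[..., 1].astype(int)
    b = frame_rgb[..., 2].astype(int)
    spread = frame_rgb.max(axis=-1).astype(int) - frame_rgb.min(axis=-1).astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (spread > 15)

def bounding_box(mask):
    """Return (row_min, row_max, col_min, col_max) of the detected region,
    i.e. the box a tracker would follow from frame to frame."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return rows.min(), rows.max(), cols.min(), cols.max()

# Synthetic 8x8 frame: a skin-colored patch on a dark background.
frame = np.zeros((8, 8, 3), dtype=np.uint8)
frame[2:5, 3:6] = (200, 120, 90)   # plausible skin tone
mask = detect_skin(frame)
print(bounding_box(mask))          # (2, 4, 3, 5)
```

The tracker stage would re-run `bounding_box` per frame and flag occlusions when two skin regions' boxes overlap.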
This study is part of the European project Justisigns. Its aim is to ascertain the experiences of Deaf people who came into contact with the justice system in Flanders. The literature review explains the relevant Conventions of the United Nations (UN) and the European Union (EU) and the two EU Directives concerning the demand for interpreters in police interviews. Furthermore, an overview is given of the background in relation to accessibility, sign language, and Deaf people in Fl...
Lewis, Cecil M; Tito, Raúl Y; Lizárraga, Beatriz; Stone, Anne C
Despite a long history of complex societies and despite extensive present-day linguistic and ethnic diversity, relatively few populations in Peru have been sampled for population genetic investigations. In order to address questions about the relationships between South American populations and about the extent of correlation between genetic distance, language, and geography in the region, mitochondrial DNA (mtDNA) hypervariable region I sequences and mtDNA haplogroup markers were examined in 33 individuals from the state of Ancash, Peru. These sequences were compared to those from 19 American Indian populations using diversity estimates, AMOVA tests, mismatch distributions, a multidimensional scaling plot, and regressions. The results show correlations between genetics, linguistics, and geographical affinities, with stronger correlations between genetics and language. Additionally, the results suggest a pattern of differential gene flow and drift in western vs. eastern South America, supporting previous mtDNA and Y chromosome investigations. (c) 2004 Wiley-Liss, Inc
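The regressions described above correlate pairwise distance matrices (genetic vs. linguistic or geographic), the core of a Mantel-style analysis. A minimal sketch of that statistic follows; the four-population distance matrices are hypothetical toy data, not the Ancash mtDNA results.

```python
import numpy as np

def matrix_correlation(d1, d2):
    """Pearson correlation between the upper triangles of two symmetric
    distance matrices -- the statistic behind Mantel-style tests of
    genetic vs. geographic (or linguistic) distance. (A full Mantel
    test would add a permutation step to assess significance.)"""
    iu = np.triu_indices_from(d1, k=1)
    return np.corrcoef(d1[iu], d2[iu])[0, 1]

# Hypothetical pairwise distances among four populations.
genetic = np.array([[0, 1, 2, 3],
                    [1, 0, 1, 2],
                    [2, 1, 0, 1],
                    [3, 2, 1, 0]], dtype=float)
geographic = genetic * 10.0   # perfectly collinear toy data
print(round(matrix_correlation(genetic, geographic), 3))  # 1.0
```

A correlation nearer 1 for the genetic/linguistic pair than for the genetic/geographic pair would mirror the paper's finding of stronger correlations between genetics and language.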
We present a novel open-source 3D-printable dexterous anthropomorphic robotic hand specifically designed to reproduce the hand poses of Sign Languages for deaf and deaf-blind users. We improved the InMoov hand, enhancing dexterity by adding abduction/adduction degrees of freedom to three fingers (thumb, index, and middle) and a three-degree-of-freedom parallel spherical joint wrist. A systematic kinematic analysis is provided. The proposed robotic hand is validated in the framework of the PARLOMA project. PARLOMA aims at developing a telecommunication system for deaf-blind people, enabling remote transmission of signs from tactile Sign Languages. Both hardware and software are provided online to promote further improvements from the community.
Hou, Yang; Neff, Lisa A; Kim, Su Yeong
The current study examines the longitudinal indirect pathways linking language acculturation to marital quality. Three waves of data were collected from 416 Chinese American couples over eight years (mean age at Wave 1: 48 for husbands, 44 for wives). Actor-partner interdependence model analyses revealed that for both husbands and wives, lower levels of language acculturation were associated with higher levels of stress over being stereotyped as a perpetual foreigner. Individuals' foreigner stress, in turn, was directly related to greater levels of their own and their partners' marital warmth, suggesting that foreigner stress may have some positive relational effects. However, individuals' foreigner stress also was associated with increases in their own depressive symptoms, which predicted higher levels of marital hostility in the partner. Overall, these results underscore the complexity of how language acculturation and foreigner stress relate to marital quality and the importance of considering the interdependence of the marital system.
Day, Linda; Sutton-Spence, Rachel
Research presented here describes the sign names and the customs of name allocation within the British Deaf community. While some aspects of British Sign Language sign names and British Deaf naming customs differ from those in most Western societies, there are many similarities. There are also similarities with other societies outside the more…
Morford, Jill P.; Kroll, Judith F.; Piñar, Pilar; Wilkinson, Erin
Recent evidence demonstrates that American Sign Language (ASL) signs are active during print word recognition in deaf bilinguals who are highly proficient in both ASL and English. In the present study, we investigate whether signs are active during print word recognition in two groups of unbalanced bilinguals: deaf ASL-dominant and hearing…
Garayeva, Almira K.; Akhmetzyanov, Ildar G.; Khismatullina, Lutsia G.
The importance of this study's topic is determined by several factors: the increased interest of linguists in the interaction between language and culture, and the need to study onomastic units as a body of language. The purpose of this article is to identify the motivational types of nicknames of famous American and English public…
Emmorey, Karen; Korpics, Franco; Petronio, Karen
The role of visual feedback during the production of American Sign Language was investigated by comparing the size of signing space during conversations and narrative monologues for normally sighted signers, signers with tunnel vision due to Usher syndrome, and functionally blind signers. The interlocutor for all groups was a normally sighted deaf…
Quinto-Pozos, David; Forber-Pratt, Anjali J.; Singleton, Jenny L.
Purpose: This study focused on whether developmental communication disorders exist in American Sign Language (ASL) and how they might be characterized. ASL studies is an emerging field; educators and clinicians have minimal access to descriptions of communication disorders of the signed modality. Additionally, there are limited resources for…
Nur Muhammad Ardiansyah
This article presents a semantic study of figurative language in a selected work of American English literature: the short story 'The Monkey's Paw' by the renowned writer William Wymark Jacobs. The researcher's objective is to identify the forms of figurative language used within the passage. Briefly, figurative language is a feature of every language in which expressions are used to convey a meaning different from the usual literal interpretation. The analysis of 'The Monkey's Paw' discusses the varieties of figurative language (metaphor, personification, hyperbole, and symbolism), along with related devices built on unusual word constructions and combinations, such as onomatopoeia, idiom, and imagery, in order to uncover the true meaning behind each instance of figurative language.
Mills, Sarah D; Fox, Rina S; Malcarne, Vanessa L; Roesch, Scott C; Champagne, Brian R; Sadler, Georgia Robins
The Generalized Anxiety Disorder-7 scale (GAD-7) is a self-report questionnaire that is widely used to screen for anxiety. The GAD-7 has been translated into numerous languages, including Spanish. Previous studies evaluating the structural validity of the English and Spanish versions indicate a unidimensional factor structure in both languages. However, the psychometric properties of the Spanish language version have yet to be evaluated in samples outside of Spain, and the measure has not been tested for use among Hispanic Americans. This study evaluated the reliability, structural validity, and convergent validity of the English and Spanish language versions of the GAD-7 for Hispanic Americans in the United States. A community sample of 436 Hispanic Americans with an English (n = 210) or Spanish (n = 226) language preference completed the GAD-7. Multiple-group confirmatory factor analysis (CFA) was used to examine the goodness-of-fit of the unidimensional factor structure of the GAD-7 across language-preference groups. Results from the multiple-group CFA indicated a similar unidimensional factor structure with equivalent response patterns and item intercepts, but different variances, across language-preference groups. Internal consistency was good for both English and Spanish language-preference groups. The GAD-7 also evidenced good convergent validity as demonstrated by significant correlations in expected directions with the Perceived Stress Scale, the Patient Health Questionnaire-9, and the Physical Health domain of the World Health Organization Quality of Life-BREF assessment. The unidimensional GAD-7 is suitable for use among Hispanic Americans with an English or Spanish language preference.
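The internal-consistency result reported above is conventionally quantified with Cronbach's alpha. A minimal sketch of that computation follows; the five-respondent, three-item Likert matrix is fabricated for illustration and is not GAD-7 data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score),
    where k is the number of items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical 0-3 Likert responses from five respondents to a
# three-item scale (toy data, not the GAD-7).
scores = [[0, 1, 1],
          [1, 1, 2],
          [2, 2, 2],
          [3, 2, 3],
          [3, 3, 3]]
print(round(cronbach_alpha(scores), 2))  # 0.93
```

Values of roughly 0.80 or higher are commonly read as "good" internal consistency, which is the interpretation the abstract gives for both language-preference groups.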
Baik, Sharon H; Fox, Rina S; Mills, Sarah D; Roesch, Scott C; Sadler, Georgia Robins; Klonoff, Elizabeth A; Malcarne, Vanessa L
This study examined the psychometric properties of the Perceived Stress Scale-10 among 436 community-dwelling Hispanic Americans with English or Spanish language preference. Multigroup confirmatory factor analysis examined the factorial invariance of the Perceived Stress Scale-10 across language groups. Results supported a two-factor model (negative, positive) with equivalent response patterns and item intercepts but different factor covariances across languages. Internal consistency reliability of the Perceived Stress Scale-10 total and subscale scores was good in both language groups. Convergent validity was supported by expected relationships of Perceived Stress Scale-10 scores to measures of anxiety and depression. These results support the use of the Perceived Stress Scale-10 among Hispanic Americans.
The sandwich sign is demonstrated on cross-sectional imaging, commonly on CT or ultrasound. It refers to homogeneous soft-tissue masses of mesenteric lymphadenopathy that resemble the two halves of a sandwich bun, encasing the mesenteric fat and tubular mesenteric vessels that constitute the 'sandwich filling' (Figs ...