The translation of biblical texts into South African Sign Language. ... Native signers were used as translators with the assistance of hearing specialists in the fields of religion and translation studies. ... AJOL African Journals Online.
Full Text Available ... the fact that the target structure is SASL, the home language of the Deaf user, already facilitates the communication. Ultimately the message will be delivered more naturally by a signing avatar. We shall present further scenarios for future work. 6.1 Disambiguation: Disambiguation can be improved on two levels: firstly, by eliciting more or better information from the user through the AAC interface and secondly, by improving certain aspects of the MT system. We discuss both...
Silvana Aguiar dos Santos
Full Text Available http://dx.doi.org/10.5007/1984-8420.2015v16n2p101 This paper is the result of an initial attempt to establish a connection between Brazil and Mozambique regarding sign language translation and interpreting. It reviews some important landmarks in language policies aimed at sign languages in these countries and discusses how certain actions directly impact political decisions related to sign language translation and interpreting. In this context, two lines of argument are developed. The first one addresses the role of sign language translation and interpreting in the Portuguese-speaking context, since Portuguese is the official language in both countries; the other offers some reflections about the Deaf movements and the movements of sign language translators and interpreters, the legal recognition of sign languages, the development of undergraduate courses and the contemporary challenges in the work of translation professionals. Finally, it is suggested that sign language translators and interpreters in both Brazil and Mozambique undertake efforts to press government bodies to invest in: (i area-specific training for translators and interpreters, (ii qualification of the services provided by such professionals, and (iii development of human resources at master’s and doctoral levels in order to strengthen research on sign language translation and interpreting in the Community of Portuguese-Speaking Countries.
Triyono, L.; Pratisto, E. H.; Bawono, S. A. T.; Purnomo, F. A.; Yudhanto, Y.; Raharjo, B.
This research focuses on the development of an Android-based sign language translator application using OpenCV; the application is based on colour differences. The authors also utilize a Support Vector Machine (SVM) to predict the label. Results showed that the fingertip-coordinate search method can recognize hand gestures made with an open hand, while gestures made with a clenched hand are recognized using Hu Moments values. The fingertip method is more resilient in gesture recognition, achieving a 95% success rate at distances of 35 cm and 55 cm, at light intensities of approximately 90 lux and 100 lux, and against a plain light-green background, compared with a 40% success rate for the Hu Moments method under the same parameters. Against outdoor backgrounds the application still cannot be used reliably, with only 6 successful recognitions and the rest failed.
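The Hu Moments mentioned above are seven shape descriptors, invariant under translation, scale and rotation, computed from a binary hand mask (OpenCV provides them as `cv2.HuMoments(cv2.moments(mask))`). A minimal NumPy-only sketch, written for illustration rather than taken from the paper:

```python
import numpy as np

def hu_moments(img):
    """Seven Hu moment invariants of a 2-D binary mask (float array)."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00

    def mu(p, q):        # central moment (translation-invariant)
        return ((x - cx) ** p * (y - cy) ** q * img).sum()

    def eta(p, q):       # normalised central moment (also scale-invariant)
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
        (n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
        (n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
        + 4 * n11 * (n30 + n12) * (n21 + n03),
        (3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
    ])
```

Because the descriptors ignore where the hand sits in the frame and how it is rotated, a clenched-hand gesture can be matched against stored templates by comparing these seven values (often after a log transform, since they span many orders of magnitude).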
Luiz Daniel Rodrigues Dinarte
Full Text Available Based on sign language translation research, and engaging with contemporary theories of "deconstruction" (DERRIDA, 2004; DERRIDA & ROUDINESCO, 2004; ARROJO, 1993), this article reflects on some aspects of defining the role and duties of translators and interpreters. We hold that deconstruction is not a method to be applied to linguistic and social phenomena, but a set of political strategies that comes from a speech community which translates texts and thus takes on a translational task, performing an act of reading that inserts sign language into academic linguistic multiplicity.
Parton, Becky Sue
In recent years, research has progressed steadily in regard to the use of computers to recognize and render sign language. This paper reviews significant projects in the field beginning with finger-spelling hands such as "Ralph" (robotics), CyberGloves (virtual reality sensors to capture isolated and continuous signs), camera-based projects such as the CopyCat interactive American Sign Language game (computer vision), and sign recognition software (Hidden Markov Modeling and neural network systems). Avatars such as "Tessa" (Text and Sign Support Assistant; three-dimensional imaging) and spoken language to sign language translation systems such as Poland's project entitled "THETOS" (Text into Sign Language Automatic Translator, which operates in Polish; natural language processing) are addressed. The application of this research to education is also explored. The "ICICLE" (Interactive Computer Identification and Correction of Language Errors) project, for example, uses intelligent computer-aided instruction to build a tutorial system for deaf or hard-of-hearing children that analyzes their English writing and makes tailored lessons and recommendations. Finally, the article considers synthesized sign, which is being added to educational material and has the potential to be developed by students themselves.
The aim of this thesis is to describe Web accessibility in state administration in the Federal Republic of Germany in relation to the socio-demographic group of deaf sign language users who did not have the opportunity to gain proper knowledge of the written form of the German language. The Deaf community's demand for information in an accessible form, as grounded in legal documents, is presented in relation to translation theory, along with how translating from written texts into sign language works in pract...
Mean Foong, Oi; Low, Tang Jung; La, Wai Wan
The process of learning and understanding sign language may be cumbersome to some; this paper therefore proposes a solution to this problem by providing a voice (English language) to sign language translation system using speech and image processing techniques. Speech processing, which includes speech recognition, is the study of recognizing the words being spoken regardless of who the speaker is. This project uses template-based recognition as the main approach, in which the V2S system first needs to be trained with speech patterns based on some generic spectral parameter set. These spectral parameter sets are then stored as templates in a database. The system performs the recognition process by matching the parameter set of the input speech against the stored templates to finally display the sign language in video format. Empirical results show that the system has an 80.3% recognition rate.
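The template-matching step can be sketched with dynamic time warping (DTW), a classic way to compare spoken-word feature sequences of unequal length; the feature vectors and labels below are hypothetical, not taken from the V2S system:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two feature sequences,
    each shaped (frames, coefficients)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])    # frame-to-frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(utterance, templates):
    """Label of the stored template closest to the input utterance."""
    return min(templates, key=lambda label: dtw_distance(utterance, templates[label]))
```

In a V2S-style pipeline each stored template would hold the spectral parameter set of one trained word, and the best-matching label would select which sign language video to display.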
Full Text Available This paper moves between two streets: Liverpool Road in the Sydney suburb of Ashfield and Via Sarpi in the Italian city of Milan. What connects these streets is that both have become important sites for businesses in the Chinese diaspora. Moreover, both are streets on which locals have expressed desires for Chinese signs to be translated into the national lingua franca. The paper argues that the cultural politics inherent in this demand for translation cannot be fully understood in the context of national debates about diversity and integration. It is also necessary to consider the emergence of the official Chinese Putonghua as global language, which competes with English but also colonizes dialects and minority languages. In the case of these dual language signs, the space between languages can neither be reduced to a contact zone of minority and majority cultures nor celebrated as a ‘third space’ where the power relations implied by such differences are subverted. At stake is rather a space characterised by what Naoki Sakai calls the schema of co-figuration, which allows the representation of translation as the passage between two equivalents that resemble each other and thus makes possible their determination as conceptually different and comparable. Drawing on arguments about translation and citizenship, the paper critically interrogates the ethos of interchangeability implied by this regime of translation. A closing argument is made for a vision of the common that implies neither civilisational harmony nor the translation of all values into a general equivalent. Primary sources include government reports, internet texts and media stories. These are analyzed using techniques of discourse analysis and interpreted with the help of secondary literature concerning globalisation, language and migration. The disciplinary matrix cuts and mixes between cultural studies, translation studies, citizenship studies, globalization studies and
Alex Giovanny Barreto
Full Text Available The article presents reflections on a methodological translation-practice approach to sign language interpreter education focused on communicative competence. Experience with the translation-practice approach began in several workshops of the Association of Translators and Interpreters of Sign Language of Colombia (ANISCOL) and has now been formalized in the bachelor's degree project in signed languages developed within the UMBRAL Research Group of the National Open and Distance University of Colombia (UNAD). The didactic proposal focuses on Gile's model of efforts, specifically the production and listening efforts. A criticism of translation competence is presented. Minifiction is a literary genre with multiple semiotic and philosophical translation possibilities; these literary texts have elements with great potential for rendering the visual, gestural and spatial depictions of Colombian sign language, which is profitable for interpreter training and education. Through a sign language translation of "El Dinosaurio", we conclude with an outline and reflections on the pedagogical and didactic potential of minifiction and depictions in the design of training activities for sign language interpreters.
Full Text Available A significant part of the Mexican population is deaf. This disability restricts their social interaction with people who do not share it, and vice versa. In this paper we present our advances towards the development of a Mexican Speech-to-Sign-Language translator to assist hearing people in interacting with deaf people. The proposed design methodology considers limited resources for (1) the development of the Mexican Automatic Speech Recognition (ASR) system, which is the main module of the translator, and (2) the Mexican Sign Language (MSL) vocabulary available to represent the recognized sentences. Speech-to-MSL translation was accomplished with an accuracy level over 97% for test speakers different from those selected for ASR training.
Ramesh Mahadev Kagalkar
Full Text Available In the world of sign language and gestures, a great deal of research has been done over the past three decades. This has led to a gradual transition from isolated to continuous, and from static to dynamic, gesture recognition for operations on a restricted vocabulary. In the present scenario, human-machine interactive systems facilitate communication between deaf and hearing-impaired people in real-world situations. In order to improve recognition accuracy, many researchers have deployed methods such as HMMs, artificial neural networks, and the Kinect platform. Effective algorithms for segmentation, classification, pattern matching and recognition have evolved. The main purpose of this paper is to analyze these methods and compare them effectively, which will enable the reader to reach an optimum solution. This creates both challenges and opportunities for sign language recognition research.
Vanessa Regina de Oliveira Martins
Full Text Available This paper aims to discuss the new profile of sign language translators/interpreters that has been taking shape in Brazil since the implementation of policies stimulating the training of these professionals. We qualitatively analyzed answers to a semi-open questionnaire given by undergraduate students from a BA course in translation and interpretation in Brazilian Sign Language/Portuguese. Our results show that those who seek this area are no longer, as they used to be, people who have some relation with the deaf community and/or need some kind of certification for their activity as sign language interpreters. Rather, the students' choice of the course in question had to do with their score in a unified selection system (SISU). This contrasts with the sign language interpreter profile of the 1980s, 1990s and 2000s. As Brazilian Sign Language has become more popular, people seeking a university degree have started to see sign language translation/interpreting as an interesting career option. We therefore discuss the need to provide students who cannot sign with the pedagogical means to learn the language, which will promote the accessibility of Brazilian deaf communities.
Hiddinga, A.; Crasborn, O.
Deaf people who form part of a Deaf community communicate using a shared sign language. When meeting people from another language community, they can fall back on a flexible and highly context-dependent form of communication called international sign, in which shared elements from their own sign
Over the years attempts have been made to standardize sign languages. This form of language planning has been tackled by a variety of agents, most notably teachers of Deaf students, social workers, government agencies, and occasionally groups of Deaf people themselves. Their efforts have most often involved the development of sign language books…
Bakken Jepsen, Julie
A name sign is a personal sign assigned to deaf, hearing-impaired and hearing persons who enter the deaf community. The mouth action accompanying the sign reproduces all or part of the formal first name that the person has received by baptism or naming. Name signs can be compared to nicknames in spoken languages, where a person working as a blacksmith might be referred to by his friends as 'The Blacksmith' ('Here comes the Blacksmith!') instead of by his first name. Name signs are found not only in Danish Sign Language (DSL) but in most, if not all, sign languages studied to date. This article provides examples of the creativity of the users of Danish Sign Language, including some of the processes in the use of metaphors, visual motivation and influence from Danish when name signs are created.
Pfau, R.; Steinbach, M.; Woll, B.
Sign language linguists show here that all the questions relevant to the linguistic investigation of spoken languages can be asked about sign languages. Conversely, questions that sign language linguists consider - even if spoken language researchers have not asked them yet - should also be asked of
Fels, Deborah I.; Richards, Jan; Hardman, Jim; Lee, Daniel G.
The World Wide Web has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, Web accessing can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The…
Van Herreweghe, Mieke; Vermeerbergen, Myriam
In 1997, the Flemish Deaf community officially rejected standardisation of Flemish Sign Language. It was a bold choice, which at the time was not in line with some of the decisions taken in the neighbouring countries. In this article, we shall discuss the choices the Flemish Deaf community has made in this respect and explore why the Flemish Deaf…
Journal of Fundamental and Applied Sciences. ... SL recognition system based on the Malaysian Sign Language (MSL). Implementation results are described. Keywords: sign language; pattern classification; database.
de Vos, C.; Pfau, R.
Since the 1990s, the field of sign language typology has shown that sign languages exhibit typological variation at all relevant levels of linguistic description. These initial typological comparisons were heavily skewed toward the urban sign languages of developed countries, mostly in the Western
Rodríguez Ortiz, I R
This study aims to answer the question of how much of Spanish Sign Language interpreting deaf individuals really understand. The study sample included 36 deaf people (deafness ranging from severe to profound, and varying in the age at which they learned sign language) and 36 hearing people with a good knowledge of sign language (most were interpreters). Sign language comprehension was assessed using passages at secondary-school level. After being exposed to the passages, the participants had to tell what they had understood, answer a set of related questions, and offer a title for each passage. Sign language comprehension by deaf participants was quite acceptable, but not as good as that of the hearing signers who, unlike the deaf participants, were not only late learners of sign language as a second language but had also learned it through formal training.
Ten Holt, G.A.; Arendsen, J.; De Ridder, H.; Van Doorn, A.J.; Reinders, M.J.T.; Hendriks, E.A.
Current automatic sign language recognition (ASLR) seldom uses perceptual knowledge about the recognition of sign language. Using such knowledge can improve ASLR because it can give an indication which elements or phases of a sign are important for its meaning. Also, the current generation of
Schuit, J.; Baker, A.; Pfau, R.
Sign language typology is a fairly new research field and typological classifications have yet to be established. For spoken languages, these classifications are generally based on typological parameters; it would thus be desirable to establish these for sign languages. In this paper, different
In light of the absence of a codified standard variety in British Sign Language and German Sign Language ("Deutsche Gebardensprache") there have been repeated calls for the standardization of both languages primarily from outside the Deaf community. The paper is based on a recent grounded theory study which explored perspectives on sign…
Information and research on Mongolian Sign Language is scant. To date, only one dictionary is available in the United States (Badnaa and Boll 1995), and even that dictionary presents only a subset of the signs employed in Mongolia. The present study describes the kinship system used in Mongolian Sign Language (MSL) based on data elicited from…
Schembri, Adam; Fenlon, Jordan; Cormier, Kearsy; Johnston, Trevor
This paper examines the possible relationship between proposed social determinants of morphological 'complexity' and how this contributes to linguistic diversity, specifically via the typological nature of the sign languages of deaf communities. We sketch how the notion of morphological complexity, as defined by Trudgill (2011), applies to sign languages. Using these criteria, sign languages appear to be languages with low to moderate levels of morphological complexity. This may partly reflect the influence of key social characteristics of communities on the typological nature of languages. Although many deaf communities are relatively small and may involve dense social networks (both social characteristics that Trudgill claimed may lend themselves to morphological 'complexification'), the picture is complicated by the highly variable nature of the sign language acquisition for most deaf people, and the ongoing contact between native signers, hearing non-native signers, and those deaf individuals who only acquire sign languages in later childhood and early adulthood. These are all factors that may work against the emergence of morphological complexification. The relationship between linguistic typology and these key social factors may lead to a better understanding of the nature of sign language grammar. This perspective stands in contrast to other work where sign languages are sometimes presented as having complex morphology despite being young languages (e.g., Aronoff et al., 2005); in some descriptions, the social determinants of morphological complexity have not received much attention, nor has the notion of complexity itself been specifically explored.
Zwitserlood, Inge; Kristoffersen, Jette Hedegaard; Troelsgård, Thomas
Sign language lexicography has thus far been a relatively obscure area in the world of lexicography. Therefore, this article will contain background information on signed languages and the communities in which they are used, on the lexicography of sign languages, and on the situation in the Netherlands as well...
Kristoffersen, Jette Hedegaard; Troelsgård, Thomas
The entries of The Danish Sign Language Dictionary have four sections. Entry header: in this section the sign headword is shown as a photo and a gloss, and the first occurring location and handshape of the sign are shown as icons. Video window: by default the base form of the sign headword … forms of the sign (only for classifier entries); in addition to this, frequent co-occurrences with the sign are shown in this section. The signs in The Danish Sign Language Dictionary can be looked up through: Handshape: particular handshapes for the active and the passive hand can be specified … to find signs that are not themselves lemmas in the dictionary, but appear in example sentences. Topic: topics can be chosen as search criteria from a list of 70 topics.
In the field of Bible translation, modern translations of the Bible into Arabic have not received the scholarly attention they deserve. The attention accorded to pre-modern translations of the Bible into Arabic and its various language varieties that have been produced by Arabic-speaking Jews … the New World Translation of the Holy Scriptures, and address how this messianic evangelical Christian group re-constructs the relationship of religion (the universality of one truth and its embodiment in one community of faith) and translation of the Bible as a sign of the last days in harmony…
Kimmelman, V.; Paperno, D.; Keenan, E.L.
After presenting some basic genetic, historical and typological information about Russian Sign Language, this chapter outlines the quantification patterns it expresses. It illustrates various semantic types of quantifiers, such as generalized existential, generalized universal, proportional,
... combined with facial expressions and postures of the body. It is the primary language of many North Americans who are deaf and ... their eyebrows, widening their eyes, and tilting their bodies forward. Just as with other languages, specific ways of expressing ideas in ASL vary ...
This handbook provides information on some 38 sign languages, including basic facts about each of the languages, structural aspects, history and culture of the Deaf communities, and history of research. The papers are all original, and each has been specifically written for the volume by an expert...
El Ghoul, Oussama; Jemni, Mohamed
Screen reader technology first appeared to allow blind people and people with reading difficulties to use computers and access digital information, and until now it has been exploited mainly to help the blind community. During our work with deaf people, we noticed that a screen reader can also facilitate their use of computers and their reading of textual information. In this paper, we propose a novel screen reader dedicated to deaf users. The output of the reader is a visual translation of the text into sign language. The screen reader is composed of two essential modules: the first is designed to capture user activity (mouse and keyboard events); for this purpose we adopted the Microsoft Active Accessibility (MSAA) application programming interface. The second module, which in classical screen readers is a text-to-speech (TTS) engine, is replaced by a novel text-to-sign (TTSign) engine. This module converts text into sign language animation based on avatar technology.
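The paper does not spell out the TTSign engine's internals; below is a minimal sketch of the core text-to-sign lookup with a fingerspelling fallback for out-of-vocabulary words. The lexicon entries and clip labels are hypothetical, and a real engine would drive avatar animations rather than return labels:

```python
# Hypothetical sign lexicon mapping words to animation clips.
SIGN_LEXICON = {"computer": "sign/computer", "read": "sign/read"}

def text_to_sign(text, lexicon=SIGN_LEXICON):
    """Convert captured screen text into a sequence of sign-clip labels."""
    clips = []
    for word in text.lower().split():
        if word in lexicon:
            clips.append(lexicon[word])
        else:  # fingerspell out-of-vocabulary words letter by letter
            clips.extend(f"fingerspell/{ch}" for ch in word if ch.isalpha())
    return clips
```

In the architecture described above, the first module (event capture via MSAA) would feed the text under the user's focus into such a function, and the avatar module would then play the resulting clip sequence.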
Kristoffersen, Jette Hedegaard; Troelsgård, Thomas
As we began working on the Danish Sign Language (DTS) Dictionary, we soon realised the truth in the statement that a lexicographer has to deal with problems within almost any linguistic discipline. Most of these problems come down to establishing simple rules, rules that can easily be applied every… "– or are they homonyms?" and so on. Very often such questions demand further research and can't be answered sufficiently through a simple standard formula. Therefore lexicographic work often seems like an endless series of compromises. Another source of compromise arises when you set out to decide which information… this dilemma, as we see DTS learners and teachers as well as native DTS signers as our target users. In the following we will focus on four problem areas with particular relevance for the sign language lexicographer: sign representation; spoken language equivalents and mouth movements; example sentences; partial…
This article explores the morphological process of numeral incorporation in Japanese Sign Language. Numeral incorporation is defined and the available research on numeral incorporation in signed language is discussed. The numeral signs in Japanese Sign Language are then introduced and followed by an explanation of the numeral morphemes which are…
De Meulder, Maartje
This article provides an analytical overview of the different types of explicit legal recognition of sign languages. Five categories are distinguished: constitutional recognition, recognition by means of general language legislation, recognition by means of a sign language law or act, recognition by means of a sign language law or act including…
This article discusses Estonian personal name signs. According to the study, there are four personal name sign categories in Estonian Sign Language: (1) arbitrary name signs; (2) descriptive name signs; (3) initialized-descriptive name signs; (4) loan/borrowed name signs. Descriptive and borrowed personal name signs are the most common among…
Schmaling, Constanze H.
This article gives an overview of dictionaries of African sign languages that have been published to date, most of which have not been widely distributed. After an introduction to the field of sign language lexicography and a discussion of some of the obstacles that authors of sign language dictionaries face in general, I will show problems…
Kaneko, Michiko; Mesch, Johanna
This article discusses the role of eye gaze in creative sign language. Because eye gaze conveys various types of linguistic and poetic information, it is an intrinsic part of sign language linguistics in general and of creative signing in particular. We discuss various functions of eye gaze in poetic signing and propose a classification of gaze…
Caselli, Naomi K; Sehyr, Zed Sevcikova; Cohen-Goldberg, Ariel M; Emmorey, Karen
ASL-LEX is a lexical database that catalogues information about nearly 1,000 signs in American Sign Language (ASL). It includes the following information: subjective frequency ratings from 25-31 deaf signers, iconicity ratings from 21-37 hearing non-signers, videoclip duration, sign length (onset and offset), grammatical class, and whether the sign is initialized, a fingerspelled loan sign, or a compound. Information about English translations is available for a subset of signs (e.g., alternate translations, translation consistency). In addition, phonological properties (sign type, selected fingers, flexion, major and minor location, and movement) were coded and used to generate sub-lexical frequency and neighborhood density estimates. ASL-LEX is intended for use by researchers, educators, and students who are interested in the properties of the ASL lexicon. An interactive website where the database can be browsed and downloaded is available at http://asl-lex.org.
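The abstract above mentions neighborhood density estimates derived from coded phonological properties. One common way to operationalize this, sketched below under the assumption that two signs are "neighbors" if they differ on at most one coded parameter, is a pairwise comparison over the coded entries; the toy codings are illustrative, not actual ASL-LEX data.

```python
# Hedged sketch of a neighborhood-density estimate over coded phonological
# properties (handshape, location, movement). Two signs count as neighbors
# if they differ on at most one parameter. The entries below are invented
# for illustration and are not ASL-LEX codings.

SIGNS = {
    "MOTHER": ("5-hand", "chin", "contact"),
    "FATHER": ("5-hand", "forehead", "contact"),
    "FINE":   ("5-hand", "chest", "contact"),
    "BLACK":  ("1-hand", "forehead", "sweep"),
}

def neighborhood_density(target, signs=SIGNS):
    """Count signs differing from `target` on at most one coded parameter."""
    t = signs[target]
    return sum(
        1
        for name, props in signs.items()
        if name != target
        and sum(a != b for a, b in zip(t, props)) <= 1
    )

print(neighborhood_density("MOTHER"))  # 2
```

Here MOTHER and FATHER differ only in location, so they are neighbors, while BLACK differs from MOTHER on all three parameters.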
Italian Sign Language (LIS) is the name of the language used by the Italian Deaf community. The acronym LIS derives from Lingua italiana dei segni ("Italian language of signs"), although nowadays Italians refer to LIS as Lingua dei segni italiana, reflecting the more appropriate phrasing "Italian sign language." Historically,…
Smith, Cynthia; Morgan, Robert L.
There have been increasing incidents of innocent people who use American Sign Language (ASL) or another form of sign language being victimized by gang violence due to misinterpretation of ASL hand formations. ASL is familiar to learners with a variety of disabilities, particularly those in the deaf community. The problem is that gang members have…
Ten Holt, G.A.
Automatic sign language recognition is a relatively new field of research (since ca. 1990). Its objectives are to automatically analyze sign language utterances. There are several issues within the research area that merit investigation: how to capture the utterances (cameras, magnetic sensors,
In this thesis, the native sign language used by deaf Inuit people is described. Inuit Sign Language (IUR) is used by less than 40 people as their sole means of communication, and is therefore highly endangered. Apart from the description of IUR as such, an additional goal is to contribute to the
The paper describes the background, subjects, assumptions, procedure, and preliminary results of a small-scale experimental study of L2 translation (Danish into English) and picture verbalization in L2 (English)....
Tyrone, Martha E; Mauk, Claude E
This study examines sign lowering as a form of phonetic reduction in American Sign Language. Phonetic reduction occurs in the course of normal language production, when instead of producing a carefully articulated form of a word, the language user produces a less clearly articulated form. When signs are produced in context by native signers, they often differ from the citation forms of signs. In some cases, phonetic reduction is manifested as a sign being produced at a lower location than in the citation form. Sign lowering has been documented previously, but this is the first study to examine it in phonetic detail. The data presented here are tokens of the sign WONDER, as produced by six native signers, in two phonetic contexts and at three signing rates, which were captured by optoelectronic motion capture. The results indicate that sign lowering occurred for all signers, according to the factors we manipulated. Sign production was affected by several phonetic factors that also influence speech production, namely, production rate, phonetic context, and position within an utterance. In addition, we have discovered interesting variations in sign production, which could underlie distinctions in signing style, analogous to accent or voice quality in speech.
This dissertation explores Information Structure in two sign languages: Sign Language of the Netherlands and Russian Sign Language. Based on corpus data and elicitation tasks we show how topic and focus are expressed in these languages. In particular, we show that topics can be marked syntactically
Kristoffersen, Jette Hedegaard; Troelsgård, Thomas
Compiling sign language dictionaries has in the last 15 years changed from most often being simply collecting and presenting signs for a given gloss in the surrounding vocal language to being a complicated lexicographic task including all parts of linguistic analysis, i.e. phonology, phonetics......, morphology, syntax and semantics. In this presentation we will give a short overview of the Danish Sign Language dictionary project. We will further focus on lemma selection and some of the problems connected with lemmatisation....
This paper shows a method of teaching written language to deaf people using sign language as the language of instruction. Written texts in the target language are combined with sign language videos which provide the users with various modes of translation (words/phrases/sentences). As examples, two EU projects for English for the Deaf are presented which feature English texts and translations into the national sign languages of all the partner countries plus signed grammar explanations and interactive exercises. Both courses are web-based; the programs may be accessed free of charge via the respective homepages (without any download or log-in).
Brookshire, Geoffrey; Lu, Jenny; Nusbaum, Howard C; Goldin-Meadow, Susan; Casasanto, Daniel
Despite immense variability across languages, people can learn to understand any human language, spoken or signed. What neural mechanisms allow people to comprehend language across sensory modalities? When people listen to speech, electrophysiological oscillations in auditory cortex entrain to slow (<8 Hz) fluctuations in the acoustic envelope. Entrainment to the speech envelope may reflect mechanisms specialized for auditory perception. Alternatively, flexible entrainment may be a general-purpose cortical mechanism that optimizes sensitivity to rhythmic information regardless of modality. Here, we test these proposals by examining cortical coherence to visual information in sign language. First, we develop a metric to quantify visual change over time. We find quasiperiodic fluctuations in sign language, characterized by lower frequencies than fluctuations in speech. Next, we test for entrainment of neural oscillations to visual change in sign language, using electroencephalography (EEG) in fluent signers of American Sign Language (ASL) as they watch videos in ASL. We find significant cortical entrainment to visual oscillations in sign language; this entrainment is strongest over occipital and parietal cortex, in contrast to speech, where coherence is strongest over auditory cortex. Nonsigners also show coherence to sign language, but entrainment at frontal sites is reduced relative to fluent signers. These results demonstrate that flexible cortical entrainment to language does not depend on neural processes that are specific to auditory speech perception. Low-frequency oscillatory entrainment may reflect a general cortical mechanism that maximizes sensitivity to informational peaks in time-varying signals.
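The study above develops "a metric to quantify visual change over time" in sign language video. A simple frame-differencing version of such a metric is sketched below; the exact measure used in the study may differ, and the grayscale, mean-absolute-difference formulation here is an assumption.

```python
import numpy as np

# Hedged sketch of a visual-change signal for a sign language video:
# each frame is a grayscale array, and visual change at time t is the
# mean absolute pixel difference between consecutive frames. Spectral
# analysis of this signal could then reveal quasiperiodic fluctuations.

def visual_change(frames):
    """frames: array of shape (T, H, W) -> change signal of length T-1."""
    frames = np.asarray(frames, dtype=float)
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

# Toy video: a 2x2 bright square moving one pixel per frame.
T, H, W = 5, 8, 8
video = np.zeros((T, H, W))
for t in range(T):
    video[t, 2:4, t:t + 2] = 1.0

change = visual_change(video)
print(change.shape)  # (4,)
```

For this toy video each step turns 2 pixels off and 2 on, so every entry of `change` equals 4/64.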
Wang, Jihong; Napier, Jemina
This study investigated the effects of hearing status and age of signed language acquisition on signed language working memory capacity. Professional Auslan (Australian sign language)/English interpreters (hearing native signers and hearing nonnative signers) and deaf Auslan signers (deaf native signers and deaf nonnative signers) completed an…
In early May, CERN welcomed a group of deaf children for a tour of Microcosm and a Fun with Physics demonstration. On 4 May, around ten children from the Centre pour enfants sourds de Montbrillant (Montbrillant Centre for Deaf Children), a public school funded by the Office médico-pédagogique du canton de Genève, took a guided tour of the Microcosm exhibition and were treated to a Fun with Physics demonstration. The tour guides’ explanations were interpreted into sign language in real time by a professional interpreter who accompanied the children, and the pace and content were adapted to maximise the interaction with the children. This visit demonstrates CERN’s commitment to remaining as widely accessible as possible. To this end, most of CERN’s visit sites offer reduced-mobility access. In the past few months, CERN has also welcomed children suffering from xeroderma pigmentosum (a genetic disorder causing extreme sensiti...
Sze, Felix; Lo, Connie; Lo, Lisa; Chu, Kenny
This article traces the origins of Hong Kong Sign Language (hereafter HKSL) and its subsequent development in relation to the establishment of Deaf education in Hong Kong after World War II. We begin with a detailed description of the history of Deaf education with a particular focus on the role of sign language in such development. We then…
Harris, Raychelle; Holmes, Heidi M.; Mertens, Donna M.
Codes of ethics exist for most professional associations whose members do research on, for, or with sign language communities. However, these ethical codes are silent regarding the need to frame research ethics from a cultural standpoint, an issue of particular salience for sign language communities. Scholars who write from the perspective of…
Corina, David P.; Hafer, Sarah; Welch, Kearnan
This paper examines the concept of phonological awareness (PA) as it relates to the processing of American Sign Language (ASL). We present data from a recently developed test of PA for ASL and examine whether sign language experience impacts the use of metalinguistic routines necessary for completion of our task. Our data show that deaf signers…
Hildebrandt, Ursula; Corina, David
Investigates deaf and hearing subjects' ratings of American Sign Language (ASL) signs to assess whether linguistic experience shapes judgments of sign similarity. Findings are consistent with linguistic theories that posit movement and location as core structural elements of syllable structure in ASL. (Author/VWL)
Translation, Cultural Adaptation, and Validation of Leeds Assessment of Neuropathic Symptoms and Signs (LANSS) and Self-Complete Leeds Assessment of Neuropathic Symptoms and Signs (S-LANSS) Questionnaires into the Greek Language.
Batistaki, Chrysanthi; Lyrakos, George; Drachtidi, Kalliopi; Stamatiou, Georgia; Kitsou, Maria-Chrysanthi; Kostopanagiotou, Georgia
The LANSS and S-LANSS questionnaires represent two widely accepted and validated instruments used to assist the identification of neuropathic pain worldwide. The aim of this study was to translate, culturally adapt, and validate the LANSS and S-LANSS questionnaires into the Greek language. Forward and backward translations of both questionnaires were performed from English to Greek. The final versions were assessed by a committee of clinical experts, and they were then pilot-tested in 20 patients with chronic pain. Both questionnaires were validated in 200 patients with chronic pain (100 patients for each questionnaire), using as the "gold standard" the diagnosis of a clinical expert in pain management. Sensitivity and specificity of the questionnaires were assessed, as well as internal consistency (using Cronbach's alpha coefficient) and correlation with the "gold standard" diagnosis (using the Pearson correlation coefficient). Sensitivity and specificity were 82.76% and 95.24% for the LANSS questionnaire, and 86.21% and 95.24% for the S-LANSS, respectively. The positive predictive value for neuropathic pain was 96% for the LANSS and 96.15% for the S-LANSS. Cronbach's alpha was acceptable for both questionnaires (0.65 for the LANSS and 0.67 for the S-LANSS), and a significant correlation with the "gold standard" diagnosis was observed (r = 0.79 for the LANSS and r = 0.77 for the S-LANSS, P = 0.01). The LANSS and S-LANSS diagnostic tools have been translated into Greek and validated, and can be adequately used to assist the identification of neuropathic pain in everyday clinical practice. © 2015 World Institute of Pain.
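The screening statistics reported above follow directly from a 2x2 confusion matrix against the "gold standard" diagnosis. The sketch below shows the standard formulas; the cell counts are illustrative values chosen to be consistent with the reported LANSS percentages, not the study's actual counts.

```python
# Standard screening statistics from a 2x2 confusion matrix:
#   sensitivity = TP / (TP + FN), specificity = TN / (TN + FP),
#   positive predictive value (PPV) = TP / (TP + FP).
# The counts below are illustrative, not the study's actual cell counts.

def screening_stats(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    return sensitivity, specificity, ppv

sens, spec, ppv = screening_stats(tp=24, fn=5, tn=20, fp=1)
print(f"{sens:.2%} {spec:.2%} {ppv:.2%}")  # 82.76% 95.24% 96.00%
```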
Gutierrez-Sigut, Eva; Costello, Brendan; Baus, Cristina; Carreiras, Manuel
The LSE-Sign database is a free online tool for selecting Spanish Sign Language stimulus materials to be used in experiments. It contains 2,400 individual signs taken from a recent standardized LSE dictionary, and a further 2,700 related nonsigns. Each entry is coded for a wide range of grammatical, phonological, and articulatory information, including handshape, location, movement, and non-manual elements. The database is accessible via a graphically based search facility which is highly flexible both in terms of the search options available and the way the results are displayed. LSE-Sign is available at the following website: http://www.bcbl.eu/databases/lse/.
Williams, Joshua T.; Newman, Sharlene D.
A large body of literature has characterized unimodal monolingual and bilingual lexicons and how neighborhood density affects lexical access; however, there have been relatively few studies that generalize these findings to bimodal (M2) second language (L2) learners of sign languages. The goal of the current study was to investigate parallel…
Goldin-Meadow, Susan; Brentari, Diane
How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.
Carreres, Angeles; Noriega-Sanchez, Maria
The past three decades have seen vast changes in attitudes towards translation, both as an academic discipline and as a profession. The insights we have gained in recent years, in particular in the area of professional translator training, call for a reassessment of the role of translation in language teaching. Drawing on research and practices in…
Øhre, Beate; Saltnes, Hege; von Tetzchner, Stephen; Falkum, Erik
Background There is a need for psychiatric assessment instruments that enable reliable diagnoses in persons with hearing loss who have sign language as their primary language. The objective of this study was to assess the validity of the Norwegian Sign Language (NSL) version of the Mini International Neuropsychiatric Interview (MINI). Methods The MINI was translated into NSL. Forty-one signing patients consecutively referred to two specialised psychiatric units were assessed with a diagnos...
Hall, Matthew L; Ferreira, Victor S; Mayberry, Rachel I
Psycholinguistic studies of sign language processing provide valuable opportunities to assess whether language phenomena, which are primarily studied in spoken language, are fundamentally shaped by peripheral biology. For example, we know that when given a choice between two syntactically permissible ways to express the same proposition, speakers tend to choose structures that were recently used, a phenomenon known as syntactic priming. Here, we report two experiments testing syntactic priming of a noun phrase construction in American Sign Language (ASL). Experiment 1 shows that second language (L2) signers with normal hearing exhibit syntactic priming in ASL and that priming is stronger when the head noun is repeated between prime and target (the lexical boost effect). Experiment 2 shows that syntactic priming is equally strong among deaf native L1 signers, deaf late L1 learners, and hearing L2 signers. Experiment 2 also tested for, but did not find evidence of, phonological or semantic boosts to syntactic priming in ASL. These results show that despite the profound differences between spoken and signed languages in terms of how they are produced and perceived, the psychological representation of sentence structure (as assessed by syntactic priming) operates similarly in sign and speech.
Mann, Wolfgang; Roy, Penny; Morgan, Gary
This study describes the adaptation process of a vocabulary knowledge test for British Sign Language (BSL) into American Sign Language (ASL) and presents results from the first round of pilot testing with 20 deaf native ASL signers. The web-based test assesses the strength of deaf children's vocabulary knowledge by means of different mappings of…
There is a current need for reliable and valid test instruments in different countries in order to monitor deaf children's sign language acquisition. However, very few tests are commercially available that offer strong evidence for their psychometric properties. A German Sign Language (DGS) test focusing on linguistic structures that are acquired…
Hernández, Cesar; Pulido, Jose L; Arias, Jorge E
To develop a technological tool that improves the initial learning of sign language in hearing-impaired children. This research was conducted in three phases: requirements elicitation, design and development of the proposed device, and validation and evaluation of the device. Through the use of information technology and with the advice of special education professionals, we developed an electronic device that facilitates the learning of sign language in deaf children. It consists mainly of a graphic touch screen, a voice synthesizer, and a voice recognition system. Validation was performed with deaf children at the Filadelfia School in the city of Bogotá. A learning methodology was established that improves learning times through a small, portable, lightweight, educational technological prototype. Tests showed the effectiveness of this prototype, achieving a 32% reduction in the initial learning time for sign language in deaf children.
Notarrigo, Ingrid; Meurant, Laurence; Van Herreweghe, Mieke; Vermeerbergen, Myriam
Repetition was described in the nineties by a limited number of sign linguists: Vermeerbergen & De Vriendt (1994) looked at a small corpus of VGT data, Fisher & Janis (1990) analysed "verb sandwiches" in ASL, and Pinsonneault (1994) "verb echos" in Quebec Sign Language. More recently the same phenomenon has been the focus of research in a growing number of signed languages, including American (Nunes and de Quadros 2008), Hong Kong (Sze 2008), Russian (Shamaro 2008), Polish (Filipczak and Most...
Brentari, Diane; Coppola, Marie
How do languages emerge? What are the necessary ingredients and circumstances that permit new languages to form? Various researchers within the disciplines of primatology, anthropology, psychology, and linguistics have offered different answers to this question depending on their perspective. Language acquisition, language evolution, primate communication, and the study of spoken varieties of pidgin and creoles address these issues, but in this article we describe a relatively new and important area that contributes to our understanding of language creation and emergence. Three types of communication systems that use the hands and body to communicate will be the focus of this article: gesture, homesign systems, and sign languages. The focus of this article is to explain why mapping the path from gesture to homesign to sign language has become an important research topic for understanding language emergence, not only for the field of sign languages, but also for language in general. WIREs Cogn Sci 2013, 4:201-211. doi: 10.1002/wcs.1212 For further resources related to this article, please visit the WIREs website. Copyright © 2012 John Wiley & Sons, Ltd.
Behares, Luis Ernesto; Brovetto, Claudia; Crespi, Leonardo Peluso
In the first part of this article the authors consider the policies that apply to Uruguayan Sign Language (Lengua de Senas Uruguaya; hereafter LSU) and the Uruguayan Deaf community within the general framework of language policies in Uruguay. By analyzing them succinctly and as a whole, the authors then explain twenty-first-century innovations.…
Meara, Rhian; Cameron, Audrey; Quinn, Gary; O'Neill, Rachel
The BSL Glossary Project, run by the Scottish Sensory Centre at the University of Edinburgh focuses on developing scientific terminology in British Sign Language for use in the primary, secondary and tertiary education of deaf and hard of hearing students within the UK. Thus far, the project has developed 850 new signs and definitions covering Chemistry, Physics, Biology, Astronomy and Mathematics. The project has also translated examinations into BSL for students across Scotland. The current phase of the project has focused on developing terminology for Geography and Geology subjects. More than 189 new signs have been developed in these subjects including weather, rivers, maps, natural hazards and Geographical Information Systems. The signs were developed by a focus group with expertise in Geography and Geology, Chemistry, Ecology, BSL Linguistics and Deaf Education all of whom are deaf fluent BSL users.
MacSweeney, Mairéad; Woll, Bencie; Campbell, Ruth; Calvert, Gemma A; McGuire, Philip K; David, Anthony S; Simmons, Andrew; Brammer, Michael J
In all signed languages used by deaf people, signs are executed in "sign space" in front of the body. Some signed sentences use this space to map detailed "real-world" spatial relationships directly. Such sentences can be considered to exploit sign space "topographically." Using functional magnetic resonance imaging, we explored the extent to which increasing the topographic processing demands of signed sentences was reflected in the differential recruitment of brain regions in deaf and hearing native signers of British Sign Language (BSL). When BSL signers performed a sentence anomaly judgement task, the occipito-temporal junction was activated bilaterally to a greater extent for topographic than nontopographic processing. The differential role of movement in the processing of the two sentence types may account for this finding. In addition, enhanced activation was observed in the left inferior and superior parietal lobules during processing of topographic BSL sentences. We argue that the left parietal lobe is specifically involved in processing the precise configuration and location of hands in space to represent objects, agents, and actions. Importantly, no differences in these regions were observed when hearing people heard and saw English translations of these sentences. Despite the high degree of similarity in the neural systems underlying signed and spoken languages, exploring the linguistic features which are unique to each of these broadens our understanding of the systems involved in language comprehension.
This article gives a first overview of the sign language situation in Mali and its capital, Bamako, located in the West African Sahel. Mali is a highly multilingual country with a significant incidence of deafness, for which meningitis appears to be the main cause, coupled with limited access to adequate health care. In comparison to neighboring…
In this paper the results of an investigation of word order in Russian Sign Language (RSL) are presented. A small corpus of narratives based on comic strips by nine native signers was analyzed and a picture-description experiment (based on Volterra et al. 1984) was conducted with six native signers. The results are the following: the most frequent…
Kenyan Sign Language (KSL) is a visual gestural language used by members of the deaf community in Kenya. Kiswahili, on the other hand, is a Bantu language that is used as the national language of Kenya. The two are worlds apart, one being a spoken language and the other a signed language, and thus their "… basic ...
Orfanidou, E.; McQueen, J.; Adam, R.; Morgan, G.
This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were prec...
Zaharia, Titus; Preda, Marius; Preteux, Francoise J.
In this paper, we address the issue of sign language indexation/recognition. Existing tools, like online Web dictionaries or other education-oriented applications, make exclusive use of textual annotations. However, keyword indexing schemes have strong limitations due to the ambiguity of natural language and to the huge effort needed to manually annotate a large amount of data. In order to overcome these drawbacks, we tackle the sign language indexation issue within the MPEG-7 framework and propose an approach based on linguistic properties and characteristics of sign language. The method introduces the concept of a hand configuration that is stable over time, instantiated on natural or synthetic prototypes. The prototypes are indexed by means of a shape descriptor defined as a translation-, rotation- and scale-invariant Hough transform. A very compact representation is obtained by considering the Fourier transform of the Hough coefficients. This approach has been applied to two data sets consisting of 'Letters' and 'Words', respectively. The accuracy and robustness of the results are discussed, and a complete sign language description schema is proposed.
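The descriptor described above combines a Hough transform of the hand shape with a Fourier transform of the Hough coefficients. A rough sketch of that idea follows; the bin counts, the centering and scaling steps, and the use of a straight-line Hough accumulator are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

# Hedged sketch of a Hough-based shape descriptor: foreground pixels of a
# binary hand mask vote into a (rho, theta) accumulator. Centering gives
# translation invariance, dividing by the maximum rho gives approximate
# scale invariance, and taking the magnitude of the Fourier transform
# along the theta axis discards the circular shift induced by rotation.

def hough_descriptor(mask, n_theta=32, n_rho=16):
    ys, xs = np.nonzero(mask)
    # Centre the shape for translation invariance.
    xs = xs - xs.mean()
    ys = ys - ys.mean()
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rhos = xs[:, None] * np.cos(thetas) + ys[:, None] * np.sin(thetas)
    r_max = np.abs(rhos).max() + 1e-9            # scale normalisation
    acc = np.zeros((n_rho, n_theta))
    bins = ((rhos / r_max + 1) / 2 * (n_rho - 1)).round().astype(int)
    for j in range(n_theta):
        np.add.at(acc[:, j], bins[:, j], 1)      # unbuffered accumulation
    acc /= len(xs)                               # point-count normalisation
    # FFT magnitude along theta: approximately rotation invariant.
    return np.abs(np.fft.fft(acc, axis=1))

mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 12:20] = True                         # toy "hand" blob
print(hough_descriptor(mask).shape)  # (16, 32)
```

Comparing such descriptors (e.g., by Euclidean distance) would be one way to match a query hand configuration against the indexed prototypes.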
Is the right to sign language only the right to a minority language? Holding a capability (not a disability) approach, and building on the psycholinguistic literature on sign language acquisition, I make the point that this right is of a stronger nature, since only sign languages can guarantee that each deaf child will properly develop the…
Reeves, J B; Newell, W; Holcomb, B R; Stinson, M
In collaboration with teachers and students at the National Technical Institute for the Deaf (NTID), the Sign Language Skills Classroom Observation (SLSCO) was designed to provide feedback to teachers on their sign language communication skills in the classroom. In the present article, the impetus and rationale for development of the SLSCO is discussed. Previous studies related to classroom signing and observation methodology are reviewed. The procedure for developing the SLSCO is then described. This procedure included (a) interviews with faculty and students at NTID, (b) identification of linguistic features of sign language important for conveying content to deaf students, (c) development of forms for recording observations of classroom signing, (d) analysis of use of the forms, (e) development of a protocol for conducting the SLSCO, and (f) piloting of the SLSCO in classrooms. The results of use of the SLSCO with NTID faculty during a trial year are summarized.
of work made in SASL. There is currently no collection of the cultural and linguistic heritage of SASL. Public signage and localisation: Provision for SASL-specific sign names of places, people, companies and brands, as well as the localisation... upgrading the aging data and voice infrastructures for visual-grade technologies, new usages of technologies will emerge in public signage and communications, in advertising and for visual languages such as SASL. Research and development in Sign Language...
Nonaka, Angela M
Communication obstacles in health care settings adversely impact patient-practitioner interactions by impeding service efficiency, reducing mutual trust and satisfaction, or even endangering health outcomes. When interlocutors are separated by language, interpreters are required. The efficacy of interpreting, however, is constrained not just by interpreters' competence but also by health care providers' facility working with interpreters. Deaf individuals whose preferred form of communication is a signed language often encounter communicative barriers in health care settings. In those environments, signing Deaf people are entitled to equal communicative access via sign language interpreting services according to the Americans with Disabilities Act and Executive Order 13166, the Limited English Proficiency Initiative. Yet, litigation in states across the United States suggests that individual and institutional providers remain uncertain about their legal obligations to provide equal communicative access. This article discusses the legal and ethical imperatives for using professionally certified (vs. ad hoc) sign language interpreters in health care settings. First outlining the legal terrain governing provision of sign language interpreting services, the article then describes different types of "sign language" (e.g., American Sign Language vs. manually coded English) and different forms of "sign language interpreting" (e.g., interpretation vs. transliteration vs. translation; simultaneous vs. consecutive interpreting; individual vs. team interpreting). This is followed by reviews of the formal credentialing process and of specialized forms of sign language interpreting-that is, certified deaf interpreting, trilingual interpreting, and court interpreting. After discussing practical steps for contracting professional sign language interpreters and addressing ethical issues of confidentiality, this article concludes by offering suggestions for working more effectively
Full Text Available In the communication of deaf people among themselves and with hearing people, there are three basic modes of interaction: gesture, fingerspelling, and writing. The gesture is a conventionally agreed manner of communication with the help of the hands, accompanied by facial and body mimicry. Gestures and movements predate speech; they first served to mark something and later to emphasize spoken expression. Stokoe was the first linguist to realize that signs are not unanalyzable wholes. He analyzed signs into smaller parts that he called "cheremes," which many linguists today call phonemes, and he created three main phoneme categories: hand position, location, and movement. Sign languages, like spoken languages, have roots in the distant past. They developed in parallel with spoken language and underwent many historical changes. Therefore, today they are not a replacement for spoken language but languages in their own right, in the real sense of the word. Although the structure of the English language used in the USA and in Great Britain is the same, their sign languages, ASL and BSL, are different.
Full Text Available – Sign language plays a great role as a communication medium for people with hearing difficulties. In developed countries, systems have been built to overcome barriers in communication with deaf people. This encouraged us to develop such a system for Bosnian sign language, for which one is still needed. The work uses digital image processing methods in a system that trains a multilayer neural network with the backpropagation algorithm. Images are processed by feature-extraction methods, and the data set is created using a masking method. Training uses cross-validation for better performance; an accuracy of 84% is achieved.
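The pipeline described above (feature extraction from images, then a multilayer network trained by backpropagation) can be sketched in miniature. The following is an illustrative sketch only, not the paper's implementation: a tiny two-layer perceptron trained by plain backpropagation on the XOR problem, which stands in for the extracted image features; all dimensions and hyperparameters are invented for the example.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy data standing in for extracted image features: the XOR problem.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]

n_in, n_hid = 2, 4
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
w2 = [random.uniform(-1, 1) for _ in range(n_hid)]
b2 = 0.0

def forward(x):
    # Hidden layer, then a single sigmoid output unit.
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b) for ws, b in zip(w1, b1)]
    o = sigmoid(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, o

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(X, Y)) / len(X)

lr = 0.5
loss_before = mse()
for _ in range(5000):
    for x, y in zip(X, Y):
        h, o = forward(x)
        # Backpropagation: output delta first, then hidden deltas.
        d_o = (o - y) * o * (1 - o)
        d_h = [d_o * w2[j] * h[j] * (1 - h[j]) for j in range(n_hid)]
        for j in range(n_hid):
            w2[j] -= lr * d_o * h[j]
            b1[j] -= lr * d_h[j]
            for i in range(n_in):
                w1[j][i] -= lr * d_h[j] * x[i]
        b2 -= lr * d_o
loss_after = mse()
```

In the actual system the inputs would be the feature vectors produced by the masking step, and cross-validation would select among trained networks; this sketch only shows the backpropagation mechanics.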
Kocab, Annemarie; Senghas, Ann; Snedeker, Jesse
Understanding what uniquely human properties account for the creation and transmission of language has been a central goal of cognitive science. Recently, the study of emerging sign languages, such as Nicaraguan Sign Language (NSL), has offered the opportunity to better understand how languages are created and the roles of the individual learner and the community of users. Here, we examined the emergence of two types of temporal language in NSL, comparing the linguistic devices for conveying temporal information among three sequential age cohorts of signers. Experiment 1 showed that while all three cohorts of signers could communicate about linearly ordered discrete events, only the second and third generations of signers successfully communicated information about events with more complex temporal structure. Experiment 2 showed that signers could discriminate between the types of temporal events in a nonverbal task. Finally, Experiment 3 investigated the ordinal use of numbers (e.g., first, second) in NSL signers, suggesting that one strategy younger signers may use to describe events in time accurately is to mark each event with an ordinal number. While the capacity for representing temporal concepts appears to be present in the human mind from the onset of language creation, the linguistic devices to convey temporality do not appear immediately. Evidently, temporal language emerges over generations of language transmission, as a product of individual minds interacting within a community of users. Copyright © 2016 Elsevier B.V. All rights reserved.
Full Text Available Productivity—the hallmark of linguistic competence—is typically attributed to algebraic rules that support broad generalizations. Past research on spoken language has documented such generalizations in both adults and infants. But whether algebraic rules form part of the linguistic competence of signers remains unknown. To address this question, here we gauge the generalization afforded by American Sign Language (ASL). As a case study, we examine reduplication (X→XX)—a rule that, inter alia, generates ASL nouns from verbs. If signers encode this rule, then they should freely extend it to novel syllables, including ones with features that are unattested in ASL. And since reduplicated disyllables are preferred in ASL, such a rule should favor novel reduplicated signs. Novel reduplicated signs should thus be preferred to nonreduplicative controls (in rating), and consequently, such stimuli should also be harder to classify as nonsigns (in the lexical decision task). The results of four experiments support this prediction. These findings suggest that the phonological knowledge of signers includes powerful algebraic rules. The convergence between these conclusions and previous evidence for phonological rules in spoken language suggests that the architecture of the phonological mind is partly amodal.
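The algebraic rule X→XX lends itself to a toy formalization. The sketch below is a hypothetical illustration only (representing a sign syllable as a handshape/location/movement tuple is our simplification, not the study's encoding): it applies the reduplication rule to a novel syllable and checks whether a disyllabic form is reduplicated.

```python
def reduplicate(syllable):
    """Apply the algebraic rule X -> XX: repeat the whole syllable."""
    return (syllable, syllable)

def is_reduplicated(disyllable):
    """A disyllabic form counts as reduplicated when both syllables match."""
    return len(disyllable) == 2 and disyllable[0] == disyllable[1]

# A novel syllable, represented (simplistically) as handshape/location/movement.
# The rule applies regardless of whether these feature values are attested.
novel = ('5-hand', 'chest', 'circular')
form = reduplicate(novel)
```

The point of the rule's algebraic character is visible here: `reduplicate` copies the variable X wholesale, so it extends to any novel syllable, exactly the free generalization the experiments test for.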
Journal of Language, Technology & Entrepreneurship in Africa ... This article reviews some of the challenges and options that translators have to contend with in the dicey game of reconstructing intended ... AJOL African Journals Online.
The language policy is usually inferred from the language practices that characterise various spheres of life. This article attempts to show how the language policy, which primarily influences text production in the country, has nurtured translation practice. The dominating role of English sees many texts, particularly technical ...
Barnes, Susan Kubic
Teaching sign language--to deaf or other children with special needs or to hearing children with hard-of-hearing family members--is not new. Teaching sign language to typically developing children has become increasingly popular since the publication of "Baby Signs"® (Goodwyn & Acredolo, 1996), now in its third edition. Attention to signing with…
Orfanidou, E.; McQueen, J.M.; Adam, R.; Morgan, G.
This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous
Haug, Tobias; Mann, Wolfgang
Given the current lack of appropriate assessment tools for measuring deaf children's sign language skills, many test developers have used existing tests of other sign languages as templates to measure the sign language used by deaf people in their country. This article discusses factors that may influence the adaptation of assessment tests from one natural sign language to another. Two tests which have been adapted for several other sign languages are focused upon: the Test for American Sign Language and the British Sign Language Receptive Skills Test. A brief description is given of each test as well as insights from ongoing adaptations of these tests for other sign languages. The problems reported in these adaptations were found to be grounded in linguistic and cultural differences, which need to be considered for future test adaptations. Other reported shortcomings of test adaptation are related to the question of how well psychometric measures transfer from one instrument to another.
Yang, Su; Zhu, Qing
The goal of sign language recognition (SLR) is to translate sign language into text, providing a convenient communication tool between deaf and hearing people. In this paper, we formulate an appropriate model based on a convolutional neural network (CNN) combined with a Long Short-Term Memory (LSTM) network in order to accomplish continuous recognition. With the strong ability of the CNN, the information in pictures captured from Chinese sign language (CSL) videos can be learned and transformed into feature vectors. Since a video can be regarded as an ordered sequence of frames, an LSTM model is employed to connect with the fully connected layer of the CNN. As a recurrent neural network (RNN), it is suitable for sequence-learning tasks, with the capability of recognizing patterns defined by temporal distance. Compared with a traditional RNN, an LSTM performs better at storing and accessing information. We evaluate this method on our self-built dataset of 40 daily vocabulary items. The experimental results show that the CNN-LSTM recognition method can achieve a high recognition rate with small training sets, which will meet the needs of a real-time SLR system.
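The memory mechanism that lets the LSTM track patterns across video frames can be shown in a minimal sketch. This is not the paper's network: it is a single scalar LSTM cell stepped over a toy sequence, where each number stands in for a per-frame CNN feature; the weight values are invented for the example.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    # One LSTM time step over a scalar frame feature x.
    # w maps each gate to (input weight, recurrent weight, bias).
    i = sigmoid(w['i'][0] * x + w['i'][1] * h_prev + w['i'][2])    # input gate
    f = sigmoid(w['f'][0] * x + w['f'][1] * h_prev + w['f'][2])    # forget gate
    o = sigmoid(w['o'][0] * x + w['o'][1] * h_prev + w['o'][2])    # output gate
    g = math.tanh(w['g'][0] * x + w['g'][1] * h_prev + w['g'][2])  # candidate
    c = f * c_prev + i * g        # cell state: gated mix of old memory and new input
    h = o * math.tanh(c)          # hidden state exposed to the next layer
    return h, c

# Step the cell over a "video": each frame's CNN output reduced to one number.
w = {k: (0.5, 0.25, 0.0) for k in 'ifog'}  # toy weights, identical per gate
h = c = 0.0
for feat in [0.2, 0.8, -0.3, 1.0]:  # stand-in per-frame CNN features
    h, c = lstm_step(feat, h, c, w)
```

In the real system the per-frame input is a high-dimensional vector from the CNN's fully connected layer, and the final hidden states feed a classifier over the 40 vocabulary items; only the gating arithmetic is shown here.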
Shield, Aaron; Cooley, Frances; Meier, Richard P.
Purpose: We present the first study of echolalia in deaf, signing children with autism spectrum disorder (ASD). We investigate the nature and prevalence of sign echolalia in native-signing children with ASD, the relationship between sign echolalia and receptive language, and potential modality differences between sign and speech. Method: Seventeen…
McKee, Michael M.; Paasche-Orlow, Michael; Winters, Paul C.; Fiscella, Kevin; Zazove, Philip; Sen, Ananda; Pearson, Thomas
Communication and language barriers isolate Deaf American Sign Language (ASL) users from mass media, healthcare messages, and health care communication, which, when coupled with social marginalization, places them at high risk for inadequate health literacy. Our objectives were to translate, adapt, and develop an accessible health literacy instrument in ASL and to assess the prevalence and correlates of inadequate health literacy among Deaf ASL users and hearing English speakers using a cross-sectional design. A total of 405 participants (166 Deaf and 239 hearing) were enrolled in the study. The Newest Vital Sign (NVS) was adapted, translated, and developed into an ASL version (ASL-NVS). Forty-eight percent of Deaf participants had inadequate health literacy, and Deaf individuals were 6.9 times more likely than hearing participants to have inadequate health literacy. The new ASL-NVS, available on a self-administered computer platform, demonstrated good correlation with reading literacy. The prevalence of Deaf ASL users with inadequate health literacy is substantial, warranting further interventions and research. PMID:26513036
Komakula, Sirisha T.; Burr, Robert B.; Lee, James N.; Anderson, Jeffrey
We present a case of right-hemispheric dominance for sign language but left-hemispheric dominance for reading in a left-handed deaf patient with epilepsy and left mesial temporal sclerosis. Atypical language laterality for ASL was determined by preoperative fMRI and was congruent with ASL-modified Wada testing. We conclude that reading and sign language can have crossed dominance, and preoperative fMRI evaluation of deaf patients should include both reading and sign language evaluations.
Machine translation systems often incorporate modeling assumptions motivated by properties of the language pairs they initially target. When such systems are applied to language families with considerably different properties, translation quality can deteriorate. Phrase-based machine translation
Halim, Zahid; Abbas, Ghulam
Sign language provides hearing- and speech-impaired individuals with an interface to communicate with other members of society. Unfortunately, sign language is not understood by most people. For this, a gadget based on image processing and pattern recognition can provide a vital aid for detecting and translating sign language into a vocal language. This work presents a system for detecting and understanding sign language gestures with a custom-built software tool and then translating the gesture into a vocal language. To recognize a particular gesture, the system employs a Dynamic Time Warping (DTW) algorithm, and an off-the-shelf software tool is employed for vocal language generation. A Microsoft Kinect is the primary tool used to capture the video stream of a user. The proposed method successfully detects gestures stored in the dictionary with an accuracy of 91%. The proposed system also has the ability to define and add custom-made gestures. In an experiment in which 10 individuals with impairments used the system to communicate with 5 people with no disability, 87% agreed that the system was useful.
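DTW is the matching step that makes such dictionary lookup robust to signing speed: it aligns two trajectories that trace the same shape at different rates. The sketch below is a generic textbook DTW over 1-D sequences, not the system's implementation; a real gesture would be a sequence of joint-position vectors from the Kinect rather than single numbers.

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D sequences.

    D[i][j] holds the cost of the best alignment of a[:i] with b[:j];
    each cell extends the cheapest of the three admissible predecessors
    (match, insertion, deletion).
    """
    INF = float('inf')
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Dictionary matching: the query is classified as the template with the
# smallest warped distance (gesture names here are invented for the example).
templates = {'HELLO': [1, 2, 3, 2], 'STOP': [3, 3, 0, 0]}
query = [1, 2, 2, 3, 2]  # same shape as HELLO, signed a bit more slowly
best = min(templates, key=lambda name: dtw(query, templates[name]))
```

Because the warping path may repeat elements of either sequence, a slower performance of the same gesture still aligns at near-zero cost, which is why DTW suits signer-dependent gesture dictionaries.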
Stamp, Rose; Schembri, Adam; Fenlon, Jordan; Rentelis, Ramas
This article presents findings from the first major study to investigate lexical variation and change in British Sign Language (BSL) number signs. As part of the BSL Corpus Project, number sign variants were elicited from 249 deaf signers from eight sites throughout the UK. Age, school location, and language background were found to be significant…
Ethiopian Sign Language utilizes a fingerspelling system that represents Amharic orthography. Just as each character of the Amharic abugida encodes a consonant-vowel sound pair, each sign in the Ethiopian Sign Language fingerspelling system uses handshape to encode a base consonant, as well as a combination of timing, placement, and orientation to…
Shaw, Emily; Delaporte, Yves
Examinations of the etymology of American Sign Language have typically involved superficial analyses of signs as they exist over a short period of time. While it is widely known that ASL is related to French Sign Language, there has yet to be a comprehensive study of this historic relationship between their lexicons. This article presents…
Bochner, Joseph H.; Samar, Vincent J.; Hauser, Peter C.; Garrison, Wayne M.; Searls, J. Matt; Sanders, Cynthia A.
American Sign Language (ASL) is one of the most commonly taught languages in North America. Yet, few assessment instruments for ASL proficiency have been developed, none of which have adequately demonstrated validity. We propose that the American Sign Language Discrimination Test (ASL-DT), a recently developed measure of learners' ability to…
Stamp, Rose; Schembri, Adam; Evans, Bronwen G.; Cormier, Kearsy
Short-term linguistic accommodation has been observed in a number of spoken language studies. The first of its kind in sign language research, this study aims to investigate the effects of regional varieties in contact and lexical accommodation in British Sign Language (BSL). Twenty-five participants were recruited from Belfast, Glasgow,…
de Quadros, Ronice Muller
This article explains the consolidation of Brazilian Sign Language in Brazil through a linguistic plan that arose from the Brazilian Sign Language Federal Law 10.436 of April 2002 and the subsequent Federal Decree 5695 of December 2005. Two concrete facts that emerged from this existing language plan are discussed: the implementation of bilingual…
This article examines several legal cases in Canada, the USA, and Australia involving signed language in education for Deaf students. In all three contexts, signed language rights for Deaf students have been viewed from within a disability legislation framework that either does not extend to recognizing language rights in education or that…
In this paper we describe topic marking in Russian Sign Language (RSL) and Sign Language of the Netherlands (NGT) and discuss whether these languages should be considered topic prominent. The formal markers of topics in RSL are sentence-initial position, a prosodic break following the topic, and
Marshall, Chloë; Mason, Kathryn; Rowley, Katherine; Herman, Rosalind; Atkinson, Joanna; Woll, Bencie; Morgan, Gary
Children with specific language impairment (SLI) perform poorly on sentence repetition tasks across different spoken languages, but until now, this methodology has not been investigated in children who have SLI in a signed language. Users of a natural sign language encode different sentence meanings through their choice of signs and by altering…
Russian Sign Language (RSL) makes use of constructions involving manual simultaneity, in particular, weak hand holds, where one hand is being held in the location and configuration of a sign, while the other simultaneously produces one sign or a sequence of several signs. In this paper, I argue that
De Meulder, Maartje
Through the British Sign Language (Scotland) Act, British Sign Language (BSL) was given legal status in Scotland. The main motives for the Act were a desire to put BSL on a similar footing with Gaelic and the fact that in Scotland, BSL signers are the only group whose first language is not English who must rely on disability discrimination…
Full Text Available Since time immemorial, philosophers and scientists have searched for a "machine code" of the so-called Mentalese language, capable of processing information at the pre-verbal, pre-expressive level. In this paper I suggest that human languages are only secondary to a system of primitive extra-linguistic signs, which are hardwired in humans and serve as tools for understanding selves and others and for creating meanings for the multiplicity of experiences. The combinatorial semantics of Mentalese may find its unorthodox expression in the semiotic system of Tarot images, the latter serving as the "keys" to the encoded proto-mental information. The paper uses some works in systems theory by Erich Jantsch and Ervin Laszlo and relates Tarot images to the archetypes of the field of the collective unconscious posited by Carl Jung. Our subconscious beliefs, hopes, fears, and desires, of which we may be unaware at the subjective level, do have an objective compositional structure that may be laid down in front of our eyes in the format of pictorial semiotics representing the universe of affects, thoughts, and actions. Constructing imaginative narratives based on the expressive "language" of Tarot images enables us to anticipate possible consequences and consider a range of future options. The thesis advanced in this paper is also supported by the concept of the informational universe in contemporary cosmology.
Cempre, Luka; Bešir, Aleksander; Solina, Franc
The article describes technical and user-interface issues in transferring the contents and functionality of the CD-ROM version of the Slovenian sign language dictionary to the web. The dictionary of Slovenian sign language consists of video clips showing the demonstration of signs that deaf people use for communication, text descriptions of the words corresponding to the signs, and pictures illustrating the same word/sign. A new technical solution—a video sprite—for concatenating subsections o...
Vesel, J.; Hurdich, J.
TERC and Vcom3D used the SigningAvatar® accessibility software to research and develop a Signing Earth Science Dictionary (SESD) of approximately 750 standards-based Earth science terms for high school students who are deaf and hard of hearing and whose first language is sign. The partners also evaluated the extent to which use of the SESD furthers understanding of Earth science content, command of the language of Earth science, and the ability to study Earth science independently. Disseminated as a Web-based version and App, the SESD is intended to serve the ~36,000 grade 9-12 students who are deaf or hard of hearing and whose first language is sign, the majority of whom leave high school reading at the fifth grade or below. It is also intended for teachers and interpreters who interact with members of this population and professionals working with Earth science education programs during field trips, internships etc. The signed SESD terms have been incorporated into a Mobile Communication App (MCA). This App for Androids is intended to facilitate communication between English speakers and persons who communicate in American Sign Language (ASL) or Signed English. It can translate words, phrases, or whole sentences from written or spoken English to animated signing. It can also fingerspell proper names and other words for which there are no signs. For our presentation, we will demonstrate the interactive features of the SigningAvatar® accessibility software that support the three principles of Universal Design for Learning (UDL) and have been incorporated into the SESD and MCA. Results from national field-tests will provide insight into the SESD's and MCA's potential applicability beyond grade 12 as accommodations that can be used for accessing the vocabulary deaf and hard of hearing students need for study of the geosciences and for facilitating communication about content. This work was funded in part by grants from NSF and the U.S. Department of Education.
Johnson, William L
Sign language interpreters are at increased risk for musculoskeletal disorders. This study used content analysis to obtain detailed information about these disorders from the interpreters' point of view...
McKee, Rachel Locker; Manning, Victoria
Status planning through legislation made New Zealand Sign Language (NZSL) an official language in 2006. But this strong symbolic action did not create resources or mechanisms to further the aims of the act. In this article we discuss the extent to which legal recognition and ensuing language-planning activities by state and community have affected…
Johnston, Trevor; van Roekel, Jane; Schembri, Adam
This study investigates the conventionalization of mouth actions in Australian Sign Language. Signed languages were once thought of as simply manual languages, because the hands produce the signs which, individually and in groups, are the symbolic units most easily equated with the words, phrases and clauses of spoken languages. However, it has long been acknowledged that non-manual activity, such as movements of the body, head and face, plays a very important role. In this context, mouth actions that occur while communicating in signed languages have posed a number of questions for linguists: are the silent mouthings of spoken-language words simply borrowings from the respective majority community spoken language(s)? Are those mouth actions that are not silent mouthings of spoken words conventionalized linguistic units proper to each signed language, culturally linked semi-conventional gestural units shared by signers with members of the majority speaking community, or even gestures and expressions common to all humans? We use a corpus-based approach to gather evidence of the extent of the use of mouth actions in naturalistic Australian Sign Language (making comparisons with other signed languages where data is available) and of the form/meaning pairings that these mouth actions instantiate.
Ortega, Gerardo; Morgan, Gary
There is growing interest in learners' cognitive capacities to process a second language (L2) at first exposure to the target language. Evidence suggests that L2 learners are capable of processing novel words by exploiting phonological information from their first language (L1). Hearing adult learners of a sign language, however, cannot fall back…
The aim of this article is to describe a negative prefix, NEG-, in Polish Sign Language (PJM) which appears to be indigenous to the language. This is of interest given the relative rarity of prefixes in sign languages. Prefixed PJM signs were analyzed on the basis of both a corpus of texts signed by 15 deaf PJM users who are either native or near-native signers, and material including a specified range of prefixed signs as demonstrated by native signers in dictionary form (i.e. signs produced in isolation, not as part of phrases or sentences). In order to define the morphological rules behind prefixation on both the phonological and morphological levels, native PJM users were consulted for their expertise. The research results can enrich models for describing processes of grammaticalization in the context of the visual-gestural modality that forms the basis for sign language structure.
Geers, Ann E; Mitchell, Christine M; Warner-Czyz, Andrea; Wang, Nae-Yuh; Eisenberg, Laurie S
Most children with hearing loss who receive cochlear implants (CI) learn spoken language, and parents must choose early on whether to use sign language to accompany speech at home. We address whether parents' use of sign language before and after CI positively influences auditory-only speech recognition, speech intelligibility, spoken language, and reading outcomes. Three groups of children with CIs from a nationwide database who differed in the duration of early sign language exposure provided in their homes were compared in their progress through elementary grades. The groups did not differ in demographic, auditory, or linguistic characteristics before implantation. Children without early sign language exposure achieved better speech recognition skills over the first 3 years postimplant and exhibited a statistically significant advantage in spoken language and reading near the end of elementary grades over children exposed to sign language. Over 70% of children without sign language exposure achieved age-appropriate spoken language compared with only 39% of those exposed for 3 or more years. Early speech perception predicted speech intelligibility in middle elementary grades. Children without sign language exposure produced speech that was more intelligible (mean = 70%) than those exposed to sign language (mean = 51%). This study provides the most compelling support yet available in CI literature for the benefits of spoken language input for promoting verbal development in children implanted by 3 years of age. Contrary to earlier published assertions, there was no advantage to parents' use of sign language either before or after CI. Copyright © 2017 by the American Academy of Pediatrics.
Ferjan Ramirez, Naja; Leonard, Matthew K; Davenport, Tristan S; Torres, Christina; Halgren, Eric; Mayberry, Rachel I
One key question in neurolinguistics is the extent to which the neural processing system for language requires linguistic experience during early life to develop fully. We conducted a longitudinal anatomically constrained magnetoencephalography (aMEG) analysis of lexico-semantic processing in 2 deaf adolescents who had no sustained language input until 14 years of age, when they became fully immersed in American Sign Language. After 2 to 3 years of language, the adolescents' neural responses to signed words were highly atypical, localizing mainly to right dorsal frontoparietal regions and often responding more strongly to semantically primed words (Ferjan Ramirez N, Leonard MK, Torres C, Hatrak M, Halgren E, Mayberry RI. 2014. Neural language processing in adolescent first-language learners. Cereb Cortex. 24 (10): 2772-2783). Here, we show that after an additional 15 months of language experience, the adolescents' neural responses remained atypical in terms of polarity. While their responses to less familiar signed words still showed atypical localization patterns, the localization of responses to highly familiar signed words became more concentrated in the left perisylvian language network. Our findings suggest that the timing of language experience affects the organization of neural language processing; however, even in adolescence, language representation in the human brain continues to evolve with experience. © The Author 2014. Published by Oxford University Press. All rights reserved.
Thompson, Robin L; Vinson, David P; Woll, Bencie; Vigliocco, Gabriella
An arbitrary link between linguistic form and meaning is generally considered a universal feature of language. However, iconic (i.e., nonarbitrary) mappings between properties of meaning and features of linguistic form are also widely present across languages, especially signed languages. Although recent research has shown a role for sign iconicity in language processing, research on the role of iconicity in sign-language development has been mixed. In this article, we present clear evidence that iconicity plays a role in sign-language acquisition for both the comprehension and production of signs. Signed languages were taken as a starting point because they tend to encode a higher degree of iconic form-meaning mappings in their lexicons than spoken languages do, but our findings are more broadly applicable: Specifically, we hypothesize that iconicity is fundamental to all languages (signed and spoken) and that it serves to bridge the gap between linguistic form and human experience.
Moreno, Antonio; Limousin, Fanny; Dehaene, Stanislas; Pallier, Christophe
During sentence processing, areas of the left superior temporal sulcus, inferior frontal gyrus and left basal ganglia exhibit a systematic increase in brain activity as a function of constituent size, suggesting their involvement in the computation of syntactic and semantic structures. Here, we asked whether these areas play a universal role in language and therefore contribute to the processing of non-spoken sign language. Congenitally deaf adults who acquired French sign language as a first language and written French as a second language were scanned while watching sequences of signs in which the size of syntactic constituents was manipulated. An effect of constituent size was found in the basal ganglia, including the head of the caudate and the putamen. A smaller effect was also detected in temporal and frontal regions previously shown to be sensitive to constituent size in written language in hearing French subjects (Pallier et al., 2011). When the deaf participants read sentences versus word lists, the same network of language areas was observed. While reading and sign language processing yielded identical effects of linguistic structure in the basal ganglia, the effect of structure was stronger in all cortical language areas for written language relative to sign language. Furthermore, cortical activity was partially modulated by age of acquisition and reading proficiency. Our results stress the important role of the basal ganglia, within the language network, in the representation of the constituent structure of language, regardless of the input modality. Copyright © 2017 Elsevier Inc. All rights reserved.
This article explores the role of the Deaf child as peer educator. In schools where sign languages were banned, Deaf children became the educators of their Deaf peers in a number of contexts worldwide. This paper analyses how this peer education of sign language worked in context by drawing on two examples from boarding schools for the deaf in…
Poetry in a sign language can make use of literary devices just as poetry in a ... This poem illustrates well the multi-layered meaning that can be created in sign language poetry through ...
Russo, Tommaso; Giuranna, Rosaria; Pizzuto, Elena
Explores and describes from a crosslinguistic perspective, some of the major structural irregularities that characterize poetry in Italian Sign Language and distinguish poetic from nonpoetic texts. Reviews findings of previous studies of signed language poetry, and points out issues that need to be clarified to provide a more accurate description…
Full Text Available This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures.
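The final stage of such a modular architecture, classifying the combined outputs of the feature networks by nearest neighbour, can be sketched as follows. This is an illustrative sketch, not SLARTI's code: the two-dimensional feature vectors and the gesture labels are invented for the example.

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def nearest_neighbour(query, labelled):
    """Return the label of the stored feature vector closest to `query`.

    `labelled` is a list of (feature_vector, label) pairs, standing in for
    the training examples scored by the feature-recognition networks.
    """
    return min(labelled, key=lambda item: euclidean(query, item[0]))[1]

# Hypothetical combined outputs of handshape/motion feature networks.
train = [((0.9, 0.1), 'HELLO'), ((0.1, 0.8), 'THANKS')]
label = nearest_neighbour((0.85, 0.2), train)
```

The design choice is the appeal of the architecture: each neural network only has to learn one feature dimension, and the cheap nearest-neighbour stage composes them into a gesture decision without further training.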
Lucas, Ceil; Mirus, Gene; Palmer, Jeffrey Levi; Roessler, Nicholas James; Frost, Adam
This paper first reviews the fairly established ways of collecting sign language data. It then discusses the new technologies available and their impact on sign language research, both in terms of how data is collected and what new kinds of data are emerging as a result of technology. New data collection methods and new kinds of data are…
This systematic review of the literature provides a synthesis of research on the use of technology to support sign language. Background research on the use of sign language with students who are deaf/hard of hearing and students with low incidence disabilities, such as autism, intellectual disability, or communication disorders is provided. The…
Armstrong, David F.
As most readers of this journal are aware, "Sign Language Studies" ("SLS") served for many years as effectively the only serious scholarly outlet for work in the nascent field of sign language linguistics. Now reaching its 40th anniversary, the journal was founded by William C. Stokoe and then edited by him for the first quarter century of its…
Kimmelman, V.; Vink, L.
Several sign languages of the world utilize a construction that consists of a question followed by an answer, both of which are produced by the same signer. For American Sign Language, this construction has been analyzed as a discourse-level rhetorical question construction (Hoza et al. 1997), as a
De Clerck, Goedele A. M.
This article has been excerpted from "Introduction: Sign Language, Sustainable Development, and Equal Opportunities" (De Clerck) in "Sign Language, Sustainable Development, and Equal Opportunities: Envisioning the Future for Deaf Students" (G. A. M. De Clerck & P. V. Paul (Eds.) 2016). The idea of exploring various…
Cristina Broglia Feitosa de Lacerda
Full Text Available This paper presents results from a broader investigation of Brazilian Sign Language translator-interpreters (TILS) who work in Higher Education (HE). It outlines the profile of the professionals currently doing this work in academia, describing the varied realities of the different regions in which they work, their age range and training, how they started or became TILS, and how they began working at Higher Education Institutions (IES), among other aspects. In this context, aspects of their training and practice stand out. The investigation was based on interviews, and the results vary considerably, showing that there are different profiles and specific paths into this profession as interpreters. Considering the current Brazilian university context and the current educational policy advocating the inclusion of people with disabilities, in this case deaf students, in higher education courses, it should be noted that this inclusion demands a professional to mediate communication between deaf and hearing people, supporting deaf students' construction of knowledge in the educational space. Among the professionals who put inclusive education into practice are the TILS, as provided for by Decree 5.626; they are responsible for the linguistic accessibility of deaf students attending part of Basic Education and Higher Education, interpreting from Portuguese into LIBRAS and vice versa. A better understanding of the paths and profiles of TILS and of their work in HE can contribute to reflection on the training these professionals need in order to work in the bilingual inclusion of deaf students at the higher education level.
Peng, Fred C. C., Ed.
A collection of research materials on sign language and primatology is presented here. The essays attempt to show that: sign language is a legitimate language that can be learned not only by humans but by nonhuman primates as well, and nonhuman primates have the capability to acquire a human language using a different mode. The following…
Monney, M. (Mariette)
Abstract Finding new methods to achieve the goals of Education For All is a constant worry for primary school teachers. Multisensory methods have been proved to be efficient in the past decades. Sign Language, being a visual and kinesthetic language, could become a future educational tool to fulfill the needs of a growing diversity of learners. This ethnographic study describes how Sign Language exposure in inclusive classr...
Akmese, Pelin Pistav
Being hearing impaired limits one's ability to communicate in that it affects all areas of development, particularly speech. One of the methods the hearing impaired use to communicate is sign language. This study, a descriptive study, intends to examine the opinions of individuals who had enrolled in a sign language certification program by using…
Tyrone, Martha E; Mauk, Claude E
Because the primary articulators for sign languages are the hands, sign phonology and phonetics have focused mainly on them and treated other articulators as passive targets. However, there is abundant research on the role of nonmanual articulators in sign language grammar and prosody. The current study examines how hand and head/body movements are coordinated to realize phonetic targets. Kinematic data were collected from 5 deaf American Sign Language (ASL) signers to allow the analysis of movements of the hands, head and body during signing. In particular, we examine how the chin, forehead and torso move during the production of ASL signs at those three phonological locations. Our findings suggest that for signs with a lexical movement toward the head, the forehead and chin move to facilitate convergence with the hand. By comparison, the torso does not move to facilitate convergence with the hand for signs located at the torso. These results imply that the nonmanual articulators serve a phonetic as well as a grammatical or prosodic role in sign languages. Future models of sign phonetics and phonology should take into consideration the movements of the nonmanual articulators in the realization of signs. © 2016 S. Karger AG, Basel.
Williams, Joshua T.; Newman, Sharlene D.
The roles of visual sonority and handshape markedness in sign language acquisition and production were investigated. In Experiment 1, learners were taught sign-nonobject correspondences that varied in sign movement sonority and handshape markedness. Results from a sign-picture matching task revealed that high sonority signs were more accurately…
Stone, Adam; Petitto, Laura-Ann; Bosworth, Rain
The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing…
Sprenger, Kristen; Mathur, Gaurav
This article focuses on the syntactic level of the grammar of Saudi Arabian Sign Language by exploring some word orders that occur in personal narratives in the language. Word order is one of the main ways in which languages indicate the main syntactic roles of subjects, verbs, and objects; others are verbal agreement and nominal case morphology.…
Many Australian Aboriginal people use a sign language ("hand talk") that mirrors their local spoken language and is used both in culturally appropriate settings when speech is taboo or counterindicated and for community communication. The characteristics of these languages are described, and early European settlers' reports of deaf…
Baus, Cristina; Gutiérrez, Eva; Carreiras, Manuel
The aim of the present study was to investigate the functional role of syllables in sign language and how different phonological combinations influence sign production. Moreover, the influence of age of acquisition was evaluated. Deaf signers (native and non-native) of Catalan Sign Language (LSC) were asked in a picture-sign interference task to sign picture names while ignoring distractor-signs with which they shared two phonological parameters (out of the three main sign parameters: Location, Movement, and Handshape). The results revealed a different impact of the three phonological combinations. While no effect was observed for the phonological combination Handshape-Location, the combination Handshape-Movement slowed down signing latencies, but only in the non-native group. A facilitatory effect was observed for both groups when pictures and distractors shared Location-Movement. Importantly, linguistic models have considered this phonological combination to be a privileged unit in the composition of signs, as syllables are in spoken languages. Thus, our results support the functional role of syllable units during phonological articulation in sign language production.
Gameiro, João Manuel Ferreira
Sign language is the form of communication used by Deaf people and, in most cases, has been learned since childhood. The problem arises when a non-Deaf person tries to communicate with a Deaf person, for example when non-Deaf parents try to communicate with their Deaf child. In most cases, this situation tends to happen when the parents have not had time to properly learn sign language. This dissertation proposes the teaching of sign language through the use of serious games. Currently, similar soluti...
Despite being minority languages like many others, sign languages have traditionally remained absent from the agendas of policy makers and language planning and policies. In the past two decades, though, this situation has started to change at different paces and to different degrees in several countries. In this article, the author describes the…
... is an example of a contemporary sign language dictionary that leverages the 21st ... informed development of this bilingual, bi-directional, multimedia dictionary. ... and dealing with sociolinguistic variation in the selection and performance of ...
Hollman, Liivi; Sutrop, Urmas
The article is written in the tradition of Brent Berlin and Paul Kay's theory of basic color terms. According to this theory there is a universal inventory of eleven basic color categories from which the basic color terms of any given language are always drawn. The number of basic color terms varies from 2 to 11 and in a language having a fully…
Full Text Available This paper explores theatrical interpreting for Deaf spectators, a specialism that both blurs the separation between translation and interpreting, and replaces these potentials with a paradigm in which the translator's body is central to the production of the target text. Meaningful written translations of dramatic texts into sign language are not currently possible. For Deaf people to access Shakespeare or Moliere in their own language usually means attending a sign language interpreted performance, a typically disappointing experience that fails to provide accessibility or to fulfil the potential of a dynamically equivalent theatrical translation. I argue that when such interpreting events fail, significant contributory factors are the challenges involved in producing such a target text and the insufficient embodiment of that text. The second of these factors suggests that the existing conference and community models of interpreting are insufficient in describing theatrical interpreting. I propose that a model drawn from Theatre Studies, namely psychophysical acting, might be more effective for conceptualising theatrical interpreting. I also draw on theories from neurological research into the Mirror Neuron System to suggest that a highly visual and physical approach to performance (be that by actors or interpreters) is more effective in building a strong actor-spectator interaction than a performance in which meaning is conveyed by spoken words. Arguably this difference in language impact between signed and spoken is irrelevant to hearing audiences attending spoken language plays, but I suggest that for all theatre translators the implications are significant: it is not enough to create a literary translation as the target text; it is also essential to produce a text that suggests physicality. The aim should be the creation of a text which demands full expression through the body, the best picture of the human soul and the fundamental medium
Orfanidou, Eleni; McQueen, James M; Adam, Robert; Morgan, Gary
This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were preceded by nonsense signs which were produced in either the target location or another location (with a small or large transition). Half of the transitions were within the same major body area (e.g., head) and half were across body areas (e.g., chest to hand). Deaf adult BSL users (a group of natives and early learners, and a group of late learners) spotted target signs best when there was a minimal transition and worst when there was a large transition. When location changes were present, both groups performed better when transitions were to a different body area than when they were within the same area. These findings suggest that transitions do not provide explicit sign-boundary cues in a modality-specific fashion. Instead, we argue that smaller transitions help recognition in a modality-general way by limiting lexical search to signs within location neighbourhoods, and that transitions across body areas also aid segmentation in a modality-general way, by providing a phonotactic cue to a sign boundary. We propose that sign segmentation is based on modality-general procedures which are core language-processing mechanisms.
Caselli, Naomi K; Cohen-Goldberg, Ariel M
Psycholinguistic theories have predominantly been built upon data from spoken language, which leaves open the question: How many of the conclusions truly reflect language-general principles as opposed to modality-specific ones? We take a step toward answering this question in the domain of lexical access in recognition by asking whether a single cognitive architecture might explain diverse behavioral patterns in signed and spoken language. Chen and Mirman (2012) presented a computational model of word processing that unified opposite effects of neighborhood density in speech production, perception, and written word recognition. Neighborhood density effects in sign language also vary depending on whether the neighbors share the same handshape or location. We present a spreading activation architecture that borrows the principles proposed by Chen and Mirman (2012), and show that if this architecture is elaborated to incorporate relatively minor facts about either (1) the time course of sign perception or (2) the frequency of sub-lexical units in sign languages, it produces data that match the experimental findings from sign languages. This work serves as a proof of concept that a single cognitive architecture could underlie both sign and word recognition.
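The spreading-activation idea underlying this kind of architecture can be sketched as a toy numpy iteration. The weight matrix, decay constant and step count below are illustrative placeholders, not the parameters of Chen and Mirman (2012) or of the authors' model.

```python
import numpy as np

def spread_activation(weights, activation, decay=0.5, steps=3):
    """Repeat a simple spreading-activation update: each unit decays
    toward rest while receiving activation from its neighbours via
    the weight matrix. Clipping keeps activation in [0, 1]."""
    a = activation.astype(float)
    for _ in range(steps):
        a = decay * a + weights @ a
        a = np.clip(a, 0.0, 1.0)
    return a

# Two mutually connected units; activate the first and let it spread:
w = np.array([[0.0, 0.3],
              [0.3, 0.0]])
print(spread_activation(w, np.array([1.0, 0.0])))
```

With richer weight structure (e.g. shared-handshape vs. shared-location links), the same update rule can produce facilitatory or inhibitory neighborhood effects depending on connectivity.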
Mahfudi, Isa; Sarosa, Moechammad; Andrie Asmara, Rosa; Azrino Gustalika, M.
Indonesian sign language (ISL) is generally used by deaf individuals for communication. They use sign language as their primary language, which consists of two types of action: signs and fingerspelling. However, not all people understand their sign language, which makes it difficult for them to communicate with normal people; this problem is also a factor in their feeling isolated from social life. A solution is needed that can help them interact with normal people. Much research offers a variety of image-processing methods for solving the problem of sign language recognition. The SIFT (Scale Invariant Feature Transform) algorithm is one of the methods that can be used to identify an object; SIFT is claimed to be very resistant to scaling, rotation, illumination and noise. Using the SIFT algorithm for Indonesian sign language number recognition gives a recognition rate of 82% on a dataset of 100 sample images, consisting of 50 images for training and 50 images for testing. Changing the threshold value affects the recognition result; the best threshold value is 0.45, with a recognition rate of 94%.
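The matching step at the heart of a SIFT-based recogniser, including the distance-ratio threshold the abstract tunes, can be illustrated with a small numpy sketch. The descriptors below are toy vectors rather than real SIFT output, and the 0.45 threshold is simply the value reported in the abstract.

```python
import numpy as np

def ratio_test_matches(desc_query, desc_train, threshold=0.45):
    """Accept a match only when the nearest training descriptor is
    much closer than the second nearest (Lowe-style ratio test).
    Returns (query_index, train_index) pairs."""
    matches = []
    for i, d in enumerate(desc_query):
        dists = np.linalg.norm(desc_train - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if second > 0 and nearest / second < threshold:
            matches.append((i, int(order[0])))
    return matches

# Toy descriptors: the first query vector has one clearly best match,
# the second is ambiguous and is rejected by the ratio test.
desc_train = np.array([[0.0, 0.0], [10.0, 10.0], [5.0, 0.0]])
desc_query = np.array([[0.1, 0.0], [4.0, 4.0]])
print(ratio_test_matches(desc_query, desc_train))
```

Lowering the threshold keeps only very unambiguous matches (fewer false positives, more misses), which is the trade-off the 0.45 value optimises in the reported experiments.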
Shield, Aaron; Meier, Richard P.; Tager-Flusberg, Helen
We report the first study on pronoun use by an under-studied research population, children with autism spectrum disorder (ASD) exposed to American Sign Language from birth by their deaf parents. Personal pronouns cause difficulties for hearing children with ASD, who sometimes reverse or avoid them. Unlike speech pronouns, sign pronouns are…
Young, Lesa; Palmer, Jeffrey Levi; Reynolds, Wanette
This combined paper will focus on the description of two selected lexical patterns in Saudi Arabian Sign Language (SASL): metaphor and metonymy in emotion-related signs (Young) and lexicalization patterns of objects and their derivational roots (Palmer and Reynolds). The over-arcing methodology used by both studies is detailed in Stephen and…
Ritchings, Tim; Khadragi, Ahmed; Saeb, Magdy
A computer-based system for sign language tutoring has been developed using a low-cost data glove and a software application that processes the movement signals for signs in real-time and uses Pattern Matching techniques to decide if a trainee has closely replicated a teacher's recorded movements. The data glove provides 17 movement signals from…
Mary Theresa Biberauer
The study of literary expression in sign languages has increased over the last twenty .... extensively to express emotion on the part of a character in the narrative. ... township in her non-manual facial expressions while signing manually what is ...
Benedicto, E.; Cvejanov, S.; Quer, J.; Quer, J.F.
This paper provides a comparative analysis of the structural properties of serial verb constructions (SVC) in three sign languages: LSA (Lengua de Señas Argentina, Argentinean Sign Language), LSC (Llengua de Signes Catalana, Catalan Sign Language) and ASL (American Sign Language). The paper presents
Santos, Hudson P O; Black, Amanda M; Sandelowski, Margarete
Although there is increased understanding of language barriers in cross-language studies, the point at which language transformation processes are applied in research is inconsistently reported, or treated as a minor issue. Differences in translation timeframes raise methodological issues related to the material to be translated, as well as for the process of data analysis and interpretation. In this article we address methodological issues related to the timing of translation from Portuguese to English in two international cross-language collaborative research studies involving researchers from Brazil, Canada, and the United States. One study entailed late-phase translation of a research report, whereas the other study involved early phase translation of interview data. The timing of translation in interaction with the object of translation should be considered, in addition to the language, cultural, subject matter, and methodological competencies of research team members. © The Author(s) 2014.
A South African Sign Language Dictionary for Families with Young Deaf Children (SLED 2006) was used with permission ... Figure 1: Syllable structure of a CVC syllable in the word “bed”. In spoken languages .... often than not, there is a societal emphasis on 'fixing' a child's deafness and attempting to teach deaf children to ...
Attitudes are complex and little research in the field of linguistics has focused on language attitudes. This article deals with attitudes toward sign languages and those who use them--attitudes that are influenced by ideological constructions. The article reviews five categories of such constructions and discusses examples in each one.
This article discusses several aspects of language planning with respect to Sign Language of the Netherlands, or Nederlandse Gebarentaal (NGT). For nearly thirty years members of the Deaf community, the Dutch Deaf Council (Dovenschap) have been working together with researchers, several organizations in deaf education, and the organization of…
Chaveiro, Neuma; Duarte, Soraya Bianca Reis; Freitas, Adriana Ribeiro de; Barbosa, Maria Alves; Porto, Celmo Celeno; Fleck, Marcelo Pio de Almeida
To construct versions of the WHOQOL-BREF and WHOQOL-DIS instruments in Brazilian sign language to evaluate the Brazilian deaf population's quality of life. The methodology proposed by the World Health Organization (WHOQOL-BREF and WHOQOL-DIS) was used to construct instruments adapted to the deaf community using Brazilian Sign Language (Libras). The research for constructing the instrument took place in 13 phases: 1) creating the QUALITY OF LIFE sign; 2) developing the answer scales in Libras; 3) translation by a bilingual group; 4) synthesized version; 5) first back translation; 6) production of the version in Libras to be provided to the focal groups; 7) carrying out the Focal Groups; 8) review by a monolingual group; 9) revision by the bilingual group; 10) semantic/syntactic analysis and second back translation; 11) re-evaluation of the back translation by the bilingual group; 12) recording the version into the software; 13) developing the WHOQOL-BREF and WHOQOL-DIS software in Libras. Characteristics peculiar to the culture of the deaf population indicated the necessity of adapting the application methodology of focal groups composed of deaf people. The writing conventions of sign languages have not yet been consolidated, leading to difficulties in graphically registering the translation phases. Linguistic structures that caused major problems in translation were those that included idiomatic Portuguese expressions, for many of which there are no equivalent concepts between Portuguese and Libras. In the end, it was possible to create WHOQOL-BREF and WHOQOL-DIS software in Libras. The WHOQOL-BREF and the WHOQOL-DIS in Libras will allow the deaf to express themselves about their quality of life in an autonomous way, making it possible to investigate these issues more accurately.
Ong, Wing S. S
Sequoyah, the Department of Defense (DoD) Program of Record for automated foreign language translation, aims to identify current and developing technologies to meet warfighter requirements for foreign language support...
Singha, Joyeeta; Das, Karen
Sign Language Recognition has emerged as one of the important areas of research in Computer Vision. The difficulty faced by researchers is that instances of signs vary in both motion and appearance. Thus, in this paper a novel approach for recognizing various alphabets of Indian Sign Language is proposed, in which continuous video sequences of the signs are considered. The proposed system comprises three stages: preprocessing, feature extraction and classification. The preprocessing stage includes skin filtering and histogram matching. Eigenvalues and eigenvectors were used for the feature extraction stage, and finally an eigenvalue-weighted Euclidean distance is used to recognize the sign. The system deals with bare hands, thus allowing the user to interact with it in a natural way. We considered 24 different alphabets in the video sequences and attained a success rate of 96.25%.
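One plausible reading of the eigenvalue-weighted Euclidean distance used in the final classification stage can be sketched in numpy. The weighting scheme and feature values below are assumptions for illustration, since the abstract does not spell them out.

```python
import numpy as np

def eigen_weighted_distance(x, y, eigenvalues):
    """Euclidean distance with each feature axis scaled by its
    eigenvalue, so high-variance (more discriminative) axes count
    for more. One possible interpretation of the paper's metric."""
    return float(np.sqrt(np.sum(eigenvalues * (x - y) ** 2)))

# Toy eigenfeatures for a stored sign and a query frame (made up):
stored = np.array([1.0, 1.0])
query = np.array([0.0, 0.0])
eigvals = np.array([2.0, 0.5])
print(eigen_weighted_distance(stored, query, eigvals))  # sqrt(2.5) ~ 1.581
```

In a full system the eigenvalues and eigenvectors would come from the covariance of the training feature set, and the query frame would be projected onto the eigenbasis before the distance is computed.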
Ben Jmaa, Ahmed; Mahdi, Walid; Ben Jemaa, Yousra; Ben Hamadou, Abdelmajid
We present in this paper a new approach for Arabic sign language (ArSL) alphabet recognition using hand gesture analysis. This analysis consists in extracting histogram of oriented gradients (HOG) features from a hand image and then using them to generate SVM models, which are used to recognize the ArSL alphabet in real time from hand gestures captured by a Microsoft Kinect camera. Our approach involves three steps: (i) hand detection and localization using a Microsoft Kinect camera, (ii) hand segmentation, and (iii) feature extraction and Arabic alphabet recognition. For each input image, first obtained using a depth sensor, we apply our method based on hand anatomy to segment the hand and eliminate all erroneous pixels. This approach is invariant to scale, rotation and translation of the hand. Experimental results show the effectiveness of our new approach: the proposed ArSL system is able to recognize the ArSL alphabet with an accuracy of 90.12%.
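The HOG descriptor such a system feeds to its SVM can be illustrated with a single-cell numpy sketch. Real HOG pipelines add block normalisation over many cells; this simplified version is not the authors' implementation.

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Unsigned-gradient orientation histogram for one HOG cell:
    gradient magnitudes are accumulated into n_bins orientation
    bins spanning 0-180 degrees."""
    gy, gx = np.gradient(cell.astype(float))   # image gradients
    mag = np.hypot(gx, gy)                     # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist = np.zeros(n_bins)
    bin_idx = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    np.add.at(hist, bin_idx, mag)              # accumulate magnitudes
    return hist

# A pure horizontal intensity ramp: all gradient mass lands in the
# 0-degree bin.
cell = np.tile(np.arange(8.0), (8, 1))
print(hog_cell_histogram(cell))
```

Concatenating such histograms over a grid of cells (with block normalisation) yields the feature vector an SVM can be trained on, one model per alphabet letter in a one-vs-rest setup.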
Zhao, Xueyu; Solano-Flores, Guillermo; Qian, Ming
This article addresses test translation review in international test comparisons. We investigated the applicability of the theory of test translation error--a theory of the multidimensionality and inevitability of test translation error--across source language-target language combinations in the translation of PISA (Programme of International…
The article discusses word order, the syntactic arrangement of words in a sentence, clause, or phrase as one of the most crucial aspects of grammar of any spoken language. It aims to investigate the order of the primary constituents which can either be subject, object, or verb of a simple
Full Text Available A place name sign is a linguistic-cultural marker that includes both memory and landscape. The author regards toponymic signs in Estonian Sign Language as representations of images held by the Estonian Deaf community: they reflect the geographical place, the period, the relationships of the Deaf community with the hearing community, and the common and distinguishing features of the two cultures as perceived by the community's members. Name signs represent an element of signlore, which includes various types of creative linguistic play. There are stories hidden behind the place name signs that reveal the etymological origin of place name signs and reflect the community's memory. The purpose of this article is twofold. Firstly, it aims to introduce Estonian place name signs as Deaf signlore forms, analyse their structure and specify the main formation methods. Secondly, it interprets place-denoting signs in the light of understanding the foundations of Estonian Sign Language, Estonian Deaf education and education history, the traditions of local Deaf communities, and also the cultural and local traditions of the dominant hearing communities. Both perspectives - linguistic and folkloristic - are represented in the current article.
Rudner, Mary; Andin, Josefine; Rönnberg, Jerker; Heimann, Mikael; Hermansson, Anders; Nelson, Keith; Tjus, Tomas
The literacy skills of deaf children generally lag behind those of their hearing peers. The mechanisms of reading in deaf individuals are only just beginning to be unraveled but it seems that native language skills play an important role. In this study 12 deaf pupils (six in grades 1-2 and six in grades 4-6) at a Swedish state primary school for…
Full Text Available A sign in sign language, equivalent to a word, phrase or sentence in an oral language, can be divided into linguistic units of lower levels: shape of the hand, place of articulation, type of movement and orientation of the palm. The first description of these units that is present and applicable in Bosnia and Herzegovina (B&H) today was given by Zimmerman in 1986, who found 27 shapes of the hand, while the other types of units were not systematically developed or described. The goal of this study was to determine the possible existence of other hand shapes present in sign language in B&H. Using the method of content analysis on 425 signs of the sign language used in B&H, we confirmed their existence and also discovered and presented 14 new shapes of the hand. This confirms the need for detailed research, standardization and publication of the sign language used in B&H, which would provide adequate conditions for its study and application, both for the deaf and for all others who come into direct contact with them.
Rogalsky, Corianne; Raphel, Kristin; Tomkovicz, Vivian; O'Grady, Lucinda; Damasio, Hanna; Bellugi, Ursula; Hickok, Gregory
The neural basis of action understanding is a hotly debated issue. The mirror neuron account holds that motor simulation in fronto-parietal circuits is critical to action understanding including speech comprehension, while others emphasize the ventral stream in the temporal lobe. Evidence from speech strongly supports the ventral stream account, but on the other hand, evidence from manual gesture comprehension (e.g., in limb apraxia) has led to contradictory findings. Here we present a lesion analysis of sign language comprehension. Sign language is an excellent model for studying mirror system function in that it bridges the gap between the visual-manual system in which mirror neurons are best characterized and language systems which have represented a theoretical target of mirror neuron research. Twenty-one life long deaf signers with focal cortical lesions performed two tasks: one involving the comprehension of individual signs and the other involving comprehension of signed sentences (commands). Participants' lesions, as indicated on MRI or CT scans, were mapped onto a template brain to explore the relationship between lesion location and sign comprehension measures. Single sign comprehension was not significantly affected by left hemisphere damage. Sentence sign comprehension impairments were associated with left temporal-parietal damage. We found that damage to mirror system related regions in the left frontal lobe were not associated with deficits on either of these comprehension tasks. We conclude that the mirror system is not critically involved in action understanding.
Parton, Becky Sue
Foreign sign language instruction is an important, but overlooked area of study. Thus the purpose of this paper was two-fold. First, the researcher sought to determine the level of knowledge and interest in foreign sign language among Deaf teenagers along with their learning preferences. Results from a survey indicated that over a third of the…
Barberà, Gemma; Zwets, Martine
In both signed and spoken languages, pointing serves to direct an addressee's attention to a particular entity. This entity may be either present or absent in the physical context of the conversation. In this article we focus on pointing directed to nonspeaker/nonaddressee referents in Sign Language of the Netherlands (Nederlandse Gebarentaal,…
Holmer, Emil; Heimann, Mikael; Rudner, Mary
Imitation and language processing are closely connected. According to the Ease of Language Understanding (ELU) model (Rönnberg et al., 2013) pre-existing mental representation of lexical items facilitates language understanding. Thus, imitation of manual gestures is likely to be enhanced by experience of sign language. We tested this by eliciting imitation of manual gestures from deaf and hard-of-hearing (DHH) signing and hearing non-signing children at a similar level of language and cognitive development. We predicted that the DHH signing children would be better at imitating gestures lexicalized in their own sign language (Swedish Sign Language, SSL) than unfamiliar British Sign Language (BSL) signs, and that both groups would be better at imitating lexical signs (SSL and BSL) than non-signs. We also predicted that the hearing non-signing children would perform worse than DHH signing children with all types of gestures the first time (T1) we elicited imitation, but that the performance gap between groups would be reduced when imitation was elicited a second time (T2). Finally, we predicted that imitation performance on both occasions would be associated with linguistic skills, especially in the manual modality. A split-plot repeated measures ANOVA demonstrated that DHH signers imitated manual gestures with greater precision than non-signing children when imitation was elicited the second but not the first time. Manual gestures were easier to imitate for both groups when they were lexicalized than when they were not; but there was no difference in performance between familiar and unfamiliar gestures. For both groups, language skills at T1 predicted imitation at T2. Specifically, for DHH children, word reading skills, comprehension and phonological awareness of sign language predicted imitation at T2. For the hearing participants, language comprehension predicted imitation at T2, even after the effects of working memory capacity and motor skills were taken into
This paper argues that Dev Virahsawmy, an author who manipulates literary translation for the purposes of linguistic prestige formation and re-negotiation, is a critical language-policy practitioner, as his work fills an important gap in language planning scholarship. A micro-analysis of the translation of a Shakespearean sonnet into Mauritian…
Eaton, Sarah Elaine
This guidebook for teachers documents the "Harry Potter in Translation" project undertaken at the Language Research Centre at the University of Calgary. The guide also offers 5 sample lesson plans for teachers of grades three to twelve for teaching world languages using the Harry Potter books in translation to engage students. (Contains…
This study makes out a case for the thorny problem of literary translation into Nigeria's indigenous languages and its role in national development. In this paper, we outline the way forward given the fact that literary translation into Nigerian languages had gone through a sticky patch. Federal, State and Local governments in ...
Malaia, Evie; Borneman, Joshua D; Wilbur, Ronnie B
The ability to convey information is a fundamental property of communicative signals. For sign languages, which are overtly produced with multiple, completely visible articulators, the question arises as to how the various channels co-ordinate and interact with each other. We analyze motion capture data of American Sign Language (ASL) narratives, and show that the capacity of information throughput, mathematically defined, is highest on the dominant hand (DH). We further demonstrate that information transfer capacity is also significant for the non-dominant hand (NDH), and the head channel too, as compared to control channels (ankles). We discuss both redundancy and independence in articulator motion in sign language, and argue that the NDH and the head articulators contribute to the overall information transfer capacity, indicating that they are neither completely redundant to, nor completely independent of, the DH.
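The channel comparison described above can be made concrete with a toy calculation. The sketch below is not the authors' measure of information throughput; it uses the Shannon entropy of a discretized velocity distribution as a crude stand-in, and the "dominant hand" and "ankle" traces are synthetic signals invented for illustration.

```python
import numpy as np

def velocity_entropy(position, bins):
    """Shannon entropy (bits) of a channel's discretized velocity distribution;
    a crude stand-in for the information throughput of one articulator."""
    vel = np.diff(position)
    hist, _ = np.histogram(vel, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 stays finite
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
# Synthetic traces: an active "dominant hand" vs. a nearly still "ankle" control.
hand = (np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 4.0 * t)
        + 0.01 * rng.standard_normal(t.size))
ankle = 0.002 * rng.standard_normal(t.size)

bins = np.linspace(-0.1, 0.1, 65)  # shared bin edges keep channels comparable
print(velocity_entropy(hand, bins), velocity_entropy(ankle, bins))
```

With shared bin edges, the richly moving channel spreads its velocities over many more bins than the near-static control, yielding a higher entropy, which is the qualitative pattern the abstract reports for the dominant hand versus the ankles.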
Corina, David P; Knapp, Heather
In this paper we review evidence for frontal and parietal lobe involvement in sign language comprehension and production, and evaluate the extent to which these data can be interpreted within the context of a mirror neuron system for human action observation and execution. We present data from three literatures--aphasia, cortical stimulation, and functional neuroimaging. Generally, we find support for the idea that sign language comprehension and production can be viewed in the context of a broadly-construed frontal-parietal human action observation/execution system. However, sign language data cannot be fully accounted for under a strict interpretation of the mirror neuron system. Additionally, we raise a number of issues concerning the lack of specificity in current accounts of the human action observation/execution system.
Baus, Cristina; Costa, Albert
This study investigates the temporal dynamics of sign production and how particular aspects of the signed modality influence the early stages of lexical access. To that end, we explored the electrophysiological correlates associated with sign frequency and iconicity in a picture signing task in a group of bimodal bilinguals. Moreover, a subset of the same participants was tested in the same task but naming the pictures instead. Our results revealed that both frequency and iconicity influenced lexical access in sign production. At the ERP level, iconicity effects originated very early in the course of signing (while absent in the spoken modality), suggesting a stronger activation of the semantic properties for iconic signs. Moreover, frequency effects were modulated by iconicity, suggesting that lexical access in signed language is determined by the iconic properties of the signs. These results support the idea that lexical access is sensitive to the same phenomena in word and sign production, but its time-course is modulated by particular aspects of the modality in which a lexical item will be finally articulated. Copyright © 2015 Elsevier B.V. All rights reserved.
Paolo E. Balboni
Literature about translation in language learning and teaching shows the prominence of the ‘for and against’ approach, while a ‘what for’ approach would be more profitable. In order to prevent the latter approach from becoming a random list of the potential benefits of the use of translation in language teaching, this essay suggests the use of a formal model of communicative competence, to see which of its components can profit from translation activities. The result is a map of the effects of translation on the wide range of competences and abilities which constitute language learning.
Under today's economic globalization, advertising is becoming increasingly international, and enterprises in all countries face the same global problem: advertising translation. When dealing with advertising translation, we should take full account of the language habits and cultural background of target customers. It is therefore important to be familiar with the language characteristics and translation skills of English advertisements. In this paper, I introduce the language characteristics of English advertisements from the three aspects of words, syntax and rhetorical devices, and introduce skills of advertising translation.
My point of departure for this paper is that translation, so long neglected in foreign language teaching, can not only improve students’ linguistic competences in both a foreign language and their mother tongue, but also their awareness of cultural and intercultural elements. It is a widespread popular assumption, among those not involved in language teaching, that linguistic competences are the key to learning a language and to communicating in a foreign language; consequently, they assume that translation ought to play a major role in the study of a foreign language. Indeed, late 20th century theories of language teaching, apart from the grammar-translation method, have largely ignored or criticized the role of translation. I will focus on a translation course I taught to a class of year-three Italian undergraduate students studying foreign languages, and discuss the advantages of using translation to improve students’ linguistic competences, in their mother tongue and in the foreign language, and to develop their intercultural communicative competences and their cultural (Bassnett, 2002, 2007) and intercultural awareness (Kramsch, 1993, 1998). The translated text was taken from The Simpsons, season 21, episode 16.
Parks, Elizabeth S.
In this paper, I use a holographic metaphor to explain the identification of overlapping sign language communities in Panama. By visualizing Panama's complex signing communities as emitting community "hotspots" through social drama on multiple stages, I employ ethnographic methods to explore overlapping contours of Panama's sign language…
This paper is a thought experiment exploring the possibility of establishing universal bilingualism in Sign Languages. Focusing in the first part on historical examples of inclusive signing societies such as Martha's Vineyard, the author suggests that it is not possible to create such naturally occurring practices of Sign Bilingualism in societies…
Dalawis, Rando C.; Olayao, Kenneth Deniel R.; Ramos, Evan Geoffrey I.; Samonte, Mary Jane C.
A different approach to sign language recognition of static and dynamic hand movements was developed in this study using a normalized correlation algorithm. The goal of this research was to translate fingerspelling sign language into text using MATLAB and a Microsoft Kinect. Digital input images captured by the Kinect device are matched against template samples stored in a database. This Human Computer Interaction (HCI) prototype was developed to help people with communication disabilities express their thoughts with ease. Frame segmentation and feature extraction were used to give meaning to the captured images. Sequential and random testing was used to test both static and dynamic fingerspelling gestures. The researchers explain some factors they encountered that caused misclassification of signs.
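The matching step described above can be illustrated with a minimal zero-mean normalized-correlation sketch, in Python rather than MATLAB. The 8×8 arrays are hypothetical stand-ins for Kinect-captured frames and the template database; the labels and sizes are invented for illustration.

```python
import numpy as np

def normalized_correlation(frame, template):
    """Zero-mean normalized correlation between an input frame and a template;
    returns a similarity score in [-1, 1]."""
    f = frame.astype(float) - frame.mean()
    t = template.astype(float) - template.mean()
    denom = np.sqrt((f * f).sum() * (t * t).sum())
    if denom == 0:
        return 0.0
    return float((f * t).sum() / denom)

def classify(frame, templates):
    """Return the label of the best-matching template in the database."""
    return max(templates, key=lambda label: normalized_correlation(frame, templates[label]))

# Hypothetical 8x8 "depth images" standing in for fingerspelled-letter templates.
rng = np.random.default_rng(1)
templates = {"A": rng.random((8, 8)), "B": rng.random((8, 8))}
noisy_a = templates["A"] + 0.05 * rng.standard_normal((8, 8))
print(classify(noisy_a, templates))  # a slightly corrupted "A" still matches "A"
```

Because the score is normalized, it is insensitive to overall brightness and contrast shifts between the captured frame and the stored template, which is one reason this family of methods is common in template matching.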
Marshall, Chloë R; Morgan, Gary
There has long been interest in why languages are shaped the way they are, and in the relationship between sign language and gesture. In sign languages, entity classifiers are handshapes that encode how objects move, how they are located relative to one another, and how multiple objects of the same type are distributed in space. Previous studies have shown that hearing adults who are asked to use only manual gestures to describe how objects move in space will use gestures that bear some similarities to classifiers. We investigated how accurately hearing adults, who had been learning British Sign Language (BSL) for 1-3 years, produce and comprehend classifiers in (static) locative and distributive constructions. In a production task, learners of BSL knew that they could use their hands to represent objects, but they had difficulty choosing the same, conventionalized, handshapes as native signers. They were, however, highly accurate at encoding location and orientation information. Learners therefore show the same pattern found in sign-naïve gesturers. In contrast, handshape, orientation, and location were comprehended with equal (high) accuracy, and testing a group of sign-naïve adults showed that they too were able to understand classifiers with higher than chance accuracy. We conclude that adult learners of BSL bring their visuo-spatial knowledge and gestural abilities to the tasks of understanding and producing constructions that contain entity classifiers. We speculate that investigating the time course of adult sign language acquisition might shed light on how gesture became (and, indeed, becomes) conventionalized during the genesis of sign languages. Copyright © 2014 Cognitive Science Society, Inc.
Øhre, Beate; Saltnes, Hege; von Tetzchner, Stephen; Falkum, Erik
There is a need for psychiatric assessment instruments that enable reliable diagnoses in persons with hearing loss who have sign language as their primary language. The objective of this study was to assess the validity of the Norwegian Sign Language (NSL) version of the Mini International Neuropsychiatric Interview (MINI). The MINI was translated into NSL. Forty-one signing patients consecutively referred to two specialised psychiatric units were assessed with a diagnostic interview by clinical experts and with the MINI. Inter-rater reliability was assessed with Cohen's kappa and "observed agreement". There was 65% agreement between MINI diagnoses and clinical expert diagnoses. Kappa values indicated fair to moderate agreement, and observed agreement was above 76% for all diagnoses. The MINI diagnosed more co-morbid conditions than did the clinical expert interview (mean diagnoses: 1.9 versus 1.2). Kappa values indicated moderate to substantial agreement, and "observed agreement" was above 88%. The NSL version performs similarly to other MINI versions and demonstrates adequate reliability and validity as a diagnostic instrument for assessing mental disorders in persons who have sign language as their primary and preferred language.
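Cohen's kappa, the agreement statistic used in the study above, corrects raw inter-rater agreement for the agreement expected by chance. The following is a minimal sketch; the ten diagnoses are hypothetical, not the study's data.

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters,
    e.g. MINI diagnoses versus clinical expert diagnoses."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    labels = sorted(set(ratings_a) | set(ratings_b))
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    expected = sum(
        (ratings_a.count(lab) / n) * (ratings_b.count(lab) / n) for lab in labels
    )
    return (observed - expected) / (1 - expected)

# Hypothetical diagnoses for 10 patients from two raters.
mini   = ["dep", "dep", "anx", "dep", "anx", "dep", "anx", "anx", "dep", "dep"]
expert = ["dep", "dep", "anx", "dep", "dep", "dep", "anx", "anx", "dep", "anx"]
print(round(cohens_kappa(mini, expert), 2))  # → 0.58
```

Here observed agreement is 0.80 but chance agreement is 0.52, so kappa lands at 0.58, "moderate" on the conventional interpretation scale, illustrating why the study reports both kappa and "observed agreement".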
Kim, Jonghwa; Wagner, Johannes; Rehm, Matthias
In this paper, we investigate the mutually complementary functionality of accelerometer (ACC) and electromyogram (EMG) sensors for recognizing seven word-level sign vocabularies in German Sign Language (GSL). Results are discussed for the single channels and for feature-level fusion of the two channels, including the user-independent condition, where subjective differences do not allow for high recognition rates. Finally, we discuss a problem of feature-level fusion caused by the high disparity between the accuracies of each single-channel classification.
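Feature-level fusion, as opposed to decision-level fusion, concatenates per-channel features into one vector before a single classifier is trained. A hedged sketch with synthetic signals follows; the specific features and window length are illustrative, not those of the study.

```python
import numpy as np

def window_features(signal):
    """Simple per-window features (mean absolute value, variance, zero crossings)
    of the kind commonly extracted from ACC and EMG streams."""
    return np.array([
        np.abs(signal).mean(),
        signal.var(),
        float((np.diff(np.sign(signal)) != 0).sum()),
    ])

def fuse(acc_window, emg_window):
    """Feature-level fusion: concatenate per-channel feature vectors so one
    classifier sees both channels, rather than merging per-channel decisions."""
    return np.concatenate([window_features(acc_window), window_features(emg_window)])

# Synthetic 200-sample windows standing in for one segmented sign.
rng = np.random.default_rng(2)
acc = rng.standard_normal(200)
emg = rng.standard_normal(200)
fused = fuse(acc, emg)
print(fused.shape)  # one 6-dimensional vector per analysis window
```

The disparity problem the abstract mentions arises here naturally: if one channel's features are far more discriminative than the other's, the weaker channel can add noise rather than information to the fused vector.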
Jones, T; Cumberbatch, K
The introduction of the landmark mandatory teaching of sign language to undergraduate dental students at the University of the West Indies (UWI), Mona Campus in Kingston, Jamaica, to bridge the communication gap between dentists and their patients is reviewed. A review of over 90 Doctor of Dental Surgery and Doctor of Dental Medicine curricula in North America, the United Kingdom, parts of Europe and Australia showed no inclusion of sign language in those curricula as a mandatory component. In Jamaica, the government's training school for dental auxiliaries served as the forerunner to the UWI's introduction of formal training in sign language in 2012. Outside of the UWI, a couple of dental schools have sign language courses, but none have a mandatory programme like the one at the UWI. Dentists the world over have had to rely on interpreters to sign with their deaf patients. Deaf Jamaicans have resented the fact that dentists cannot sign; many have felt insulted and go to the dentist only in emergency situations. The mandatory inclusion of sign language in the Undergraduate Dental Programme curriculum at The University of the West Indies, Mona Campus, sought to establish a direct communication channel to formally bridge this gap. The programme of two sign language courses and a direct clinical competency requirement was developed during the second year of the first cohort of the newly introduced undergraduate dental programme through a collaborating partnership between two faculties on the Mona Campus. The programme was introduced in 2012 in the third year of the 5-year undergraduate dental programme. To date, two cohorts have completed the programme, and the preliminary findings from an ongoing clinical study have shown a positive impact on dental care access and dental treatment for deaf patients at the UWI Mona Dental Polyclinic. The development of a direct communication channel between dental students and the deaf that has led to increased dental
Kiernan, Julia; Meier, Joyce; Wang, Xiqiao
This collaborative project explores the affordances of a translation assignment in the context of a learner-centered pedagogy that places composition students' movement among languages and cultures as both a site for inquiry and subject of analysis. The translation assignment asks students to translate scholarly articles or culture stories from…
Andrei, Stefan; Osborne, Lawrence; Smith, Zanthia
The current learning process of Deaf or Hard of Hearing (D/HH) students taking Science, Technology, Engineering, and Mathematics (STEM) courses needs, in general, a sign interpreter for the translation of English text into American Sign Language (ASL) signs. This method is at best impractical due to the lack of availability of a specialized sign…
A. A. Karpov
We present a conceptual model, architecture and software of a multimodal system for audio-visual speech and sign language synthesis from input text. The main components of the developed multimodal synthesis system (signing avatar) are: an automatic text processor for input text analysis; a simulation 3D model of a human's head; a computer text-to-speech synthesizer; a system for audio-visual speech synthesis; a simulation 3D model of a human's hands and upper body; and a multimodal user interface integrating all the components for generation of audio, visual and signed speech. The proposed system performs automatic translation of input textual information into speech (audio information) and gestures (video information), information fusion and its output in the form of multimedia information. A user can input any grammatically correct text in the Russian or Czech languages to the system; it is analyzed by the text processor to detect sentences, words and characters. Then this textual information is converted into symbols of the sign language notation. We apply the international «Hamburg Notation System» (HamNoSys), which describes the main differential features of each manual sign: hand shape, hand orientation, place and type of movement. On their basis the 3D signing avatar displays the elements of the sign language. The virtual 3D model of a human's head and upper body has been created using the VRML virtual reality modeling language, and it is controlled by software based on the OpenGL graphical library. The developed multimodal synthesis system is a universal one, since it is oriented to both regular users and disabled people (in particular, the hard-of-hearing and visually impaired), and it serves for multimedia output (by audio and visual modalities) of input textual information.
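The lexicon-lookup stage of such a text-to-sign pipeline can be sketched as follows. The entries here are invented placeholders, not real HamNoSys notation; each toy entry lists the four differential features the abstract names (hand shape, hand orientation, place, movement).

```python
# Toy stand-ins for HamNoSys entries; the values are invented for illustration.
SIGN_LEXICON = {
    "hello":  ("flat-hand", "palm-out", "forehead", "arc"),
    "thanks": ("flat-hand", "palm-up", "chin", "forward"),
}

def text_to_sign_script(text):
    """Map each word to its notation entry; unknown words fall back to
    fingerspelling, as a full synthesis system typically would."""
    script = []
    for word in text.lower().split():
        if word in SIGN_LEXICON:
            script.append(("sign", word, SIGN_LEXICON[word]))
        else:
            script.append(("fingerspell", word))
    return script

print(text_to_sign_script("Hello world"))
```

In the system described above, the output of this stage would drive the 3D avatar's articulation, while a parallel text-to-speech path produces the audio channel, with the two streams fused for multimedia output.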
Herman, Ros; Rowley, Katherine; Mason, Kathryn; Morgan, Gary
This study details the first ever investigation of narrative skills in a group of 17 deaf signing children who have been diagnosed with disorders in their British Sign Language development compared with a control group of 17 deaf child signers matched for age, gender, education, quantity, and quality of language exposure and non-verbal intelligence. Children were asked to generate a narrative based on events in a language free video. Narratives were analysed for global structure, information content and local level grammatical devices, especially verb morphology. The language-impaired group produced shorter, less structured and grammatically simpler narratives than controls, with verb morphology particularly impaired. Despite major differences in how sign and spoken languages are articulated, narrative is shown to be a reliable marker of language impairment across the modality boundaries. © 2014 Royal College of Speech and Language Therapists.
Sherman, Judy; Torres-Crespo, Marisel N.
Capitalizing on preschoolers' inherent enthusiasm and capacity for learning, the authors developed and implemented a dual-language program to enable young children to experience diversity and multiculturalism by learning two new languages: Spanish and American Sign Language. Details of the curriculum, findings, and strategies are shared.
In recent years, there has been a growing debate in the United States, Europe, and Australia about the nature of the Deaf community as a cultural community, and the recognition of signed languages as “real” or “legitimate” languages comparable in all meaningful ways to spoken languages. An important element of this ...
Knapp, Heather Patterson; Corina, David P
Language is proposed to have developed atop the human analog of the macaque mirror neuron system for action perception and production [Arbib M.A. 2005. From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics (with commentaries and author's response). Behavioral and Brain Sciences, 28, 105-167; Arbib M.A. (2008). From grasp to language: Embodied concepts and the challenge of abstraction. Journal de Physiologie Paris 102, 4-20]. Signed languages of the deaf are fully-expressive, natural human languages that are perceived visually and produced manually. We suggest that if a unitary mirror neuron system mediates the observation and production of both language and non-linguistic action, three predictions can be made: (1) damage to the human mirror neuron system should non-selectively disrupt both sign language and non-linguistic action processing; (2) within the domain of sign language, a given mirror neuron locus should mediate both perception and production; and (3) the action-based tuning curves of individual mirror neurons should support the highly circumscribed set of motions that form the "vocabulary of action" for signed languages. In this review we evaluate data from the sign language and mirror neuron literatures and find that these predictions are only partially upheld. 2009 Elsevier Inc. All rights reserved.
Van Staden, Annalene
This article argues for the importance of allowing deaf children to acquire sign language from an early age. It demonstrates firstly that the critical/sensitive period hypothesis for language acquisition can be applied to specific language aspects of spoken language as well as sign languages (i.e. phonology, grammatical processing and syntax). This makes early diagnosis and early intervention of crucial importance. Moreover, research findings presented in this article demonstrate the advantage that sign language offers in the early years of a deaf child’s life by comparing the language development milestones of deaf learners exposed to sign language from birth to those of late-signers, orally trained deaf learners and hearing learners exposed to spoken language. The controversy over the best medium of instruction for deaf learners is briefly discussed, with emphasis placed on the possible value of bilingual-bicultural programmes to facilitate the development of deaf learners’ literacy skills. Finally, this paper concludes with a discussion of the implications/recommendations of sign language teaching and Deaf education in South Africa.
Williams, Joshua T; Newman, Sharlene D
The roles of visual sonority and handshape markedness in sign language acquisition and production were investigated. In Experiment 1, learners were taught sign-nonobject correspondences that varied in sign movement sonority and handshape markedness. Results from a sign-picture matching task revealed that high sonority signs were more accurately matched, especially when the sign contained a marked handshape. In Experiment 2, learners produced these familiar signs in addition to novel signs, which differed based on sonority and markedness. Results from a key-release reaction time reproduction task showed that learners tended to produce high sonority signs much more quickly than low sonority signs, especially when the sign contained an unmarked handshape. This effect was only present in familiar signs. Sign production accuracy rates revealed that high sonority signs were more accurate than low sonority signs. Similarly, signs with unmarked handshapes were produced more accurately than those with marked handshapes. Together, results from Experiments 1 and 2 suggested that signs that contain high sonority movements are more easily processed, both perceptually and productively, and handshape markedness plays a differential role in perception and production. © The Author 2015. Published by Oxford University Press. All rights reserved.
The application of fuzzy language in English advertisements is very broad; fuzzy language can make advertising more attractive and thus help advertisers achieve their design goals. This paper discusses the application of fuzzy language and its translation, charting a better path for the development of English advertising.
Early diagnosis and intervention are now recognized as undeniable rights of deaf and hard-of-hearing children and their families. The deaf child’s family must have the opportunity to socialize with deaf children and deaf adults. The deaf child’s family must also have access to all the information on the general development of their child, and to special information on hearing impairment, communication options and linguistic development of the deaf child. The critical period hypothesis for language acquisition proposes that the outcome of language acquisition is not uniform over the lifespan but rather is best during early childhood. Individuals who learned sign language from birth performed better on linguistic and memory tasks than individuals who did not start learning sign language until after puberty. The old prejudice that the deaf child must learn the spoken language at a very young age, and that sign language can wait because it can be easily learned by any person at any age, can no longer be maintained. The cultural approach to deafness emphasizes three necessary components in the development of a deaf child: (1) stimulating early communication using natural sign language within the family and interacting with the Deaf community; (2) bilingual/bicultural education; and (3) ensuring deaf persons’ rights to enjoy the services of high-quality interpreters throughout their education, from kindergarten to university. This new view of the phenomenology of deafness means that the environment needs to be changed in order to meet the deaf person’s needs, not the contrary.
This article explores three models of sustainability (environmental, economic, and social) and identifies characteristics of a sustainable community necessary to sustain the Deaf community as a whole. It is argued that sign language legislation is a valuable tool for achieving sustainability for the generations to come.
Tomita, Nozomi; Kozak, Viola
This paper focuses on two selected phonological patterns that appear unique to Saudi Arabian Sign Language (SASL). For both sections of this paper, the overall methodology is the same as that discussed in Stephen and Mathur (this volume), with some additional modifications tailored to the specific studies discussed here, which will be expanded…
Manrique Cordeje, M.E.
How does (mis)understanding work in conversation? Problems of understanding occur all the time in our everyday social life. How does miscommunication happen and how do we deal with it? This thesis reports on how sign language users manage to understand each other, based on a large Conversational
Baker-Ramos, Leslie K.
The purpose of this teacher inquiry is to explore the effects of signing and gesturing on the expressive language development of non-verbal children. The first phase of my inquiry begins with the observations of several non-verbal students with various etiologies in three different educational settings. The focus of these observations is to…
Stoll, Chloé; Palluel-Germain, Richard; Caldara, Roberto; Lao, Junpeng; Dye, Matthew W. G.; Aptel, Florent; Pascalis, Olivier
Previous research has suggested that early deaf signers differ in face processing. Which aspects of face processing are changed and the role that sign language may have played in that change are however unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing…
This dissertation investigates the expression of spatial relationships in German Sign Language (Deutsche Gebärdensprache, DGS). The analysis focuses on linguistic expression in the spatial domain in two types of discourse: static scene description (location) and event narratives (location and
Hammer, A.; van den Bogaerde, B.; Cirillo, L.; Niemants, N.
We present a description of our didactic approach to train undergraduate sign language interpreters on their interpersonal and reflective skills. Based predominantly on the theory of role-space by Llewellyn-Jones and Lee (2014), we argue that dialogue settings require a dynamic role of the
Ana B. Fernández-Guerra
Several scholars have argued that translation is not a useful tool when acquiring a second or foreign language; since it provides a simplistic one-to-one relationship between the native and the foreign language, it can cause interference between them, and it is an artificial exercise that has nothing to do in a communicative approach to language teaching. Recent studies, however, show that, far from being useless, translation can be a great aid to foreign language learning. The aim of the present paper is twofold: (1) to summarize and assess the arguments that encourage the use of translation in the foreign language classroom, supporting the integration of several forms of translating; and (2) to present the results of a survey that focused on students’ perceptions and responses towards translation tasks and their effectiveness in foreign language acquisition. Results show that students’ attitudes were surprisingly positive for several reasons: translation is one of their preferred language learning tasks, it is motivating, it facilitates a deeper understanding of the form and content of the source language text, it increases learners’ awareness of the differences between both linguistic systems, it allows them to re-express their thoughts faster and easier, and it helps them acquire linguistic and cultural knowledge.
Instructors in 5 American Sign Language--English Interpreter Programs and 4 Deaf Studies Programs in Canada were interviewed and asked to discuss their experiences as educators. Within a qualitative research paradigm, their comments were grouped into a number of categories tied to the social construction of American Sign Language--English interpreters, such as learners' age and education and the characteristics of good citizens within the Deaf community. According to the participants, younger students were adept at language acquisition, whereas older learners more readily understood the purpose of lessons. Children of deaf adults were seen as more culturally aware. The participants' beliefs echoed the theories of P. Freire (1970/1970) that educators consider the reality of each student and their praxis and were responsible for facilitating student self-awareness. Important characteristics in the social construction of students included independence, an appropriate attitude, an understanding of Deaf culture, ethical behavior, community involvement, and a willingness to pursue lifelong learning.
Sündüz ÖZTÜRK KASAR
Full Text Available Among the literary genres, poetry is the one that resists translation the most. Creating a new and innovative language that breaks the usual rules of the standard language with brand-new uses and meanings is probably one of the most important goals of the poet. Poetry challenges the translator to capture not only original images, exceptional symbolism, and subjective connotations but also its musicality, rhythm, and measure. Faced with this revolutionary use of language, the translator needs a guide so as not to get lost in the labyrinths of the poetic universe. The universe of sound and meaning unique to each language and the incompatibility of these languages with each other make the task of the translator seem impossible. At this point, semiotics may function as a guide, opening up the mysteries of the universe built by the poet and giving clues as to how it can be conveyed in the target language. This allows us to suggest the cooperation of semiotics and translation. From this perspective, we aim to present a case study that exemplifies this cooperation. Our corpus comprises Shakespeare's Sonnet 130 and its Turkish and French translations. The study treats the translator as the receiver of the source text and the producer of the target text in the light of the Theory of Instances of Enunciation propounded by Jean-Claude Coquet. Further, through the Systematics of Designificative Tendencies propounded by Sündüz Öztürk Kasar, the study compares the translators' creations to the original sonnet to see the extent to which the balance of the original text's meaning and form is preserved in the translations and how skillfully and competently the signs that constitute the universe of meaning are transmitted in the target languages.
Meade, Gabriela; Midgley, Katherine J; Sevcikova Sehyr, Zed; Holcomb, Phillip J; Emmorey, Karen
In an implicit phonological priming paradigm, deaf bimodal bilinguals made semantic relatedness decisions for pairs of English words. Half of the semantically unrelated pairs had phonologically related translations in American Sign Language (ASL). As in previous studies with unimodal bilinguals, targets in pairs with phonologically related translations elicited smaller negativities than targets in pairs with phonologically unrelated translations within the N400 window. This suggests that the same lexicosemantic mechanism underlies implicit co-activation of a non-target language, irrespective of language modality. In contrast to unimodal bilingual studies that find no behavioral effects, we observed phonological interference, indicating that bimodal bilinguals may not suppress the non-target language as robustly. Further, there was a subset of bilinguals who were aware of the ASL manipulation (determined by debrief), and they exhibited an effect of ASL phonology in a later time window (700-900ms). Overall, these results indicate modality-independent language co-activation that persists longer for bimodal bilinguals. Copyright © 2017 Elsevier Inc. All rights reserved.
Ortega, Gerardo; Morgan, Gary
The present study administered a sign-repetition task at two points in time to hearing adult learners of British Sign Language and explored how each phonological parameter, sign complexity, and iconicity affected sign production over an 11-week (22-hour) instructional period. The results show that training improves articulation accuracy and that…
Holmer, Emil; Heimann, Mikael; Rudner, Mary
Strengthening the connections between sign language and written language may improve reading skills in deaf and hard-of-hearing (DHH) signing children. The main aim of the present study was to investigate whether computerized sign language-based literacy training improves reading skills in DHH signing children who are learning to read. Further,…
Full Text Available In signed and spoken language sentences, imperative mood and the corresponding speech acts, such as command, permission, or advice, can be distinguished by morphosyntactic structures, but also solely by prosodic cues, which are the focus of this paper. These cues can express paralinguistic mental states or grammatical meaning, and we show that in American Sign Language (ASL) they also exhibit the function, scope, and alignment of prosodic, linguistic elements of sign languages. The production and comprehension of prosodic facial expressions and temporal patterns can therefore shed light on how cues are grammaticalized in sign languages. They can also be informative about the formal semantic and pragmatic properties of imperative types, not only in ASL but also more broadly. This paper includes three studies: one of production (Study 1) and two of comprehension (Studies 2 and 3). In Study 1, six prosodic cues are analyzed in production: temporal cues of sign and hold duration, and non-manual cues including tilts of the head, head nods, widening of the eyes, and presence of mouthings. Results of Study 1 show that neutral sentences and commands are well distinguished from each other and from other imperative speech acts via these prosodic cues alone; there is more limited differentiation among explanation, permission, and advice. The comprehension of these five speech acts is investigated in Deaf ASL signers in Study 2, and in three additional groups in Study 3: Deaf signers of German Sign Language (DGS), hearing non-signers from the United States, and hearing non-signers from Germany. Results of Studies 2 and 3 show that the ASL group performs significantly better than the other 3 groups and that all groups perform above chance for all meaning types in comprehension. Language-specific knowledge, therefore, has a significant effect on identifying imperatives based on targeted cues. Command has the most cues associated with it and is the
Bosworth, Rain G.; Emmorey, Karen
Iconicity is a property that pervades the lexicon of many sign languages, including American Sign Language (ASL). Iconic signs exhibit a motivated, nonarbitrary mapping between the form of the sign and its meaning. We investigated whether iconicity enhances semantic priming effects for ASL and whether iconic signs are recognized more quickly than…
Senghas, A; Coppola, M
It has long been postulated that language is not purely learned, but arises from an interaction between environmental exposure and innate abilities. The innate component becomes more evident in rare situations in which the environment is markedly impoverished. The present study investigated the language production of a generation of deaf Nicaraguans who had not been exposed to a developed language. We examined the changing use of early linguistic structures (specifically, spatial modulations) in a sign language that has emerged since the Nicaraguan group first came together: in under two decades, sequential cohorts of learners systematized the grammar of this new sign language. We examined whether the systematicity being added to the language stems from children or adults; our results indicate that such changes originate in children aged 10 and younger. Thus, sequential cohorts of interacting young children collectively possess the capacity not only to learn, but also to create, language.
The objective of this effort was to develop a prototype, hand-held or body-mounted spoken language translator to assist military and law enforcement personnel in interacting with non-English-speaking people...
This meeting aims at providing an overview of the current theory and practice, exploring new directions and emerging trends, sharing good practice, and exchanging information regarding foreign languages, applied linguistics and translation. The meeting is an excellent opportunity for the presentation of current or previous language learning and translation projects funded by the European Commission or by other sources. Proposals are invited for one or more of the following topics, in any of t...
Bailes, Cynthia Neese; Erting, Lynne C.; Thumann-Prezioso, Carlene; Erting, Carol J.
This longitudinal case study examined the language and literacy acquisition of a Deaf child as mediated by her signing Deaf parents during her first three years of life. Results indicate that the parents' interactions with their child were guided by linguistic and cultural knowledge that produced an intuitive use of child-directed signing (CDSi)…
Hoemann, Harry W.; Kreske, Catherine M.
Describes a study that found, contrary to previous reports, that a strong, symmetrical release from proactive interference (PI) is the normal outcome for switches between American Sign Language (ASL) signs and English words and with switches between Manual and English alphabet characters. Subjects were college students enrolled in their first ASL…
Full Text Available Current views about language are dominated by the idea of arbitrary connections between linguistic form and meaning. However, if we look beyond the more familiar Indo-European languages and also include both spoken and signed language modalities, we find that motivated, iconic form-meaning mappings are, in fact, pervasive in language. In this paper, we review the different types of iconic mappings that characterize languages in both modalities, including the predominantly visually iconic mappings in signed languages. Having shown that iconic mappings are present across languages, we then proceed to review evidence showing that language users (signers and speakers) exploit iconicity in language processing and language acquisition. While not discounting the presence and importance of arbitrariness in language, we put forward the idea that iconicity also needs to be recognized as a general property of language, one that may serve to reduce the gap between linguistic form and conceptual representation, allowing the language system to hook up to motor and perceptual experience.
Axelsen, Holger Bock
We describe the translation techniques used for the code generation in a compiler from the high-level reversible imperative programming language Janus to the low-level reversible assembly language PISA. Our translation is both semantics preserving (correct), in that target programs compute exactly the same functions as their source programs (cleanly, with no extraneous garbage output), and efficient, in that target programs conserve the complexities of source programs. In particular, target programs only require a constant amount of temporary garbage space. The given translation methods are generic, and should be applicable to any (imperative) reversible source language described with reversible flowcharts and reversible updates. To our knowledge, this is the first compiler between reversible languages where the source and target languages were independently developed; the first exhibiting both…
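The key property of such a translation, that reversible source updates map to target instructions whose inverse is obtained by local inversion and reversal, can be illustrated with a toy sketch. This is our own illustration in Python with invented mnemonics, not the actual Janus-to-PISA translation:

```python
# Toy reversible translation sketch: Janus-style updates ("x += k",
# "x -= k") map to a reversible pseudo-assembly. A program is inverted
# by reversing the instruction list and swapping each ADD/SUB, so no
# garbage state is needed to run the computation backwards.
# (Mnemonics are illustrative, not real PISA instructions.)

def translate(stmts):
    """Translate (var, op, const) reversible updates to pseudo-assembly."""
    ops = {"+=": "ADD", "-=": "SUB"}
    return [(ops[op], var, k) for (var, op, k) in stmts]

def invert(code):
    """Local inversion: reverse the list, swap each op with its inverse."""
    swap = {"ADD": "SUB", "SUB": "ADD"}
    return [(swap[op], var, k) for (op, var, k) in reversed(code)]

def run(code, store):
    """Execute pseudo-assembly on a variable store (returns a new store)."""
    store = dict(store)
    for op, var, k in code:
        store[var] = store[var] + k if op == "ADD" else store[var] - k
    return store
```

Running a translated program and then its inversion restores the original store exactly, which mirrors the "no extraneous garbage output" property described in the abstract.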
Belenkova, Nataliya; Davtyan, Victoria
International cooperation in all professional settings makes translation a very important tool of interpersonal and professional relations of specialists in different domains. Training of undergraduates and graduates' translation skills in a special setting is included in the curriculum of non-linguistic higher education institutions and studied…
احمدی، شیخی
Like the sentence, the word combination is a central unit of syntax in Persian as well as in Russian and plays an important role in the sentence structures of these languages. Word combinations in Russian are divided into three categories: verbal, nominal, and adverbial combinations. The writers of this article have studied verbal combinations in Russian and their translation into Persian.
Batterbury, Sarah C. E.
Sign Language Peoples (SLPs) across the world have developed their own languages and visuo-gestural-tactile cultures embodying their collective sense of Deafhood (Ladd 2003). Despite this, most nation-states treat their respective SLPs as disabled individuals, favoring disability benefits, cochlear implants, and mainstream education over language…
As this passage suggests, there is extensive and growing literature, both in ... For instance, sign language mediates experience in a unique way, as of ... entail Deaf students studying together, in a setting not unlike that provided by residential ... of ASL as a foreign language option in secondary schools and universities.
L. Leeson; Dr. Beppie van den Bogaerde; Tobias Haug; C. Rathmann
This resource establishes European standards for sign languages for professional purposes in line with the Common European Framework of Reference for Languages (CEFR) and provides an overview of assessment descriptors and approaches. Drawing on preliminary work undertaken in adapting the CEFR to
My point of departure for this paper is that translation, so long neglected in foreign language teaching, can not only improve students' linguistic competences in both a foreign language and their mother tongue, but also their awareness of cultural and intercultural elements. It is a widespread popular assumption, among those not involved in…
Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan
The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.
Mann, Wolfgang; Peña, Elizabeth D; Morgan, Gary
This research explored the use of dynamic assessment (DA) for language-learning abilities in signing deaf children from deaf and hearing families. Thirty-seven deaf children, aged 6 to 11 years, were identified as either stronger (n = 26) or weaker (n = 11) language learners according to teacher or speech-language pathologist report. All children received 2 scripted, mediated learning experience sessions targeting vocabulary knowledge—specifically, the use of semantic categories that were carried out in American Sign Language. Participant responses to learning were measured in terms of an index of child modifiability. This index was determined separately at the end of the 2 individual sessions. It combined ratings reflecting each child's learning abilities and responses to mediation, including social-emotional behavior, cognitive arousal, and cognitive elaboration. Group results showed that modifiability ratings were significantly better for stronger language learners than for weaker language learners. The strongest predictors of language ability were cognitive arousal and cognitive elaboration. Mediator ratings of child modifiability (i.e., combined score of social-emotional factors and cognitive factors) are highly sensitive to language-learning abilities in deaf children who use sign language as their primary mode of communication. This method can be used to design targeted interventions.
Beal-Alvarez, Jennifer S.; Figueroa, Daileen M.
Two key areas of language development include semantic and phonological knowledge. Semantic knowledge relates to word and concept knowledge. Phonological knowledge relates to how language parameters combine to create meaning. We investigated signing deaf adults' and children's semantic and phonological sign generation via one-minute tasks,…
Cardin, Velia; Orfanidou, Eleni; Kästner, Lena; Rönnberg, Jerker; Woll, Bencie; Capek, Cheryl M; Rudner, Mary
The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine if brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer RTs and stronger activations in an action observation network in all participants and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.
Kristoffersen, Jette Hedegaard; Boye Niemela, Janne
The Danish Sign Language dictionary project aims at creating an electronic dictionary of the basic vocabulary of Danish Sign Language. One of many issues in compiling the dictionary has been to analyse the status of mouth patterns in Danish Sign Language and, consequently, to decide at which level...
Sign language test development is a relatively new field within sign linguistics, motivated by the practical need for assessment instruments to evaluate language development in different groups of learners (L1, L2). Due to the lack of research on the structure and acquisition of many sign languages, developing an assessment instrument poses…
Marshall, C. R.; Jones, A.; Fastelli, A.; Atkinson, J.; Botting, N.; Morgan, G.
Background: Deafness has an adverse impact on children's ability to acquire spoken languages. Signed languages offer a more accessible input for deaf children, but because the vast majority are born to hearing parents who do not sign, their early exposure to sign language is limited. Deaf children as a whole are therefore at high risk of language…
Napier, Jemina; Major, George; Ferrara, Lindsay; Johnston, Trevor
This paper reviews a sign language planning project conducted in Australia with deaf Auslan users. The Medical Signbank project utilised a cooperative language planning process to engage with the Deaf community and sign language interpreters to develop an online interactive resource of health-related signs, in order to address a gap in the health…
Isaacs, Gerald L.
A study was made of several dialects of the Beginner's All-purpose Symbolic Instruction Code (BASIC). The purpose was to determine if it was possible to identify a set of interactive BASIC dialects in which translatability between different members of the set would be high, if reasonable programing restrictions were imposed. It was first…
Fuentes, Mariana; Massone, Maria Ignacia; Fernandez-Viader, Maria del Pilar; Makotrinsky, Alejandro; Pulgarin, Francisca
Numeral-incorporating roots in the numeral systems of Argentine Sign Language (LSA) and Catalan Sign Language (LSC), as well as the main features of the number systems of both languages, are described and compared. Informants discussed the use of numerals and roots in both languages (in most cases in natural contexts). Ten informants took part in…
Full Text Available ... example, between a deaf person who can sign and an able person or a person with a different disability who cannot sign). METHODOLOGY: A signing avatar is set up to work together with a chatterbot. The chatterbot is a natural language dialogue interface ... are then offered in sign language as the replies are interpreted by a signing avatar, a life-like character that can reproduce human-like gestures and expressions. To make South African Sign Language (SASL) available digitally, computational models of the language...
Stamp, Rose; Schembri, Adam; Fenlon, Jordan; Rentelis, Ramas; Woll, Bencie; Cormier, Kearsy
This paper presents results from a corpus-based study investigating lexical variation in BSL. An earlier study investigating variation in BSL numeral signs found that younger signers were using a decreasing variety of regionally distinct variants, suggesting that levelling may be taking place. Here, we report findings from a larger investigation looking at regional lexical variants for colours, countries, numbers and UK placenames elicited as part of the BSL Corpus Project. Age, school location and language background were significant predictors of lexical variation, with younger signers using a more levelled variety. This change appears to be happening faster in particular sub-groups of the deaf community (e.g., signers from hearing families). Also, we find that for the names of some UK cities, signers from outside the region use a different sign than those who live in the region. PMID:24759673
Samir Abou El-Seoud
Full Text Available A handheld device, such as a cellular phone or a PDA, can be used in acquiring Sign Language (SL). The developed system uses graphic applications. The user uses the graphical system to view and to acquire knowledge about sign grammar and syntax based on the local vernacular particular to the country. This paper explores and exploits the possibility of developing a mobile system to help deaf and other people communicate and learn using handheld devices. The pedagogical assessment of the prototype application, which uses a recognition-based interface (e.g., images and videos), gave evidence that the mobile application is memorable and learnable. Additionally, considering primacy and recency effects in the interface design will improve memorability and learnability.
Full Text Available Korean deaf signers performed a number comparison task on pairs of Arabic digits. In their RT profiles, the expected magnitude effect was systematically modified by properties of number signs in Korean Sign Language in a culture-specific way (not observed in hearing and deaf Germans or hearing Chinese). We conclude that finger-based quantity representations are automatically activated even in simple tasks with symbolic input, although this may be irrelevant and even detrimental for task performance. These finger-based numerical representations are accessed in addition to another, more basic quantity system, which is evidenced by the magnitude effect. In sum, these results are inconsistent with models assuming only one single amodal representation of numerical quantity.
Fitzpatrick, Elizabeth M; Hamel, Candyce; Stevens, Adrienne; Pratt, Misty; Moher, David; Doucet, Suzanne P; Neuss, Deirdre; Bernstein, Anita; Na, Eunjung
Permanent hearing loss affects 1 to 3 per 1000 children and interferes with typical communication development. Early detection through newborn hearing screening and hearing technology provide most children with the option of spoken language acquisition. However, no consensus exists on optimal interventions for spoken language development. To conduct a systematic review of the effectiveness of early sign and oral language intervention compared with oral language intervention only for children with permanent hearing loss. An a priori protocol was developed. Electronic databases (eg, Medline, Embase, CINAHL) from 1995 to June 2013 and gray literature sources were searched. Studies in English and French were included. Two reviewers screened potentially relevant articles. Outcomes of interest were measures of auditory, vocabulary, language, and speech production skills. All data collection and risk of bias assessments were completed and then verified by a second person. Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) was used to judge the strength of evidence. Eleven cohort studies met inclusion criteria, of which 8 included only children with severe to profound hearing loss with cochlear implants. Language development was the most frequently reported outcome. Other reported outcomes included speech and speech perception. Several measures and metrics were reported across studies, and descriptions of interventions were sometimes unclear. Very limited, and hence insufficient, high-quality evidence exists to determine whether sign language in combination with oral language is more effective than oral language therapy alone. More research is needed to supplement the evidence base. Copyright © 2016 by the American Academy of Pediatrics.
Full Text Available .../detail.shtml?i=41 Eberius, Wolfram (2008): Multimodale Erweiterung und Distribution von Digital Talking Books [Multimodal Extension and Distribution of Digital Talking Books]. Germany: Technische Universität Dresden. Fédération Internationale de Football Association (2008): Laws of the Game 2008/2009. Switzerland: FIFA ... are further discussed that will influence the design of future DAISY standards. 2.1 Creation of Sign Language Content: To create a full-text/full-audio and full-text/full-video DAISY test-book, the original content of "Laws of the Game 2008/2009" (FIFA...
Full Text Available Motivated by a paradoxical corollary of ambiguities in legal documents and especially in contract texts, the current paper underpins a dichotomy approach to unintended ambiguities, aiming to establish a referential framework for the occurrence rate of translation ambiguities within the legal language nomenclature. The research focus is on a twofold situation, since ambiguities may, on the one hand, arise during the translation process, generated by the translator's lack of competence, i.e. inadequate use of English with regard to the special nature of legal language, or, on the other hand, they may simply be transferred from the source language into the target language without the potential ambiguity even being noticed, i.e. culture-bound ambiguities. Hence, the paper proposes a contrastive analysis in order to localize the occurrence of lexical, structural, and socio-cultural ambiguities triggered by the use of the term performance and its Romanian equivalents in a number of sales contracts.
Despite the current need for reliable and valid test instruments in different countries in order to monitor the sign language acquisition of deaf children, very few tests are commercially available that offer strong evidence for their psychometric properties. This mirrors the current state of affairs for many sign languages, where very little…
Ramapriyan, H. K.; Young, K.
Translator program (OCCULT) assists non-computer-oriented users in setting up and submitting jobs for complex ORSER system. ORSER is collection of image processing programs for analyzing remotely sensed data. OCCULT is designed for those who would like to use ORSER but cannot justify acquiring and maintaining necessary proficiency in Remote Job Entry Language, Job Control Language, and control-card formats. OCCULT is written in FORTRAN IV and OS Assembler for interactive execution.
McKee, Michael M; Winters, Paul C; Sen, Ananda; Zazove, Philip; Fiscella, Kevin
Deaf American Sign Language (ASL) users comprise a linguistic minority population with poor health care access due to communication barriers and low health literacy. Potentially, these health care barriers could increase Emergency Department (ED) use. To compare ED use between deaf and non-deaf patients. A retrospective cohort from medical records. The sample was derived from 400 randomly selected charts (200 deaf ASL users and 200 hearing English speakers) from an outpatient primary care health center with a high volume of deaf patients. Abstracted data included patient demographics, insurance, health behavior, and ED use in the past 36 months. Deaf patients were more likely to be never smokers and be insured through Medicaid. In an adjusted analysis, deaf individuals were significantly more likely to use the ED (odds ratio [OR], 1.97; 95% confidence interval [CI], 1.11-3.51) over the prior 36 months. Deaf American Sign Language users appear to be at greater odds for elevated ED utilization when compared to the general hearing population. Efforts to further understand the drivers for increased ED utilization among deaf ASL users are much needed. Copyright © 2015 Elsevier Inc. All rights reserved.
Full Text Available Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of the Arabic Sign Language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training and are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
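The property the abstract highlights, non-iterative, closed-form training, can be sketched in a few lines: expand inputs into polynomial terms, then fit one-vs-all linear weights by least squares. This is our own illustrative code, not the authors' ArSL system or its feature set:

```python
# Minimal polynomial-classifier sketch: quadratic feature expansion plus
# a one-vs-all least-squares fit. Training is a single linear solve
# (no iterative optimization), which is the advantage the abstract cites.
import numpy as np
from itertools import combinations_with_replacement

def poly_expand(X, degree=2):
    """Expand rows of X into all monomials up to the given degree."""
    cols = [np.ones(len(X))]                      # bias term
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(X.shape[1]), d):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(cols)

def train(X, y, n_classes, degree=2):
    """Fit weights mapping expanded features to one-hot targets."""
    P = poly_expand(X, degree)
    T = np.eye(n_classes)[y]                      # one-hot class targets
    W, *_ = np.linalg.lstsq(P, T, rcond=None)     # closed-form solve
    return W

def predict(W, X, degree=2):
    """Return per-class scores; the argmax is the predicted class."""
    return poly_expand(X, degree) @ W
```

Because the degree-2 expansion includes cross terms such as x1*x2, even a non-linearly-separable pattern like XOR is handled by the linear solve in the expanded space.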
Health care providers commonly discuss depressive symptoms with clients, enabling earlier intervention. Such discussions rarely occur between providers and Deaf clients. Most culturally Deaf adults experience early-onset hearing loss, self-identify as part of a unique culture, and communicate in the visual language of American Sign Language (ASL). Communication barriers abound, and depression screening instruments may be unreliable. To train and use ASL interpreters for a qualitative study describing depressive symptoms among Deaf adults. Training included research versus community interpreting. During data collection, interpreters translated to and from voiced English and ASL. Training eliminated potential problems during data collection. Unexpected issues included participants asking for "my interpreter" and worrying about confidentiality or friendship in a small community. Lessons learned included the value of careful training of interpreters prior to initiating data collection, including resolution of possible role conflicts and ensuring conceptual equivalence in real-time interpreting.
Montse Corrius Gimbert
Full Text Available If the process of translating is not at all simple, the process of translating an audiovisual text is still more complex. Apart from technical problems such as lip synchronisation, there are other factors to be considered, such as the use of language and the textual structures deemed appropriate to the channel of communication. Bearing in mind that most of the films we continually see on our screens were and are produced in the United States, there is an increasing need to translate them into the different languages of the world. But sometimes the source audiovisual text contains more than one language, and thus a new problem arises: the translators face additional difficulties in translating this "third language" (language or dialect) into the corresponding target culture. There are many films containing two languages in the original version, but in this paper we will focus mainly on three films: Butch Cassidy and the Sundance Kid (1969), Raid on Rommel (1999), and Blade Runner (1982). This paper aims at briefly illustrating different solutions which may be applied when we come across a "third language".
Xue, Zhe; Song, Guan-Yang; Liu, Xin; Zhang, Hui; Wu, Guan; Qian, Yi; Feng, Hua
The purpose of the study was to quantify the patellar J sign using traditional computed tomography (CT) scans. Fifty-three patients (fifty-three knees) who suffered from recurrent patellar instability were included and analyzed. The patellar J sign was evaluated pre-operatively during active knee flexion and extension. It was defined as positive when there was obvious lateral patellar translation, and negative when there was not. The CT scans were performed in all patients with full knee extension, and the parameters, including bisect offset index (BOI), patellar-trochlear-groove (PTG) distance, and patellar lateral tilt angle (PLTA), were measured on the axial slices. All three parameters were compared between the J sign-positive group (study group) and the J sign-negative group (control group). In addition, the optimal thresholds of the three CT scan parameters for predicting a positive patellar J sign were determined with receiver operating characteristic (ROC) curves, and the diagnostic values were assessed by the area under the curve (AUC). Among the fifty-three patients, thirty-seven (70%) showed obvious lateral patellar translation and were defined as the positive J sign (study) group; the remaining sixteen (30%), who showed no lateral translation, were defined as the negative J sign (control) group. The mean values of the three CT parameters in the study group were all significantly larger than in the control group, including BOI (121 ± 28% vs 88 ± 12%, P = 0.038) and PTG distance (5.2 ± 6.6 mm vs -4.4 ± 5.2 mm). The optimal BOI threshold for predicting a positive patellar J sign was 97.5% (sensitivity = 83.3%, specificity = 87.5%). In this study, the prevalence of a positive patellar J sign was 70%. The BOI measured from axial CT scans of the knee joint can be used as an appropriate predictor to differentiate the positive from the negative J sign, highlighting that excessive lateral patellar translation on axial CT indicates a positive J sign.
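The abstract above reports an "optimal threshold" for a continuous predictor (the BOI) derived from an ROC curve. As an illustrative sketch only, with made-up numbers rather than the study's measurements, a common way to pick such a cut-off is to maximise Youden's J statistic (sensitivity + specificity - 1):

```python
# Sketch: choosing an optimal cut-off for a continuous predictor from an
# ROC curve via Youden's J statistic. All data below are synthetic.

def roc_points(values, labels):
    """Yield (threshold, sensitivity, specificity) for each candidate cut-off."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for thr in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= thr and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < thr and y == 0)
        points.append((thr, tp / pos, tn / neg))
    return points

def best_threshold(values, labels):
    """Return the cut-off maximising Youden's J = sensitivity + specificity - 1."""
    return max(roc_points(values, labels), key=lambda p: p[1] + p[2] - 1)

# Synthetic bisect-offset-index values (%); 1 = positive J sign, 0 = negative.
boi    = [80, 85, 90, 95, 100, 105, 110, 120, 130, 150]
j_sign = [0,  0,  0,  0,  1,   0,   1,   1,   1,   1]

thr, sens, spec = best_threshold(boi, j_sign)
print(f"threshold={thr}, sensitivity={sens:.2f}, specificity={spec:.2f}")
```

The AUC reported in such studies summarises the same curve; the threshold is simply the point on it that best trades sensitivity against specificity.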
Lieberman, Amy M; Borovsky, Arielle; Mayberry, Rachel I
Prediction during sign language comprehension may enable signers to integrate linguistic and non-linguistic information within the visual modality. In two eyetracking experiments, we investigated American Sign Language (ASL) semantic prediction in deaf adults and children (aged 4-8 years). Participants viewed ASL sentences in a visual world paradigm in which the sentence-initial verb was either neutral or constrained relative to the sentence-final target noun. Adults and children made anticipatory looks to the target picture before the onset of the target noun in the constrained condition only, showing evidence for semantic prediction. Crucially, signers alternated gaze between the stimulus sign and the target picture only when the sentential object could be predicted from the verb. Signers therefore engage in prediction by optimizing visual attention between divided linguistic and referential signals. These patterns suggest that prediction is a modality-independent process, and theoretical implications are discussed.
The present framework is developed under contract with the Smarter Balanced Assessment Consortium (SBAC) as a conceptual and methodological tool for guiding the reasoning and actions of contractors in charge of developing and providing test translation accommodations for English language learners. The framework addresses important challenges in…
Gordin, Michael D
Using the cases of three Russian chemistry textbooks from the 1860s--authored by Friedrich Beilstein, A. M. Butlerov, and D. I. Mendeleev--this essay analyzes their contemporary translation into German and the implications of their divergent histories for scholars' understanding of the processes of credit accrual and the choices of languages of science.
Carter, S.; Monz, C.
This article describes a method that successfully exploits syntactic features for n-best translation candidate reranking using perceptrons. We motivate the utility of syntax by demonstrating the superior performance of parsers over n-gram language models in differentiating between statistical machine translation candidates.
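The reranking idea above can be sketched with a minimal structured perceptron: score each candidate in an n-best list by a weighted feature sum, and when the top-scoring candidate is not the oracle (reference-closest) one, promote the oracle's features and demote the prediction's. The feature names and data here are toy values, not the paper's features:

```python
# Sketch of perceptron-based n-best reranking. Features and data are
# illustrative assumptions, not drawn from the cited work.

def score(weights, feats):
    """Linear score of one candidate under the current weight vector."""
    return sum(weights.get(f, 0.0) * v for f, v in feats.items())

def perceptron_rerank_train(nbest_lists, oracle_idx, epochs=5):
    """nbest_lists: list of candidate lists, each candidate a feature dict.
    oracle_idx: index of the best (reference-closest) candidate per list."""
    w = {}
    for _ in range(epochs):
        for cands, gold in zip(nbest_lists, oracle_idx):
            pred = max(range(len(cands)), key=lambda i: score(w, cands[i]))
            if pred != gold:
                # Promote the oracle candidate's features, demote the prediction's.
                for f, v in cands[gold].items():
                    w[f] = w.get(f, 0.0) + v
                for f, v in cands[pred].items():
                    w[f] = w.get(f, 0.0) - v
    return w

# Toy example: two features, e.g. a parser-derived score and an n-gram LM score.
nbest = [
    [{"syntax": 0.2, "lm": 0.9}, {"syntax": 0.8, "lm": 0.5}],
    [{"syntax": 0.9, "lm": 0.4}, {"syntax": 0.1, "lm": 0.8}],
]
oracle = [1, 0]  # in both lists the syntactically better candidate wins
w = perceptron_rerank_train(nbest, oracle)
best = max(nbest[0], key=lambda c: score(w, c))
```

After training, the learned weights favour the syntax feature, so reranking picks the candidate the oracle preferred.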
Visser-Bochane, Margot I.; Gerrits, Ellen; van der Schans, Cees P.; Reijneveld, Sijmen A.; Luinge, Margreet R.
Background: Atypical speech and language development is one of the most common developmental difficulties in young children. However, which clinical signs characterize atypical speech-language development at what age is not clear. Aim: To achieve a national and valid consensus on clinical signs and red flags (i.e. most urgent clinical signs) for…
Thompson, Robin L.; Vinson, David P.; Vigliocco, Gabriella
Signed languages exploit iconicity (the transparent relationship between meaning and form) to a greater extent than spoken languages, where it is largely limited to onomatopoeia. In a picture-sign matching experiment measuring reaction times, the authors examined the potential advantage of iconicity both for 1st- and 2nd-language learners of…
Guimarães, Cayley; Oliveira Machado, Milton César; Fernandes, Sueli F.
Deaf people use Sign Language (SL) for intellectual development, communications and other human activities that are mediated by language--such as the expression of complex and abstract thoughts and feelings; and for literature, culture and knowledge. The Brazilian Sign Language (Libras) is a complete linguistic system in the visual-spatial modality,…
Lin, Christina Mien-Chun; Gerner de Garcia, Barbara; Chen-Pichler, Deborah
There are over 100 languages in China, including Chinese Sign Language. Given the large population and geographical dispersion of the country's deaf community, sign variation is to be expected. Language barriers due to lexical variation may exist for deaf college students in China, who often live outside their home regions. In presenting an…
This paper reports the design and analysis of an American Sign Language (ASL) alphabet translation system implemented in hardware using a Field-Programmable Gate Array. The system process consists of three stages, the first being communication with the neuromorphic camera (also called a Dynamic Vision Sensor, DVS) using the Universal Serial Bus protocol. The feature extraction of the events generated by the DVS is the second stage, consisting of digital image processing algorithms developed in software, which aim to reduce redundant information and prepare the data for the third stage. The last stage is the classification of the ASL alphabet, achieved with a single artificial neural network implemented in digital hardware for higher speed. The overall result is a classification system using the contours of ASL signs, fully implemented in a reconfigurable device. The experimental results consist of a comparative analysis of the recognition rate among the alphabet signs using the neuromorphic camera, in order to verify the proper operation of the digital image processing algorithms. In experiments performed with 720 samples of 24 signs, a recognition accuracy of 79.58% was obtained.
Al-khulaidi, Rami Ali; Akmeliawati, Rini
A speech recognition system is a front-end and back-end process that receives an audio signal uttered by a speaker and converts it into a text transcription. Speech systems can be used in several fields, including therapeutic technology, education, social robotics and computer entertainment. In control tasks, which are the intended use of the proposed system, speed of performance and response matter, as the system should integrate with other control platforms, such as voice-controlled robots. Hence the need for flexible platforms that can be easily edited to fit the functionality of their surroundings, unlike software such as MATLAB and Phoenix that requires recording audio and multiple training passes for every entry. In this paper, a speech recognition system for the Malay language is implemented using Microsoft Visual Studio C#. Ninety Malay phrases were tested by ten speakers of both genders in different contexts. The results show that the overall accuracy (calculated from a confusion matrix) is a satisfactory 92.69%.
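The overall accuracy figure quoted above is the standard diagonal-over-total computation on a confusion matrix. A minimal sketch, using a small made-up matrix rather than the study's counts:

```python
# Sketch: overall accuracy from a confusion matrix. The matrix below is a
# toy 3-phrase example, not the study's 90-phrase data.

def accuracy(confusion):
    """Overall accuracy = sum of diagonal cells / sum of all cells."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Rows = actual phrase, columns = recognised phrase (toy counts).
cm = [
    [9, 1, 0],
    [0, 8, 2],
    [1, 0, 9],
]
print(f"accuracy = {accuracy(cm):.2%}")
```

Per-phrase recall and precision come from the same matrix (row-wise and column-wise normalisation respectively), which is why confusion matrices are the usual way to report recognition results.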
Padden, Carol; Hwang, So-One; Lepic, Ryan; Seegers, Sharon
When naming certain hand-held, man-made tools, American Sign Language (ASL) signers exhibit either of two iconic strategies: a handling strategy, where the hands show holding or grasping an imagined object in action, or an instrument strategy, where the hands represent the shape or a dimension of the object in a typical action. The same strategies are also observed in the gestures of hearing nonsigners identifying pictures of the same set of tools. In this paper, we compare spontaneously created gestures from hearing nonsigning participants to commonly used lexical signs in ASL. Signers and gesturers were asked to respond to pictures of tools and to video vignettes of actions involving the same tools. Nonsigning gesturers overwhelmingly prefer the handling strategy for both the Picture and Video conditions. Nevertheless, they use more instrument forms when identifying tools in pictures, and more handling forms when identifying actions with tools. We found that ASL signers generally favor the instrument strategy when naming tools, but when describing tools being used by an actor, they are significantly more likely to use more handling forms. The finding that both gesturers and signers are more likely to alternate strategies when the stimuli are pictures or video suggests a common cognitive basis for differentiating objects from actions. Furthermore, the presence of a systematic handling/instrument iconic pattern in a sign language demonstrates that a conventionalized sign language exploits the distinction for grammatical purpose, to distinguish nouns and verbs related to tool use. Copyright © 2014 Cognitive Science Society, Inc.
Caselli, Naomi K; Pyers, Jennie E
Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children's acquisition of new words, spoken or signed. We asked whether iconicity's prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children's productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure between signed and spoken language, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.
Debevc, Matjaž; Milošević, Danijela; Kožuh, Ines
One important theme in captioning is whether the implementation of captions in individual sign language interpreter videos can positively affect viewers' comprehension when compared with sign language interpreter videos without captions. In our study, an experiment was conducted using four video clips with information about everyday events. Fifty-one deaf and hard of hearing sign language users alternately watched the sign language interpreter videos with, and without, captions. Afterwards, they answered ten questions. The results showed that the presence of captions positively affected their rates of comprehension, which increased by 24% among deaf viewers and 42% among hard of hearing viewers. The most obvious differences in comprehension between watching sign language interpreter videos with and without captions were found for the subjects of hiking and culture, where comprehension was higher when captions were used. The results led to suggestions for the consistent use of captions in sign language interpreter videos in various media.
Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D
The aim of the present study was to characterize effects of learning a sign language on the processing of a spoken language. Specifically, audiovisual phoneme comprehension was assessed before and after 13 weeks of sign language exposure. L2 ASL learners performed this task in the fMRI scanner. Results indicated that L2 American Sign Language (ASL) learners' behavioral classification of the speech sounds improved with time compared to hearing nonsigners. Results indicated increased activation in the supramarginal gyrus (SMG) after sign language exposure, which suggests concomitant increased phonological processing of speech. A multiple regression analysis indicated that learners' ratings of co-sign speech use and lipreading ability were correlated with SMG activation. This pattern of results indicates that the increased use of mouthing and possibly lipreading during sign language acquisition may concurrently improve audiovisual speech processing in budding hearing bimodal bilinguals. Copyright © 2015 Elsevier B.V. All rights reserved.
Young, Alys; Oram, Rosemary; Dodds, Claire; Nassimi-Green, Catherine; Belk, Rachel; Rogers, Katherine; Davies, Linda; Lovell, Karina
Internationally, few clinical trials have involved Deaf people who use a signed language and none have involved BSL (British Sign Language) users. Appropriate terminology in BSL for key concepts in clinical trials that are relevant to recruitment and participant information materials, and that would support informed consent, does not exist. Barriers to conceptual understanding of trial participation and sources of misunderstanding relevant to the Deaf community are undocumented. A qualitative, community participatory exploration of trial terminology, including conceptual understanding of 'randomisation', 'trial', 'informed choice' and 'consent', was facilitated in BSL involving 19 participants in five focus groups. Data were video-recorded and analysed in the source language (BSL) using a phenomenological approach. Six necessary conditions for developing trial information to support comprehension were identified. These included: developing appropriate expressions and terminology from a community basis, rather than testing out previously derived translations from a different language; paying attention to language-specific features which support the best means of expression (in the case of BSL, expectations of specificity, verb directionality, handshape); bilingual influences on comprehension; deliberate orientation of information to avoid misunderstanding, not just to promote accessibility; sensitivity to barriers to discussion about intelligibility of information that are cultural and social in origin, rather than linguistic; and the importance of using contemporary language-in-use, rather than jargon-free or plain language, to support meaningful understanding. The study reinforces the ethical imperative to ensure trial participants who are Deaf are provided with optimum resources to understand the implications of participation and to make an informed choice. Results are relevant to the development of trial information in other signed languages, as well as in spoken/written languages.
Witko, Joanne; Boyles, Pauline; Smiler, Kirsten; McKee, Rachel
The research described was undertaken as part of a Sub-Regional Disability Strategy 2017-2022 across the Wairarapa, Hutt Valley and Capital and Coast District Health Boards (DHBs). The aim was to investigate deaf New Zealand Sign Language (NZSL) users' quality of access to health services. Findings have formed the basis for developing a 'NZSL plan' for DHBs in the Wellington sub-region. Qualitative data was collected from 56 deaf participants and family members about their experiences of healthcare services via focus group, individual interviews and online survey, which were thematically analysed. Contextual perspective was gained from 57 healthcare professionals at five meetings. Two professionals were interviewed, and 65 staff responded to an online survey. A deaf steering group co-designed the framework and methods, and validated findings. Key issues reported across the health system include: inconsistent interpreter provision; lack of informed consent for treatment via communication in NZSL; limited access to general health information in NZSL and the reduced ability of deaf patients to understand and comply with treatment options. This problematic communication with NZSL users echoes international evidence and other documented local evidence for patients with limited English proficiency. Deaf NZSL users face multiple barriers to equitable healthcare, stemming from linguistic and educational factors and inaccessible service delivery. These need to be addressed through policy and training for healthcare personnel that enable effective systemic responses to NZSL users. Deaf participants emphasise that recognition of their identity as members of a language community is central to improving their healthcare experiences.
Olive, Joseph P; McCary, John
This comprehensive handbook, written by leading experts in the field, details the groundbreaking research conducted under the breakthrough GALE program (Global Autonomous Language Exploitation) of the Defense Advanced Research Projects Agency (DARPA), while placing it in the context of previous research in the fields of natural language and signal processing, artificial intelligence and machine translation. The most fundamental contrast between GALE and its predecessor programs was its holistic integration of previously separate or sequential processes. In earlier language research programs…
Yusa, Noriaki; Kim, Jungho; Koizumi, Masatoshi; Sugiura, Motoaki; Kawashima, Ryuta
Children naturally acquire a language in social contexts where they interact with their caregivers. Indeed, research shows that social interaction facilitates lexical and phonological development at the early stages of child language acquisition. It is not clear, however, whether the relationship between social interaction and learning applies to adult second language acquisition of syntactic rules. Does learning second language syntactic rules through social interactions with a native speaker or without such interactions impact behavior and the brain? The current study aims to answer this question. Adult Japanese participants learned a new foreign language, Japanese sign language (JSL), either through a native deaf signer or via DVDs. Neural correlates of acquiring new linguistic knowledge were investigated using functional magnetic resonance imaging (fMRI). The participants in each group were indistinguishable in terms of their behavioral data after the instruction. The fMRI data, however, revealed significant differences in the neural activities between two groups. Significant activations in the left inferior frontal gyrus (IFG) were found for the participants who learned JSL through interactions with the native signer. In contrast, no cortical activation change in the left IFG was found for the group who experienced the same visual input for the same duration via the DVD presentation. Given that the left IFG is involved in the syntactic processing of language, spoken or signed, learning through social interactions resulted in an fMRI signature typical of native speakers: activation of the left IFG. Thus, broadly speaking, availability of communicative interaction is necessary for second language acquisition and this results in observed changes in the brain.
Cushman, R.M.; Burtis, M.D.
This report contains English-translated abstracts of important Chinese-language literature concerning global climate change for the years 1995-1998. This body of literature includes the topics of adaptation, ancient climate change, climate variation, the East Asia monsoon, historical climate change, impacts, modeling, and radiation and trace-gas emissions. In addition to the biological citations and abstracts translated into English, this report presents the original citations and abstracts in Chinese. Author and title indexes are included to assist the reader in locating abstracts of particular interest.
Emmorey, Karen; McCullough, Stephen; Mehta, Sonya; Grabowski, Thomas J.
To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H₂¹⁵O PET study of sign and spoken word production (picture naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface-level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.
Humphries, Tom; Kushalnagar, Poorna; Mathur, Gaurav; Napoli, Donna Jo; Padden, Carol; Rathmann, Christian; Smith, Scott
There is no evidence that learning a natural human language is cognitively harmful to children. To the contrary, multilingualism has been argued to be beneficial to all. Nevertheless, many professionals advise the parents of deaf children that their children should not learn a sign language during their early years, despite strong evidence across many research disciplines that sign languages are natural human languages. Their recommendations are based on a combination of misperceptions about (1) the difficulty of learning a sign language, (2) the effects of bilingualism, and particularly bimodalism, (3) the bona fide status of languages that lack a written form, (4) the effects of a sign language on acquiring literacy, (5) the ability of technologies to address the needs of deaf children and (6) the effects that use of a sign language will have on family cohesion. We expose these misperceptions as based in prejudice and urge institutions involved in educating professionals concerned with the healthcare, raising and educating of deaf children to include appropriate information about first language acquisition and the importance of a sign language for deaf children. We further urge such professionals to advise the parents of deaf children properly, which means to strongly advise the introduction of a sign language as soon as hearing loss is detected. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Maarif, H. A.; Akmeliawati, R.; Gunawan, T. S.; Shafie, A. A.
A sign language synthesizer is a method to visualize sign language movement from spoken language. Sign language (SL) is one of the means used by HSI people to communicate with hearing people. Unfortunately, the number of people, including HSI people, who are familiar with sign language is very limited, which causes difficulties in communication between hearing people and HSI people. Sign language is not only hand movement but also facial expression; the two elements complement each other. The hand movement carries the meaning of each sign, and the facial expression conveys the emotion of the person. Generally, a sign language synthesizer will recognize the spoken language using speech recognition, the grammatical processing will involve a context-free grammar, and a 3D synthesizer will take part by involving a recorded avatar. This paper analyzes and compares the existing techniques for developing a sign language synthesizer, leading to the IIUM Sign Language Synthesizer.
R, Elakkiya; K, Selvamani
Subunit segmenting and modelling in medical sign language is one of the important studies in linguistic-oriented and vision-based Sign Language Recognition (SLR). Many efforts were made in the past to identify functional subunits from the perspective of linguistic syllables, but implementing such subunit extraction using syllables is not feasible with real-world computer vision techniques. Moreover, present recognition systems are designed such that they detect signer-dependent actions only under restricted, laboratory conditions. This research paper aims at solving these two important issues: (1) subunit extraction and (2) signer-independent action in visual sign language recognition. Subunit extraction involves the sequential and parallel breakdown of sign gestures without any prior knowledge of syllables or the number of subunits. A novel Bayesian Parallel Hidden Markov Model (BPaHMM) is introduced for subunit extraction to combine the features of manual and non-manual parameters and yield better results in the classification and recognition of signs. Signer-independent action aims at using a single web camera for different signer behaviour patterns and for cross-signer validation. Experimental results have shown that the proposed signer-independent subunit-level modelling for sign language classification and recognition improves on other existing works.
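The building block behind HMM-based subunit models such as the BPaHMM described above is the standard forward algorithm, which scores an observation sequence under a hidden Markov model. A minimal sketch with toy probabilities (the state and symbol names are illustrative assumptions, not the paper's model):

```python
# Sketch: forward algorithm for HMM sequence likelihood. All probabilities
# below are toy values for illustration.

def forward(obs, start, trans, emit):
    """Return P(obs) under an HMM with given start, transition and emission
    probabilities. States and observation symbols are integer-indexed."""
    n_states = len(start)
    # Initialise with the first observation.
    alpha = [start[s] * emit[s][obs[0]] for s in range(n_states)]
    # Recurse: sum over all predecessor states, then emit the next symbol.
    for o in obs[1:]:
        alpha = [
            sum(alpha[sp] * trans[sp][s] for sp in range(n_states)) * emit[s][o]
            for s in range(n_states)
        ]
    return sum(alpha)

# Two hidden states (say, a "hold" vs a "move" subunit) and two symbols.
start = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit  = [[0.9, 0.1], [0.2, 0.8]]
p = forward([0, 1, 0], start, trans, emit)
```

A subunit-level recogniser would train one such model per subunit and pick the model (or model sequence) with the highest likelihood for a segmented gesture; the parallel variant in the paper additionally couples channels for manual and non-manual features.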
Kehnig, Kh.; Lehttsch, Yu.; Nefed'eva, L.S.; Shtiller, G.
Approaches to the creation of specialized languages and their translators are given. The DENOT system for developing translators from various specialized languages is described. The system was used to translate the STS (spectra treatment system) language into the FORTRAN language. The STS language was realized with the help of DENOT on the BESM-6 computer. The DENOT system, installed on various computers, provides for simple and rapid transition from one computer to another.
The American Sign Language Sentence Reproduction Test (ASL-SRT) requires the precise reproduction of a series of ASL sentences increasing in complexity and length. Error analyses of such tasks provide insight into working memory and scaffolding processes. Data were collected from three groups expected to differ in fluency: deaf children, deaf adults and hearing adults, all users of ASL. Quantitative (correct/incorrect recall) and qualitative error analyses were performed. Percent correct on the reproduction task supports its sensitivity to fluency, as test performance clearly differed across the three groups studied. A linguistic analysis of errors further documented differing strategies and biases across groups. Subjects' recall projected the affordances and constraints of deep linguistic representations to differing degrees, with subjects resorting to alternate processing strategies in the absence of linguistic knowledge. A qualitative error analysis allows us to capture generalizations about the relationship between error patterns and the cognitive scaffolding which governs the sentence reproduction process. Highly fluent signers and less fluent signers share common chokepoints on particular words in sentences. However, they diverge in heuristic strategy. Fluent signers, when they make an error, tend to preserve semantic details while altering morpho-syntactic domains. They produce syntactically correct sentences with meaning equivalent to the to-be-reproduced one, but these are not verbatim reproductions of the original sentence. In contrast, less fluent signers tend to use a more linear strategy, preserving lexical status and word ordering while omitting local inflections, and occasionally resorting to visuo-motoric imitation. Thus, whereas fluent signers readily use top-down scaffolding in their working memory, less fluent signers fail to do so. Implications for current models of working memory across spoken and signed modalities are discussed.
Sign language is a visual language used by deaf people. One difficulty of sign language recognition is that sign instances vary in both motion and shape in three-dimensional (3D) space. In this research, we use 3D depth information from hand motions, generated by Microsoft's Kinect sensor, and apply a hierarchical conditional random field (CRF) that recognizes hand signs from the hand motions. The proposed method uses a hierarchical CRF to detect candidate segments of signs using hand motions, and then a BoostMap embedding method to verify the hand shapes of the segmented signs. Experiments demonstrated that the proposed method could recognize signs from signed sentence data at a rate of 90.4%.
Hintz, Eric G.; Jones, Michael; Lawler, Jeannette; Bench, Nathan
A traditional accommodation for the deaf or hard-of-hearing in a planetarium show is some type of captioning system or a signer on the floor. Both of these have significant drawbacks given the nature of a planetarium show. Young audience members who are deaf likely don't have the reading skills needed to make a captioning system effective. A signer on the floor requires light, which can then splash onto the dome. We have examined the potential of using a Head-Mounted Display (HMD) to provide an American Sign Language (ASL) translation. Our preliminary test used a canned planetarium show with a pre-recorded sound track. Since many astronomical objects don't have official ASL signs, the signer had to use classifiers to describe the different objects. Since these are not official signs, these classifiers provided a way to test whether students were picking up the information using the HMD. We will present results that demonstrate that the use of HMDs is at least as effective as projecting a signer on the dome. This also showed that the HMD could provide the necessary accommodation for students for whom captioning was ineffective. We will also discuss the current effort to provide a live signer without the light splash effect and our early results on teaching effectiveness with HMDs. This work is partially supported by funding from the National Science Foundation grant IIS-1124548 and the Sorenson Foundation.
Lu, Aitao; Yu, Yanping; Niu, Jiaxin; Zhang, John X
The present study was carried out to investigate whether sign language structure plays a role in the processing of complex words (i.e., derivational and compound words), in particular, the delayed reading of complex words in deaf adolescents. Chinese deaf adolescents were found to respond faster to derivational words than to compound words for one-sign-structure words, but showed comparable performance for two-sign-structure words. For both derivational and compound words, response latencies to one-sign-structure words were shorter than to two-sign-structure words. These results provide strong evidence that the structure of sign language affects written word processing in Chinese. Additionally, differences between derivational and compound words in the one-sign-structure condition indicate that Chinese deaf adolescents acquire print morphological awareness. The results also showed that delayed word reading was found in derivational words with two signs (DW-2), compound words with one sign (CW-1), and compound words with two signs (CW-2), but not in derivational words with one sign (DW-1), with the delay being maximum in DW-2, medium in CW-2, and minimum in CW-1, suggesting that the structure of sign language has an impact on the delayed processing of Chinese written words in deaf adolescents. These results provide insight into how sign language structure affects written word processing and why it is delayed relative to hearing peers of the same age.
Aleksandra KAROVSKA RISTOVSKA
Aleksandra Karovska Ristovska, M.A. in special education and rehabilitation sciences, defended her doctoral thesis on 9 March 2014 at the Institute of Special Education and Rehabilitation, Faculty of Philosophy, University “Ss. Cyril and Methodius” in Skopje, before a commission composed of: Prof. Zora Jachova, PhD; Prof. Jasmina Kovachevikj, PhD; Prof. Ljudmil Spasov, PhD; Prof. Goran Ajdinski, PhD; Prof. Daniela Dimitrova Radojicikj, PhD. Macedonian Sign Language is a natural language used by the Deaf community in the Republic of Macedonia. The doctoral thesis analyzed the characteristics of Macedonian Sign Language (MSL): its phonology, morphology and syntax, and compared it with American Sign Language (ASL). William Stokoe was the first to research American Sign Language, beginning in the 1960s, and he laid the foundation for linguistic research on sign languages. The analysis of signs in Macedonian Sign Language was carried out according to Stokoe's parameters: location, hand shape and movement. Lexicostatistics showed that MSL and ASL belong to different language families. Despite this, they share some iconic signs, whose presence can be attributed to lexical borrowing. Phonologically, in both ASL and MSL, changing one of Stokoe's categories changes the meaning of the sign. Non-manual signs, which serve as grammatical markers in sign languages, are identical in ASL and MSL, as are the production of compounds, the production of plural forms, and verb inflection. The research showed that the most common word order in both ASL and MSL is SVO (subject-verb-object), while SOV and OVS orders occur only rarely. Questions and negative sentences are produced identically in ASL and MSL.
Kushalnagar, Poorna; Naturale, Joan; Paludneviciene, Raylene; Smith, Scott R.; Werfel, Emily; Doolittle, Richard; Jacobs, Stephen; DeCaro, James
To date, there have been efforts towards creating better health information access for Deaf American Sign Language (ASL) users. However, the usability of websites with access to health information in ASL has not been evaluated. Our paper focuses on the usability of four health websites that include ASL videos. We seek to obtain ASL users’ perspectives on the navigation of these ASL-accessible websites, finding the health information that they needed, and perceived ease of understanding ASL video content. ASL users (N=32) were instructed to find specific information on four ASL-accessible websites, and answered questions related to: 1) navigation to find the task, 2) website usability, and 3) ease of understanding ASL video content for each of the four websites. Participants also gave feedback on what they would like to see in an ASL health library website, including the benefit of added captioning and/or signer model to medical illustration of health videos. Participants who had lower health literacy had greater difficulty in finding information on ASL-accessible health websites. This paper also describes the participants’ preferences for an ideal ASL-accessible health website, and concludes with a discussion on the role of accessible websites in promoting health literacy in ASL users. PMID:24901350
One such mechanism is by embedding animated Sign Language in Web pages. This paper analyses the effectiveness and appropriateness of using this approach by embedding South African Sign Language in the South African National Accessibility Portal...
Costello, B.; Fernández, J.; Landa, A.; Möller de Quadros, R.
This paper examines the concept of a native language user and looks at the different definitions of native signer within the field of sign language research. A description of the deaf signing population in the Basque Country shows that the figure of 5-10% typically cited for deaf individuals born
Slovene Sign Language (SZJ) has as yet received little attention from linguists. This article presents some basic facts about SZJ, its history, current status, and a description of the Slovene Sign Language Corpus and Pilot Grammar (SIGNOR) project, which compiled and annotated a representative corpus of SZJ. Finally, selected quantitative data…
Vinson, David; Perniss, Pamela; Fox, Neil; Vigliocco, Gabriella
Previous studies show that reading sentences about actions leads to specific motor activity associated with actually performing those actions. We investigate how sign language input may modulate motor activation, using British Sign Language (BSL) sentences, some of which explicitly encode direction of motion, versus written English, where motion…
Haug, Tobias; Herman, Rosalind; Woll, Bencie
This paper presents the features of an online test framework for a receptive skills test that has been adapted, based on a British template, into different sign languages. The online test includes features that meet the needs of the different sign language versions. Features such as usability of the test, automatic saving of scores, and score…
Beal-Alvarez, Jennifer S.
This article presents receptive and expressive American Sign Language skills of 85 students, 6 through 22 years of age at a residential school for the deaf using the American Sign Language Receptive Skills Test and the Ozcaliskan Motion Stimuli. Results are presented by ages and indicate that students' receptive skills increased with age and…
Thompson, Robin L.; Vinson, David P.; Vigliocco, Gabriella
Signed languages exploit the visual/gestural modality to create iconic expression across a wide range of basic conceptual structures in which the phonetic resources of the language are built up into an analogue of a mental image (Taub, 2001). Previously, we demonstrated a processing advantage when iconic properties of signs were made salient in a…
Abstract – The current translation market places growing emphasis on technological tools that assist or even replace the translator in quickly producing adequate target texts. As a person involved in cultural processes that affect public discourse and society at large, both as a practising literary translator and as a teacher of translation, I feel that academia should not only pursue market-oriented translation skills, such as procedural knowledge of computer-assisted translation (CAT) tools and machine translation (MT), but also aim at strengthening would-be translators' processes of interpretation and making them autonomous language experts, aware of both the effects generated by language and their responsibility in using it. To support my position, I will draw on cognitive linguistics and critical discourse analysis (CDA). Adopting a constructivist approach, I will then refer to works by Kiraly (2000), Venuti (2013) and Laviosa (2014), and add some methodological proposals. Students will initially work individually and in groups, focusing on source texts, their translations and comparable texts in order to identify key language items and work toward meaning. By deploying CDA analytical tools, they will discuss the role played by individual items as well as the overall effect of both STs and TTs. New source texts will then be analysed in preparation for translation. The actual translation, effect analysis and final editing, carried out as team work, will complete a cycle aimed at 1) helping students to build knowledge through experience; 2) sensitising them to the complexity of the translation process and the paramount value of meaning-making within every single context. Abstract (Italian, translated) – The translation sector attaches growing importance to technological tools that assist or replace the translator in the rapid production of adequate texts. As a literary translator and teacher, and thus someone involved in cultural processes that can
This article analyses the politics of English, and translation into Englishness, in the film Dirty Pretty Things (Frears). With a celebrated multilingual cast, some of whom did not speak much English, the film nevertheless unfolds in English as it follows migrant characters living illegally and on the margins in London. We take up the filmic representation of migrants in the “compromised, impure and internally divided” border spaces of Britain (Gibson 694) as one of translation into the imagined nation (Anderson). Dirty Pretty Things might seem in its style to be a kind of multicultural “foreignized translation” which reflects a heteropoetics of difference (Venuti); instead, we argue that Dirty Pretty Things, through its performance of the labour of learning and speaking English, strong accents, and cultural allusions, is a kind of domesticated translation (Venuti) that homogenises cultural difference into a literary, mythological English and Englishness. Prompted by new moral panics over immigration and recent UK policies that heap further requirements on migrants to speak English in order to belong to “One Nation Britain” (Cameron), we argue that the film offers insights into how the politics of British national belonging continue to be defined by conformity to a type of deserving subject, one who labours to learn English and to translate herself into narrow, recognizably English cultural forms. By attending to the subtleties of language in the film, we trace the pressure on migrants to translate themselves into the linguistic and mythological moulds of their new host society.
Strickland, Brent; Geraci, Carlo; Chemla, Emmanuel; Schlenker, Philippe; Kelepir, Meltem; Pfau, Roland
According to a theoretical tradition dating back to Aristotle, verbs can be classified into two broad categories. Telic verbs (e.g., "decide," "sell," "die") encode a logical endpoint, whereas atelic verbs (e.g., "think," "negotiate," "run") do not, and the denoted event could therefore logically continue indefinitely. Here we show that sign languages encode telicity in a seemingly universal way and moreover that even nonsigners lacking any prior experience with sign language understand these encodings. In experiments 1-5, nonsigning English speakers accurately distinguished between telic (e.g., "decide") and atelic (e.g., "think") signs from (the historically unrelated) Italian Sign Language, Sign Language of the Netherlands, and Turkish Sign Language. These results were not due to participants' inferring that the sign merely imitated the action in question. In experiment 6, we used pseudosigns to show that the presence of a salient visual boundary at the end of a gesture was sufficient to elicit telic interpretations, whereas repeated movement without salient boundaries elicited atelic interpretations. Experiments 7-10 confirmed that these visual cues were used by all of the sign languages studied here. Together, these results suggest that signers and nonsigners share universally accessible notions of telicity as well as universally accessible "mapping biases" between telicity and visual form.
Quinto-Pozos, David; Singleton, Jenny L; Hauser, Peter C
This article describes the case of a deaf native signer of American Sign Language (ASL) with a specific language impairment (SLI). School records documented normal cognitive development but atypical language development. Data include school records; interviews with the child, his mother, and school professionals; ASL and English evaluations; and a comprehensive neuropsychological and psychoeducational evaluation, and they span an approximate period of 7.5 years (11;10-19;6) including scores from school records (11;10-16;5) and a 3.5-year period (15;10-19;6) during which we collected linguistic and neuropsychological data. Results revealed that this student has average intelligence, intact visual perceptual skills, visuospatial skills, and motor skills but demonstrates challenges with some memory and sequential processing tasks. Scores from ASL testing signaled language impairment and marked difficulty with fingerspelling. The student also had significant deficits in English vocabulary, spelling, reading comprehension, reading fluency, and writing. Accepted SLI diagnostic criteria exclude deaf individuals from an SLI diagnosis, but the authors propose modified criteria in this work. The results of this study have practical implications for professionals including school psychologists, speech language pathologists, and ASL specialists. The results also support the theoretical argument that SLI can be evident regardless of the modality in which it is communicated.
Do Amaral, Wanessa Machado; de Martino, José Mario
Accessibility is a growing concern in computer science. Since virtual information is mostly presented visually, it may seem that access for deaf people is not an issue. However, for prelingually deaf individuals, those who became deaf before acquiring and formally learning a language, written information is often less accessible than the same content presented in signing. Further, for this community, signing is the language of choice, and reading text in a spoken language is akin to using a foreign language. Sign language uses gestures and facial expressions and is widely used by deaf communities. To enable efficient production of signed content in virtual environments, it is necessary to make written records of signs. Transcription systems have been developed to describe sign languages in written form, but these systems have limitations. Because they were not originally designed with computer animation in mind, recognizing and reproducing signs in these systems is generally easy only for those who know the system deeply. The aim of this work is to develop a transcription system for providing signed content in virtual environments. To animate a virtual avatar, a transcription system must make information such as movement speed, sign concatenation, the sequence of each hold and movement, and facial expressions explicit enough that articulation approximates reality. Although many important studies of sign languages have been published, the transcription problem remains a challenge. A notation to describe, store and play signed content in virtual environments thus offers a multidisciplinary study and research tool, which may help linguistic studies understand sign language structure and grammar.
Kushalnagar, Poorna; Smith, Scott; Hopper, Melinda; Ryan, Claire; Rinkevich, Micah; Kushalnagar, Raja
People with relatively limited English language proficiency find the Internet's cancer and health information difficult to access and understand. The presence of unfamiliar words and complex grammar make this particularly difficult for Deaf people. Unfortunately, current technology does not support low-cost, accurate translations of online materials into American Sign Language. However, current technology is relatively more advanced in allowing text simplification, while retaining content. This research team developed a two-step approach for simplifying cancer and other health text. They then tested the approach, using a crossover design with a sample of 36 deaf and 38 hearing college students. Results indicated that hearing college students did well on both the original and simplified text versions. Deaf college students' comprehension, in contrast, significantly benefitted from the simplified text. This two-step translation process offers a strategy that may improve the accessibility of Internet information for Deaf, as well as other low-literacy individuals.
Hänel-Faulhaber, Barbara; Skotara, Nils; Kügow, Monique; Salden, Uta; Bottari, Davide; Röder, Brigitte
The present study investigated the neural correlates of sign language processing of Deaf people who had learned German Sign Language (Deutsche Gebärdensprache, DGS) from their Deaf parents as their first language. Correct and incorrect signed sentences were presented sign by sign on a computer screen. At the end of each sentence the participants had to judge whether or not the sentence was an appropriate DGS sentence. Two types of violations were introduced: (1) semantically incorrect sentences containing a selectional restriction violation (implausible object); (2) morphosyntactically incorrect sentences containing a verb that was incorrectly inflected (i.e., incorrect direction of movement). Event-related brain potentials (ERPs) were recorded from 74 scalp electrodes. Semantic violations (implausible signs) elicited an N400 effect followed by a positivity. Sentences with a morphosyntactic violation (verb agreement violation) elicited a negativity followed by a broad centro-parietal positivity. ERP correlates of semantic and morphosyntactic aspects of DGS clearly differed from each other and showed a number of similarities with those observed in other signed and oral languages. These data suggest a similar functional organization of signed and oral languages despite the visuospatial modality of sign language.
Heiman, Erica; Haynes, Sharon; McKee, Michael
Little is known about the sexual health behaviors of Deaf American Sign Language (ASL) users. We sought to characterize the self-reported sexual behaviors of Deaf individuals. Responses from 282 Deaf participants aged 18-64 from the greater Rochester, NY area who participated in the 2008 Deaf Health Survey were analyzed. These data were compared with weighted data from a general population comparison group (N = 1890). We looked at four sexual health-related outcomes: abstinence within the past year; number of sexual partners within the last year; condom use at last intercourse; and ever tested for HIV. We performed descriptive analyses, including stratification by gender, age, income, marital status, and educational level. Deaf respondents were more likely than the general population respondents to self-report two or more sexual partners in the past year (30.9% vs 10.1%) but self-reported higher condom use at last intercourse (28.0% vs 19.8%). HIV testing rates were similar between groups (47.5% vs 49.4%) but lower for certain Deaf groups: Deaf women (46.0% vs 58.1%), lower-income Deaf (44.4% vs 69.7%) and less educated Deaf (31.3% vs 57.7%) than among respondents from corresponding general population groups. Deaf respondents self-reported higher numbers of sexual partners over the past year compared to the general population. Condom use was higher among Deaf participants. HIV testing was similar between groups overall, though significantly lower among lower-income, less well-educated, and female Deaf respondents. Deaf individuals have a sexual health risk profile that is distinct from that of the general population.
Balk, Ethan M; Chung, Mei; Chen, Minghua L; Chang, Lina Kong Win; Trikalinos, Thomas A
Google Translate offers free Web-based translation, but it is unknown whether its translation accuracy is sufficient for use in systematic reviews to mitigate concerns about language bias. We compared data extraction from non-English language studies with extraction from translations by Google Translate of 10 studies in each of five languages (Chinese, French, German, Japanese and Spanish). Fluent speakers double-extracted original-language articles. Researchers who did not speak the given language double-extracted translated articles along with 10 additional English language trials. Using the original-language extractions as a gold standard, we estimated the probability and odds ratio of correctly extracting items from translated articles compared with English, adjusting for reviewer and language. Translation required about 30 minutes per article, and extraction from translated articles required additional time. The likelihood of correct extractions was greater for study design and intervention domain items than for outcome descriptions and, particularly, study results. Translated Spanish articles yielded the highest percentage of items (93%) that were correctly extracted more than half the time (followed by German and Japanese 89%, French 85%, and Chinese 78%), but Chinese articles yielded the highest percentage of items (41%) that were correctly extracted >98% of the time (followed by Spanish 30%, French 26%, German 22%, and Japanese 19%). In general, extractors' confidence in translations was not associated with their accuracy. Translation by Google Translate generally required few resources. Based on our analysis of translations from five languages, using machine translation has the potential to reduce language bias in systematic reviews; however, pending additional empirical data, reviewers should be cautious about using translated data. There remains a trade-off between completeness of systematic reviews (including all available studies) and risk of
Fuentes, Mariana; Tolchinsky, Liliana
Linguistic descriptions of sign languages are important to the recognition of their linguistic status. These languages are an essential part of the cultural heritage of the communities that create and use them and vital in the education of deaf children. They are also the reference point in language acquisition studies. Ours is exploratory…
Mounty, Judith L.; Pucci, Concetta T.; Harmon, Kristen C.
A primary tenet underlying American Sign Language/English bilingual education for deaf students is that early access to a visual language, developed in conjunction with language planning principles, provides a foundation for literacy in English. The goal of this study is to obtain an emic perspective on bilingual deaf readers transitioning from…
Martino, Juan; Velasquez, Carlos; Vázquez-Bourgon, Javier; de Lucas, Enrique Marco; Gomez, Elsa
Modern sign languages used by deaf people are fully expressive, natural human languages that are perceived visually and produced manually. The literature contains little data concerning human brain organization in conditions of deficient sensory information such as deafness. A deaf-mute patient underwent surgery of a left temporoinsular low-grade glioma. The patient underwent awake surgery with intraoperative electrical stimulation mapping, allowing direct study of the cortical and subcortical organization of sign language. We found a similar distribution of language sites to what has been reported in mapping studies of patients with oral language, including 1) speech perception areas inducing anomias and alexias close to the auditory cortex (at the posterior portion of the superior temporal gyrus and supramarginal gyrus); 2) speech production areas inducing speech arrest (anarthria) at the ventral premotor cortex, close to the lip motor area and away from the hand motor area; and 3) subcortical stimulation-induced semantic paraphasias at the inferior fronto-occipital fasciculus at the temporal isthmus. The intraoperative setup for sign language mapping with intraoperative electrical stimulation in deaf-mute patients is similar to the setup described in patients with oral language. To elucidate the type of language errors, a sign language interpreter in close interaction with the neuropsychologist is necessary. Sign language is perceived visually and produced manually; however, this case revealed a cross-modal recruitment of auditory and orofacial motor areas.
Text corpus size is an important issue when building a language model (LM), particularly for languages where little data is available. This paper introduces an LM adaptation technique to improve an LM built from a small amount of task-dependent text with the help of a machine-translated text corpus. Icelandic speech recognition experiments were performed using data machine translated (MT) from English to Icelandic on a word-by-word and sentence-by-sentence basis. LM interpolation using the baseline LM and an LM built from either word-by-word or sentence-by-sentence translated text reduced the word error rate significantly when manually obtained utterances used as a baseline were very sparse.
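The interpolation step described in this abstract amounts to a weighted mixture of two models' probabilities, P(w) = λ·P_task(w) + (1−λ)·P_MT(w). A minimal sketch follows; the toy unigram models, the Icelandic word lists, and the weight λ = 0.5 are illustrative assumptions, not the paper's setup:

```python
# Linear interpolation of a sparse task-dependent LM with an LM built from
# machine-translated text. Unigram models only, for brevity.
from collections import Counter

def unigram_probs(tokens):
    """Maximum-likelihood unigram probabilities from a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def interpolate(p_task, p_mt, lam):
    """P(w) = lam * P_task(w) + (1 - lam) * P_mt(w) over the joint vocabulary."""
    vocab = set(p_task) | set(p_mt)
    return {w: lam * p_task.get(w, 0.0) + (1 - lam) * p_mt.get(w, 0.0)
            for w in vocab}

p_task = unigram_probs("opna skrá opna glugga".split())          # sparse in-domain text
p_mt = unigram_probs("opna skrá loka skrá vista skrá".split())   # MT-derived text
p_mix = interpolate(p_task, p_mt, lam=0.5)
assert abs(sum(p_mix.values()) - 1.0) < 1e-9  # still a valid distribution
```

Because both component models are proper distributions, the mixture is one too; in practice λ would be tuned on held-out task data rather than fixed.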
Memiş, Abbas; Albayrak, Songül
This paper presents a sign language recognition system that uses spatio-temporal features on RGB video images and depth maps for dynamic gestures of Turkish Sign Language. The proposed system uses motion differences and an accumulation approach for temporal gesture analysis. The motion accumulation method, an effective method for temporal-domain analysis of gestures, produces an accumulated motion image by combining differences of successive video frames. Then, the 2D Discrete Cosine Transform (DCT) is applied to the accumulated motion images, and the temporal-domain features are transformed into the spatial domain. These processes are performed separately on the RGB images and the depth maps. DCT coefficients that represent sign gestures are picked up via zigzag scanning, and feature vectors are generated. To recognize sign gestures, a K-Nearest Neighbor classifier with Manhattan distance is applied. Performance of the proposed sign language recognition system is evaluated on a sign database that contains 1002 isolated dynamic signs belonging to 111 words of Turkish Sign Language (TSL) in three different categories. The proposed system achieves promising success rates.
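The pipeline described above (motion accumulation, 2D DCT, zigzag scan, nearest-neighbour matching) can be sketched in a few lines. All function names and the toy frames are ours, not the authors'; the DCT here assumes square images for brevity, and a 1-NN classifier stands in for the paper's K-NN:

```python
# Illustrative sketch of an accumulated-motion + DCT + zigzag + 1-NN pipeline.
import numpy as np

def accumulated_motion(frames):
    """Sum of absolute differences of successive frames (motion accumulation)."""
    diffs = [np.abs(frames[i + 1] - frames[i]) for i in range(len(frames) - 1)]
    return np.sum(diffs, axis=0)

def dct2(img):
    """Orthonormal 2D DCT-II of a square image, via the DCT matrix."""
    n = img.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)  # DC row scaling
    return m @ img @ m.T

def zigzag(block, count):
    """First `count` coefficients of a square block in JPEG zigzag order."""
    n = block.shape[0]
    idx = sorted(((i, j) for i in range(n) for j in range(n)),
                 key=lambda p: (p[0] + p[1],
                                p[0] if (p[0] + p[1]) % 2 else p[1]))
    return np.array([block[i, j] for i, j in idx[:count]])

def knn_predict(train_x, train_y, x):
    """1-nearest-neighbour classification under Manhattan (L1) distance."""
    d = [np.sum(np.abs(t - x)) for t in train_x]
    return train_y[int(np.argmin(d))]

# Toy usage: three 8x8 frames with a brief movement in the top-left corner.
frames = [np.zeros((8, 8)) for _ in range(3)]
frames[1][0, 0] = 1.0
feat = zigzag(dct2(accumulated_motion(frames)), 16)  # 16-dim feature vector
```

Keeping only the leading zigzag coefficients retains the low-frequency shape of the motion image while discarding high-frequency detail, which is the usual rationale for DCT-based feature compaction.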
Ortega, G.; Morgan, G.
There is growing interest in learners' cognitive capacities to process a second language (L2) at first exposure to the target language. Evidence suggests that L2 learners are capable of processing novel words by exploiting phonological information from their first language (L1). Hearing adult
Kanazawa, Yuji; Nakamura, Kimihiro; Ishii, Toru; Aso, Toshihiko; Yamazaki, Hiroshi; Omori, Koichi
Sign language is an essential medium for everyday social interaction for deaf people and plays a critical role in verbal learning. In particular, language development in those people should heavily rely on the verbal short-term memory (STM) via sign language. Most previous studies compared neural activations during signed language processing in deaf signers and those during spoken language processing in hearing speakers. For sign language users, it thus remains unclear how visuospatial inputs are converted into the verbal STM operating in the left-hemisphere language network. Using functional magnetic resonance imaging, the present study investigated neural activation while bilinguals of spoken and signed language were engaged in a sequence memory span task. On each trial, participants viewed a nonsense syllable sequence presented either as written letters or as fingerspelling (4-7 syllables in length) and then held the syllable sequence for 12 s. Behavioral analysis revealed that participants relied on phonological memory while holding verbal information regardless of the type of input modality. At the neural level, this maintenance stage broadly activated the left-hemisphere language network, including the inferior frontal gyrus, supplementary motor area, superior temporal gyrus and inferior parietal lobule, for both letter and fingerspelling conditions. Interestingly, while most participants reported that they relied on phonological memory during maintenance, direct comparisons between letters and fingers revealed strikingly different patterns of neural activation during the same period. Namely, the effortful maintenance of fingerspelling inputs relative to letter inputs activated the left superior parietal lobule and dorsal premotor area, i.e., brain regions known to play a role in visuomotor analysis of hand/arm movements. These findings suggest that the dorsal visuomotor neural system subserves verbal learning via sign language by relaying gestural inputs to
Kiing, Jennifer S H; Rajgor, Dimple; Toh, Teck-Hock
Translation of developmental-behavioral screening tools for use worldwide can be daunting. We summarize issues in translating these tools. METHODS: Instead of a theoretical framework of "equivalence" by Pena and International Test Commission guidelines, we decided upon a practical approach used by the American Association of Orthopedic Surgeons (AAOS). We derived vignettes from the Parents' Evaluation of Developmental Status manual and published literature and mapped them to AAOS. RESULTS: We found that a systematic approach to planning and translating developmental-behavioral screeners is essential to ensure "equivalence" and encourage wide consultation with experts. CONCLUSION: Our narrative highlights how translations can result in many challenges and needed revisions to achieve "equivalence" such that the items remain consistent, valid, and meaningful in the new language for use in different cultures. Information sharing across the community of researchers is encouraged. This narrative may be helpful to novice researchers.
Gao, L; Mao, C; Yu, G Y; Peng, X
Objective: To translate the Adult Comorbidity Evaluation-27 (ACE-27) index authored by Professor J.F. Piccirillo into Chinese, for the purpose of assessing the possible impact of comorbidity on the survival of oral cancer patients and improving cancer staging. Methods: The translation included the following steps: obtaining permission from Professor Piccirillo, translation, back-translation, language modification, and adjustment based on advice from professors of oral and maxillofacial surgery. The test population included 154 patients who were admitted to Peking University of Stomatology during March 2011. A questionnaire survey was conducted with these patients. Test-retest reliability, internal consistency reliability, content validity, and structural validity were assessed. Results: The simplified Chinese ACE-27 index was established. Cronbach's α was 0.821 in the internal consistency reliability test. The Kaiser-Meyer-Olkin (KMO) value of the 8 items was 0.859 in the structural validity test. Conclusions: The simplified Chinese ACE-27 index has good feasibility and reliability. It is useful for assessing the comorbidity of oral cancer patients.
Newman, Aaron J; Supalla, Ted; Fernandez, Nina; Newport, Elissa L; Bavelier, Daphne
Sign languages used by deaf communities around the world possess the same structural and organizational properties as spoken languages: In particular, they are richly expressive and also tightly grammatically constrained. They therefore offer the opportunity to investigate the extent to which the neural organization for language is modality independent, as well as to identify ways in which modality influences this organization. The fact that sign languages share the visual-manual modality with a nonlinguistic symbolic communicative system-gesture-further allows us to investigate where the boundaries lie between language and symbolic communication more generally. In the present study, we had three goals: to investigate the neural processing of linguistic structure in American Sign Language (using verbs of motion classifier constructions, which may lie at the boundary between language and gesture); to determine whether we could dissociate the brain systems involved in deriving meaning from symbolic communication (including both language and gesture) from those specifically engaged by linguistically structured content (sign language); and to assess whether sign language experience influences the neural systems used for understanding nonlinguistic gesture. The results demonstrated that even sign language constructions that appear on the surface to be similar to gesture are processed within the left-lateralized frontal-temporal network used for spoken languages-supporting claims that these constructions are linguistically structured. Moreover, although nonsigners engage regions involved in human action perception to process communicative, symbolic gestures, signers instead engage parts of the language-processing network-demonstrating an influence of experience on the perception of nonlinguistic stimuli.
This article presents a language experience and self-assessment of proficiency questionnaire for hearing teachers who use Brazilian Sign Language and Portuguese in their teaching practice. By focusing on hearing teachers who work in Deaf education contexts, this questionnaire is presented as a tool that may complement the assessment of the linguistic skills of hearing teachers. This proposal takes into account important factors in bilingualism studies, such as the importance of knowing the participant's context with respect to family, professional and social background (KAUFMANN, 2010). This work uses as models the following questionnaires: LEAP-Q (MARIAN; BLUMENFELD; KAUSHANSKAYA, 2007), SLSCO – Sign Language Skills Classroom Observation (REEVES et al., 2000) and the Language Attitude Questionnaire (KAUFMANN, 2010), taking into consideration the different kinds of exposure to Brazilian Sign Language. The questionnaire is designed for bilingual bimodal hearing teachers who work in bilingual schools for the Deaf or in specialized educational departments that assist deaf students.
Nafari, Maryam; Weaver, Chris
Richly interactive visualization tools are increasingly popular for data exploration and analysis in a wide variety of domains. Existing systems and techniques for recording provenance of interaction focus either on comprehensive automated recording of low-level interaction events or on idiosyncratic manual transcription of high-level analysis activities. In this paper, we present the architecture and translation design of a query-to-question (Q2Q) system that automatically records user interactions and presents them semantically using natural language (written English). Q2Q takes advantage of domain knowledge and uses natural language generation (NLG) techniques to translate and transcribe a progression of interactive visualization states into a visual log of styled text that complements and effectively extends the functionality of visualization tools. We present Q2Q as a means to support a cross-examination process in which questions rather than interactions are the focus of analytic reasoning and action. We describe the architecture and implementation of the Q2Q system, discuss key design factors and variations that affect question generation, and present several visualizations that incorporate Q2Q for analysis in a variety of knowledge domains.
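The core translation step such a system performs can be sketched as a template-driven mapping from recorded interaction events to English questions. The event schema, template strings, and field names below are illustrative assumptions, not the actual Q2Q implementation:

```python
# Hypothetical sketch of query-to-question translation: each recorded
# interaction event is rendered as an English question via a template.
# The event schema and templates are illustrative, not the real Q2Q code.

TEMPLATES = {
    "filter": "Which {items} have {attribute} {op} {value}?",
    "sort": "How do the {items} rank by {attribute}?",
    "select": "What is known about {value}?",
}

def event_to_question(event: dict) -> str:
    """Render one interaction event as a natural-language question."""
    template = TEMPLATES[event["action"]]
    fields = {k: v for k, v in event.items() if k != "action"}
    return template.format(**fields)

# A short interaction log from a hypothetical map visualization
log = [
    {"action": "filter", "items": "counties", "attribute": "population",
     "op": "greater than", "value": "100000"},
    {"action": "sort", "items": "counties", "attribute": "median income"},
]
questions = [event_to_question(e) for e in log]
# questions[0] == "Which counties have population greater than 100000?"
```

In a fuller design, domain knowledge would select richer templates and referring expressions; the point here is only that the visual log is generated from the provenance record, not typed by the analyst.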
Hosemann, Jana; Herrmann, Annika; Steinbach, Markus; Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias
Models of language processing in the human brain often emphasize the prediction of upcoming input-for example in order to explain the rapidity of language understanding. However, the precise mechanisms of prediction are still poorly understood. Forward models, which draw upon the language production system to set up expectations during comprehension, provide a promising approach in this regard. Here, we present an event-related potential (ERP) study on German Sign Language (DGS) which tested the hypotheses of a forward model perspective on prediction. Sign languages involve relatively long transition phases between one sign and the next, which should be anticipated as part of a forward model-based prediction even though they are semantically empty. Native speakers of DGS watched videos of naturally signed DGS sentences which either ended with an expected or a (semantically) unexpected sign. Unexpected signs engendered a biphasic N400-late positivity pattern. Crucially, N400 onset preceded critical sign onset and was thus clearly elicited by properties of the transition phase. The comprehension system thereby clearly anticipated modality-specific information about the realization of the predicted semantic item. These results provide strong converging support for the application of forward models in language comprehension. © 2013 Elsevier Ltd. All rights reserved.
Translation is an activity of transferring information from one language into another. In transferring the message, a translator not only renders the language form but also conveys the cultural content, because translation is itself an activity that involves at least two languages and two cultures (Toury in James, 2000). Translating a text that carries cultural content and messages is more difficult than translating an ordinary text that only has literal meanings. Cultural aspects embedded in stereotypes, speech levels, pronouns, idioms, and even proverbs can create difficulties for translators. A translator sometimes must search for the closest meaning so that the translation can be accepted in the target-language culture.
No formal Canadian curriculum presently exists for teaching American Sign Language (ASL) as a second language to parents of deaf and hard of hearing children. However, this group of ASL learners is in need of more comprehensive, research-based support, given the rapid expansion in Canada of universal neonatal hearing screening and the…
The language-based analogical reasoning abilities of Deaf children are a controversial topic. Researchers lack agreement about whether Deaf children possess the ability to reason using language-based analogies, or whether this ability is limited by a lack of access to vocabulary, both written and signed. This dissertation examines factors that…
Fitzpatrick, Elizabeth M; Stevens, Adrienne; Garritty, Chantelle; Moher, David
Permanent childhood hearing loss affects 1 to 3 per 1000 children and frequently disrupts typical spoken language acquisition. Early identification of hearing loss through universal newborn hearing screening and the use of new hearing technologies including cochlear implants make spoken language an option for most children. However, there is no consensus on what constitutes optimal interventions for children when spoken language is the desired outcome. Intervention and educational approaches ranging from oral language only to oral language combined with various forms of sign language have evolved. Parents are therefore faced with important decisions in the first months of their child's life. This article presents the protocol for a systematic review of the effects of using sign language in combination with oral language intervention on spoken language acquisition. Studies addressing early intervention will be selected in which therapy involving oral language intervention and any form of sign language or sign support is used. Comparison groups will include children in early oral language intervention programs without sign support. The primary outcomes of interest to be examined include all measures of auditory, vocabulary, language, speech production, and speech intelligibility skills. We will include randomized controlled trials, controlled clinical trials, and other quasi-experimental designs that include comparator groups as well as prospective and retrospective cohort studies. Case-control, cross-sectional, case series, and case studies will be excluded. Several electronic databases will be searched (for example, MEDLINE, EMBASE, CINAHL, PsycINFO) as well as grey literature and key websites. We anticipate that a narrative synthesis of the evidence will be required. We will carry out meta-analysis for outcomes if clinical similarity, quantity and quality permit quantitative pooling of data. We will conduct subgroup analyses if possible according to severity
Werngren-Elgström, Monica; Brandt, Ase; Iwarsson, Susanne
The purpose of this study was to describe the everyday activities and social contacts among older deaf sign language users, and to investigate relationships between these phenomena and health and well-being within this group. The study population comprised deaf sign language users, 65 years or older, in Sweden. Data collection was based on interviews in sign language, including open-ended questions covering everyday activities and social contacts as well as self-rated instruments measuring aspects of health and subjective well-being. The results demonstrated that the group of participants … aspects of health and subjective well-being and the frequency of social contacts with family/relatives or visiting the deaf club and meeting friends. It is concluded that the variety of activities at the deaf clubs is important for the subjective well-being of older deaf sign language users. Further…
Zatloukal, Petr; Bernas, Martin; Dvořák, Lukáš
Sign language on television provides deaf viewers with information that they cannot get from the audio content. If the sign language interpreter is transmitted over an independent data stream, the aim is to ensure sufficient intelligibility and subjective image quality of the interpreter at a minimum bit rate. This work deals with ROI-based video compression of a Czech Sign Language interpreter, implemented in the x264 open-source library. The results of this approach are verified in subjective tests with deaf participants. The tests examine the intelligibility of sign language expressions containing minimal pairs at different levels of compression and various image resolutions of the interpreter, and evaluate the subjective quality of the final image for a good viewing experience.
Haug, T.; Bontempo, K.; Leeson, L.; Napier, J.; Nicodemus, B.; Van den Bogaerde, B.; Vermeerbergen, M.
In this paper, we report interview data from 14 Deaf leaders across seven countries (Australia, Belgium, Ireland, the Netherlands, Switzerland, the United Kingdom, and the United States) regarding their perspectives on signed language interpreters. Using a semistructured survey questionnaire, seven
, particularly sign language users, in HIV-prevention programmes. Keywords: communication, disability, disability studies, hearing impairment, qualitative research, scoping study. African Journal of AIDS Research 2010, 9(3): 307–313 ...
of a digital medium and an existing body of descriptive research on the language, … ing lexemes and word class in a polysynthetic language, deriving usage … higher education, white-collar occupations, the arts, media, and political advocacy. … and Niemalä explain that if mouth patterns are treated as a formational ele-…
Jednoróg, Katarzyna; Bola, Łukasz; Mostowski, Piotr; Szwed, Marcin; Boguszewski, Paweł M; Marchewka, Artur; Rutkowski, Paweł
In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system, called SJM (system językowo-migowy), preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we compare, using fMRI, the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs, compared to either SJM or PJM without CCs, recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged the left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for the naturally evolved sign language. Copyright © 2015 Elsevier Ltd. All rights reserved.
The article presents an introductory analysis of a research topic relevant to the Latvian deaf community: the development of a Latvian Sign Language recognition system. More specifically, the paper discusses data preprocessing methods and presents several approaches, with a focus on systems based on artificial neural networks, which are among the most successful solutions for the sign language recognition task.
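As a rough illustration of the kind of preprocessing such a recognizer needs before an artificial neural network can classify a sign, the sketch below thresholds a grayscale frame, crops the hand's bounding box, and resamples it to a fixed-size input vector. All names and parameter values are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def preprocess(frame: np.ndarray, out: int = 16, thresh: float = 0.5) -> np.ndarray:
    """Illustrative preprocessing for an ANN-based sign recognizer:
    threshold the grayscale frame, crop to the hand's bounding box,
    and resample to a fixed out-by-out grid of values in [0, 1]."""
    ys, xs = np.nonzero(frame > thresh)          # pixels belonging to the hand
    crop = frame[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # nearest-neighbour resample onto the fixed grid
    ri = np.arange(out) * crop.shape[0] // out
    ci = np.arange(out) * crop.shape[1] // out
    grid = crop[np.ix_(ri, ci)]
    return np.clip(grid, 0.0, 1.0)

# Usage: a synthetic 64x64 frame with a bright "hand" region
frame = np.zeros((64, 64))
frame[10:40, 20:50] = 0.9
vec = preprocess(frame).ravel()   # 256-dimensional network input
```

The flattened grid would then feed the input layer of the network; real systems replace the simple threshold with skin/hand segmentation appropriate to the capture conditions.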
Razdobudko-Čović Larisa I.
The paper presents an analysis of two Serbian translations of V. Nabokov's memoirs: the translation of the novel 'Drugie berega' ('The Other Shores'), published in Russian as an authorized translation from the original English version 'Conclusive Evidence', and the translation of Nabokov's authorized translation from Russian into English entitled 'Speak, Memory'. The creolization of three models of culture in translation from the two originals, Russian and English, is presented. Specific features of the two Serbian translations are analyzed, and a survey of characteristic mistakes caused by specific characteristics of the source language is given. Nabokov's highly original, interpretative approach to translation is also highlighted.
Colwell, Cynthia; Memmott, Jenny; Meeker-Miller, Anne
The purpose of this study was to determine the efficacy of using music and/or sign language to promote early communication in infants and toddlers (6-20 months) and to enhance parent-child interactions. Three groups of participant pairs (caregiver(s) and child) were used in this study: 1) Music Alone, 2) Sign Language…
Atkinson, J.; Marshall, J.; Woll, B.; Thacker, A.
Recent imaging (e.g., MacSweeney et al., 2002) and lesion (Hickok, Love-Geffen, & Klima, 2002) studies suggest that sign language comprehension depends primarily on left hemisphere structures. However, this may not be true of all aspects of comprehension. For example, there is evidence that the processing of topographic space in sign may be…
Perniss, Pamela; Özyürek, Asli; Morgan, Gary
For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and constitute an integral part of language together with speech. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems. Copyright © 2015 Cognitive Science Society, Inc.
van den Bogaerde, B.; de Lange, R.; Nicodemus, B.; Metzger, M.
In healthcare, the accuracy of interpretation is the most critical component of safe and effective communication between providers and patients in medical settings characterized by language and cultural barriers. Although medical education should prepare healthcare providers for common issues they
De Meulder, Maartje
This article describes and analyses the pathway to the British Sign Language (Scotland) Bill and the strategies used to reach it. Data collection has been done by means of interviews with key players, analysis of official documents, and participant observation. The article discusses the bill in relation to the Gaelic Language (Scotland) Act 2005…
Almeida, Diogo; Poeppel, David; Corina, David
The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied with a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.
Khokhlova A. Yu.
The article provides an overview of foreign psychological publications concerning sign language as a means of communication for deaf people. The article addresses the question of sign language's impact on cognitive development, on the effectiveness and positivity of interaction with parents, and on academic achievement in deaf children.
Hermans, D.; Knoors, H.E.T.; Verhoeven, L.T.W.
In this article, we will describe the development of an assessment instrument for Sign Language of the Netherlands (SLN) for deaf children in bilingual education programs. The assessment instrument consists of nine computerized tests in which the receptive and expressive language skills of deaf
Corina, David P.; Lawyer, Laurel A.; Cates, Deborah
Studies of deaf individuals who are users of signed languages have provided profound insight into the neural representation of human language. Case studies of deaf signers who have incurred left- and right-hemisphere damage have shown that left-hemisphere resources are a necessary component of sign language processing. These data suggest that, despite frank differences in the input and output modality of language, core left perisylvian regions universally serve linguistic function. Neuroimaging studies of deaf signers have generally provided support for this claim. However, more fine-tuned studies of linguistic processing in deaf signers are beginning to show evidence of important differences in the representation of signed and spoken languages. In this paper, we provide a critical review of this literature and present compelling evidence for language-specific cortical representations in deaf signers. These data lend support to the claim that the neural representation of language may show substantive cross-linguistic differences. We discuss the theoretical implications of these findings with respect to an emerging understanding of the neurobiology of language. PMID:23293624
Mann, Wolfgang; Peña, Elizabeth D; Morgan, Gary
We describe a model for assessment of lexical-semantic organization skills in American Sign Language (ASL) within the framework of dynamic vocabulary assessment and discuss the applicability and validity of the use of mediated learning experiences (MLE) with deaf signing children. Two elementary students (ages 7;6 and 8;4) completed a set of four vocabulary tasks and received two 30-minute mediations in ASL. Each session consisted of several scripted activities focusing on the use of categorization. Both had experienced difficulties in providing categorically related responses in one of the vocabulary tasks used previously. Results showed that the two students exhibited notable differences with regard to their learning pace, information uptake, and the effort required of the mediator. Furthermore, we observed signs of a shift in strategic behavior by the lower-performing student during the second mediation. Results suggest that the use of dynamic assessment procedures in a vocabulary context was helpful in understanding children's strategies as related to learning potential. These results are discussed in terms of deaf children's cognitive modifiability, with implications for planning instruction and for how MLE can be used with a population that uses ASL. The reader will (1) recognize the challenges in appropriate language assessment of deaf signing children; (2) recall the three areas explored to investigate whether a dynamic assessment approach is sensitive to differences in deaf signing children's language learning profiles; and (3) discuss how dynamic assessment procedures can make deaf signing children's individual language learning differences visible. Copyright © 2014 Elsevier Inc. All rights reserved.
Solís, José F.; Toxqui, Carina; Padilla, Alfonso; Santiago, César
A framework for static sign language recognition using descriptors that represent 2D images as 1D data, together with artificial neural networks, is presented in this work. The 1D descriptors were computed by two methods: the first consists of a rotational correlation operator, and the second is based on contour analysis of the hand shape. One of the main problems in sign language recognition is segmentation; most papers report using a special color for gloves or the background for hand shape analysis. In order to avoid the use of gloves or special clothing, a thermal imaging camera was used to capture the images. Static signs for the digits 1 to 9 of American Sign Language were used; a multilayer perceptron reached 100% recognition with cross-validation.
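A contour-style 1D descriptor of the kind described can be sketched as a radial signature: the distance from the silhouette's centroid to its farthest pixel in each angular sector, normalized for scale. This is an illustrative reconstruction under stated assumptions, not the authors' exact operator:

```python
import numpy as np

def radial_signature(mask: np.ndarray, n: int = 32) -> np.ndarray:
    """Reduce a binary hand silhouette to a 1D descriptor: the farthest
    distance from the shape centroid in each of n angular sectors,
    normalized so the descriptor is scale-invariant."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    ang = np.arctan2(ys - cy, xs - cx)        # angle of each silhouette pixel
    dist = np.hypot(ys - cy, xs - cx)         # distance from centroid
    bins = ((ang + np.pi) / (2 * np.pi) * n).astype(int).clip(0, n - 1)
    sig = np.zeros(n)
    np.maximum.at(sig, bins, dist)            # per-sector maximum distance
    return sig / (sig.max() + 1e-9)
```

A vector like this (one value per sector) would then serve as the input layer of a small multilayer perceptron trained on the nine digit signs.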
Grove, Nicola; Woll, Bencie
Manual signing is one of the most widely used approaches to support the communication and language skills of children and adults who have intellectual or developmental disabilities, and problems with communication in spoken language. A recent series of papers reporting findings from this population raises critical issues for professionals in the assessment of multimodal language skills of key word signers. Approaches to assessment will differ depending on whether key word signing (KWS) is viewed as discrete from, or related to, natural sign languages. Two available assessments from these different perspectives are compared. Procedures appropriate to the assessment of sign language production are recommended as a valuable addition to the clinician's toolkit. Sign and speech need to be viewed as multimodal, complementary communicative endeavours, rather than as polarities. Whilst narrative has been shown to be a fruitful context for eliciting language samples, assessments for adult users should be designed to suit the strengths, needs and values of adult signers with intellectual disabilities, using materials that are compatible with their life course stage rather than those designed for young children. Copyright © 2017 Elsevier Ltd. All rights reserved.
Languages are composed of a conventionalized system of parts which allows speakers and signers to compose an infinite number of form-meaning mappings through phonological and morphological combinations. This level of linguistic organization distinguishes language from other communicative acts such as gestures. In contrast to signs, gestures are made up of meaning units that are mostly holistic. Children exposed to signed and spoken languages from early in life develop grammatical structure at similar rates and in similar patterns. This is interesting, because signed languages are perceived and articulated in very different ways from their spoken counterparts, with many signs displaying surface resemblances to gestures. The acquisition of forms and meanings in child signers and talkers might thus have been a different process. Yet in one sense both groups are faced with a similar problem: 'how do I make a language with combinatorial structure?' In this paper I argue that first language development itself enables this to happen, and by broadly similar mechanisms across modalities. Combinatorial structure is the outcome of phonological simplifications and of productivity in using verb morphology by children in both sign and speech.
Kasar, Sündüz; Tuna, Didem
Among the literary genres, poetry is the one that resists translation the most. Creating a new and innovative language that breaks the usual rules of the standard language with brand-new uses and meanings is probably one of the most important goals of the poet. Poetry challenges the translator to capture not only original images, exceptional…
Enkin, Elizabeth; Mejias-Bikani, Errapel
In this paper, we discuss the benefits of using online translators in the foreign language classroom. Specifically, we discuss how faulty online translator output can be used to create activities that help raise metalinguistic awareness of second language grammar and of the differences between grammatical constructions in the first and second…
We describe here the characteristics of a very frequently occurring ASL indefinite focus particle, which has not previously been recognized as such. We show that, despite its similarity to the question sign "WHAT", the particle is distinct from that sign in terms of articulation, function, and distribution. The particle serves to express "uncertainty" in various ways, which can be formalized semantically in terms of a domain-widening effect of the same sort as that proposed for English "any" by Kadmon & Landman (1993). Its function is to widen the domain of possibilities under consideration from the typical to include the non-typical as well, along a dimension appropriate in the context.
Kocab, Annemarie; Pyers, Jennie; Senghas, Ann
Even the simplest narratives combine multiple strands of information, integrating different characters and their actions by expressing multiple perspectives of events. We examined the emergence of referential shift devices, which indicate changes among these perspectives, in Nicaraguan Sign Language (NSL). Sign languages, like spoken languages, mark referential shift grammatically with a shift in deictic perspective. In addition, sign languages can mark the shift with a point or a movement of the body to a specified spatial location in the three-dimensional space in front of the signer, capitalizing on the spatial affordances of the manual modality. We asked whether the use of space to mark referential shift emerges early in a new sign language by comparing the first two age cohorts of deaf signers of NSL. Eight first-cohort signers and 10 second-cohort signers watched video vignettes and described them in NSL. Narratives were coded for lexical (use of words) and spatial (use of signing space) devices. Although the cohorts did not differ significantly in the number of perspectives represented, second-cohort signers used referential shift devices to explicitly mark a shift in perspective in more of their narratives. Furthermore, while there was no significant difference between cohorts in the use of non-spatial, lexical devices, there was a difference in spatial devices, with second-cohort signers using them in significantly more of their narratives. This suggests that spatial devices have only recently increased as systematic markers of referential shift. Spatial referential shift devices may have emerged more slowly because they depend on the establishment of fundamental spatial conventions in the language. While the modality of sign languages can ultimately engender the syntactic use of three-dimensional space, we propose that a language must first develop systematic spatial distinctions before harnessing space for grammatical functions.
Janke, Vikki; Marshall, Chloë R
An ongoing issue of interest in second language research concerns what transfers from a speaker's first language to their second. For learners of a sign language, gesture is a potential substrate for transfer. Our study provides a novel test of gestural production by eliciting silent gesture from novices in a controlled environment. We focus on spatial relationships, which in sign languages are represented in a very iconic way using the hands, and which one might therefore predict to be easy for adult learners to acquire. However, a previous study by Marshall and Morgan (2015) revealed that this was only partly the case: in a task that required them to express the relative locations of objects, hearing adult learners of British Sign Language (BSL) could represent objects' locations and orientations correctly, but had difficulty selecting the correct handshapes to represent the objects themselves. If hearing adults are indeed drawing upon their gestural resources when learning sign languages, then their difficulties may have stemmed from their having in manual gesture only a limited repertoire of handshapes to draw upon, or, alternatively, from having too broad a repertoire. If the first hypothesis is correct, the challenge for learners is to extend their handshape repertoire, but if the second is correct, the challenge is instead to narrow down to the handshapes appropriate for that particular sign language. Thirty sign-naïve hearing adults were tested on Marshall and Morgan's task. All used some handshapes that were different from those used by native BSL signers and learners, and the set of handshapes used by the group as a whole was larger than that employed by native signers and learners. Our findings suggest that a key challenge when learning to express locative relations might be reducing from a very large set of gestural resources, rather than supplementing a restricted one, in order to converge on the conventionalized classifier system that forms part of the
Clark, M. Diane; Hauser, Peter C.; Miller, Paul; Kargin, Tevhide; Rathmann, Christian; Guldenoglu, Birkan; Kubus, Okan; Spurgeon, Erin; Israel, Erica
Researchers have used various theories to explain deaf individuals' reading skills, including the dual route reading theory, the orthographic depth theory, and the early language access theory. This study tested 4 groups of children--hearing with dyslexia, hearing without dyslexia, deaf early signers, and deaf late signers (N = 857)--from 4…
Schneider, Erin; Kozak, L. Viola; Santiago, Roberto; Stephen, Anika
Technological and language innovation often flow in concert with one another. Casual observation by researchers has shown that electronic communication memes, in the form of abbreviations, have found their way into spoken English. This study focuses on the current use of electronic modes of communication, such as cell phones, smartphones, and e-mail, and…
The cultural universe of urban, English-speaking middle class in India shows signs of growing inclusiveness as far as English is concerned. This phenomenon manifests itself in increasing forms of bilingualism (combination of English and one Indian language) in everyday forms of speech - advertisement jingles, bilingual movies, signboards, and of course conversations. It is also evident in the startling prominence of Indian Writing in English and somewhat less visibly, but steadily rising, activity of English translation from Indian languages. Since the eighties this has led to a frenetic activity around English translation in India's academic and literary circles. Kothari makes this very current phenomenon her chief concern in Translating India. The study covers aspects such as the production, reception and marketability of English translation. Through an unusually multi-disciplinary approach, this study situates English translation in India amidst local and global debates on translation, representation an...
In terms of the theoretical framework of an influential recent model of Bible translation, Left Dislocation (=LD) can be regarded as a “communicative clue” that translators must try to interpretively resemble in their target text translation. This exploratory study investigates how twenty translations (fifteen English, three Afrikaans, ...
MacDonald, Kyle; LaMarr, Todd; Corina, David; Marchman, Virginia A; Fernald, Anne
When children interpret spoken language in real time, linguistic information drives rapid shifts in visual attention to objects in the visual world. This language-vision interaction can provide insights into children's developing efficiency in language comprehension. But how does language influence visual attention when the linguistic signal and the visual world are both processed via the visual channel? Here, we measured eye movements during real-time comprehension of a visual-manual language, American Sign Language (ASL), by 29 native ASL-learning children (16-53 mos, 16 deaf, 13 hearing) and 16 fluent deaf adult signers. All signers showed evidence of rapid, incremental language comprehension, tending to initiate an eye movement before sign offset. Deaf and hearing ASL-learners showed similar gaze patterns, suggesting that the in-the-moment dynamics of eye movements during ASL processing are shaped by the constraints of processing a visual language in real time and not by differential access to auditory information in day-to-day life. Finally, variation in children's ASL processing was positively correlated with age and vocabulary size. Thus, despite competition for attention within a single modality, the timing and accuracy of visual fixations during ASL comprehension reflect information processing skills that are important for language acquisition regardless of language modality. © 2018 John Wiley & Sons Ltd.
Henner, Jon; Caldwell-Harris, Catherine L; Novogrodsky, Rama; Hoffmeister, Robert
Failing to acquire language in early childhood because of language deprivation is a rare and exceptional event, except in one population. Deaf children who grow up without access to language through listening, speech-reading, or sign language experience language deprivation. Studies of Deaf adults have revealed that late acquisition of sign language is associated with lasting deficits. However, much remains unknown about language deprivation in Deaf children, allowing myths and misunderstandings regarding sign language to flourish. To fill this gap, we examined signing ability in a large naturalistic sample of Deaf children attending schools for the Deaf where American Sign Language (ASL) is used by peers and teachers. Ability in ASL was measured using a syntactic judgment test and a language-based analogical reasoning test, which are two sub-tests of the ASL Assessment Inventory. The influence of two age-related variables was examined: whether or not ASL was acquired from birth in the home from one or more Deaf parents, and the age of entry to the school for the Deaf. Note that for non-native signers, this latter variable is often the age of first systematic exposure to ASL. Both of these types of age-dependent language experiences influenced subsequent signing ability. Scores on the two tasks declined with increasing age of school entry. The influence of age of starting school was not linear. Test scores were generally lower for Deaf children who entered the school of assessment after the age of 12. The positive influence of signing from birth was found for students at all ages tested (7;6-18;5 years old) and for children of all age-of-entry groupings. Our results reflect a continuum of outcomes which show that experience with language is a continuous variable that is sensitive to maturational age.
Al-Amer, Rasmieh; Ramjan, Lucie; Glew, Paul; Darwish, Maram; Salamonson, Yenna
This paper discusses how a research team negotiated the challenges of language differences in a qualitative study that involved two languages. The lead researcher shared the participants' language and culture, and the interviews were conducted using the Arabic language as a source language, which was then translated and disseminated in the English language (target language). The challenges in relation to translation in cross-cultural research were highlighted from the perspective of establishing meaning as a vital issue in qualitative research. The paper draws on insights gained from a study undertaken among Arabic-speaking participants involving the use of in-depth semi-structured interviews. The study was undertaken using a purposive sample of 15 participants with Type 2 Diabetes Mellitus and co-existing depression and explored their perception of self-care management behaviours. Data analysis was performed in two phases. The first phase entailed translation and transcription of the data, and the second phase entailed thematic analysis of the data to develop categories and themes. In this paper there is discussion of the translation process and its inherent challenges. As translation is an interpretive process and not merely a direct message transfer from a source language to a target language, translators need to systematically and accurately capture the full meaning of the spoken language. This discussion paper highlights difficulties in the translation process, specifically in managing data in relation to metaphors, medical terminology and connotations of the text, and importantly, preserving the meaning between the original and translated data. Recommendations for future qualitative studies involving interviews with non-English-speaking participants are outlined, which may assist researchers in maintaining the integrity of the data throughout the translation process. Copyright © 2015 Elsevier Ltd. All rights reserved.
Hall, Wyatte C
A long-standing belief is that sign language interferes with spoken language development in deaf children, despite a chronic lack of evidence supporting this belief. This deserves discussion as poor life outcomes continue to be seen in the deaf population. This commentary synthesizes research outcomes with signing and non-signing children and highlights fully accessible language as a protective factor for healthy development. Brain changes associated with language deprivation may be misrepresented as sign language interfering with spoken language outcomes of cochlear implants. This may lead to professionals and organizations advocating for preventing sign language exposure before implantation and spreading misinformation. The existence of a time-sensitive language acquisition window means a strong possibility of permanent brain changes when spoken language is not fully accessible to the deaf child and sign language exposure is delayed, as is often standard practice. There is no empirical evidence for the harm of sign language exposure, but there is some evidence for its benefits, and there is growing evidence that lack of language access has negative implications. These include cognitive delays, mental health difficulties, lower quality of life, higher trauma, and limited health literacy. Claims that cochlear implant- and spoken language-only approaches are more effective than sign language-inclusive approaches are not empirically supported. Cochlear implants are an unreliable standalone first-language intervention for deaf children. Priorities of deaf child development should focus on healthy growth of all developmental domains through a fully accessible first-language foundation such as sign language, rather than on auditory deprivation and speech skills.
Liu, Lanfang; Yan, Xin; Liu, Jin; Xia, Mingrui; Lu, Chunming; Emmorey, Karen; Chu, Mingyuan; Ding, Guosheng
Signed languages are natural human languages that use the visual-motor modality. Previous neuroimaging studies based on univariate activation analysis show that a widely overlapping cortical network is recruited regardless of whether the sign language is comprehended (for signers) or not (for non-signers). Here we move beyond previous studies by examining whether the functional connectivity profiles and the underlying organizational structure of the overlapping neural network differ between signers and non-signers when watching sign language. Using graph theoretical analysis (GTA) and fMRI, we compared the large-scale functional network organization in hearing signers with non-signers during the observation of sentences in Chinese Sign Language. We found that signed sentences elicited highly similar cortical activations in the two groups of participants, with slightly larger responses within the left frontal and left temporal gyrus in signers than in non-signers. Crucially, further GTA revealed substantial group differences in the topologies of this activation network. Globally, the network engaged by signers showed higher local efficiency (t(24) = 2.379, p = 0.026), small-worldness (t(24) = 2.604, p = 0.016) and modularity (t(24) = 3.513, p = 0.002), and exhibited different modular structures, compared to the network engaged by non-signers. Locally, the left ventral pars opercularis served as a network hub in the signer group but not in the non-signer group. These findings suggest that, despite the overlap in cortical activation, the neural substrates underlying sign language comprehension are distinguishable at the network level from those for the processing of gestural action. Copyright © 2017 Elsevier B.V. All rights reserved.
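The network measures named in this abstract are standard graph-theoretic quantities. As an illustration only (the toy adjacency list below is made up, not the authors' fMRI connectivity data), local efficiency — the average, over nodes, of the global efficiency of each node's neighbourhood subgraph — can be computed in plain Python:

```python
from collections import deque

def shortest_path_lengths(adj, source):
    """BFS distances from source over an undirected adjacency dict of sets."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def global_efficiency(adj):
    """Mean of 1/d(u, v) over ordered node pairs; unreachable pairs count 0."""
    nodes = list(adj)
    if len(nodes) < 2:
        return 0.0
    total = 0.0
    for u in nodes:
        dist = shortest_path_lengths(adj, u)
        for v in nodes:
            if v != u and v in dist:
                total += 1.0 / dist[v]
    return total / (len(nodes) * (len(nodes) - 1))

def local_efficiency(adj):
    """Average global efficiency of each node's induced neighbourhood subgraph."""
    effs = []
    for u in adj:
        nbrs = adj[u]
        sub = {n: adj[n] & nbrs for n in nbrs}
        effs.append(global_efficiency(sub))
    return sum(effs) / len(adj)

# Toy network: two tightly knit triangles joined by one bridge edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = {n: set() for n in range(6)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

print(round(local_efficiency(adj), 3))
```

In practice such metrics are computed on networks derived from thresholded connectivity matrices; the group comparison in the paper then tests these values across signers and non-signers.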
Cannon, Joanna E.; Fredrick, Laura D.; Easterbrooks, Susan R.
Reading to children improves vocabulary acquisition through incidental exposure, and it is a best practice for parents and teachers of children who can hear. Children who are deaf or hard of hearing are at risk of missing out on such incidental vocabulary learning. This article describes a procedure for using books read on DVD in American Sign Language with…
Östling, Robert; Börstell, Carl; Courtaux, Servane
We use automatic processing of 120,000 sign videos in 31 different sign languages to show a cross-linguistic pattern for two types of iconic form–meaning relationships in the visual modality. First, we demonstrate that the degree of inherent plurality of concepts, based on individual ratings by non-signers, strongly correlates with the number of hands used in the sign forms encoding the same concepts across sign languages. Second, we show that certain concepts are iconically articulated around specific parts of the body, as predicted by the associational intuitions of non-signers. The implications of our results are both theoretical and methodological. With regard to theoretical implications, we corroborate previous research by demonstrating and quantifying, using a much larger body of material than previously available, the iconic nature of languages in the visual modality. As for the methodological implications, we show how automatic methods are, in fact, useful for performing large-scale analysis of sign language data, to a high level of accuracy, as indicated by our manual error analysis.
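The reported relationship between non-signers' plurality ratings and the number of hands in sign forms is a correlation across concepts. A minimal sketch, with entirely hypothetical ratings and hand counts (the study's actual data are not reproduced here):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-concept data: plurality rating (1 = inherently singular,
# 7 = inherently plural) and mean number of hands used across sign languages.
ratings = [1.2, 2.0, 3.5, 4.8, 5.9, 6.4]
mean_hands = [1.0, 1.1, 1.3, 1.6, 1.8, 1.9]

print(round(pearson_r(ratings, mean_hands), 3))
```

A rank correlation (Spearman) would be an equally reasonable choice here, since the ratings are ordinal.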
Tomasuolo, Elena; Valeri, Giovanni; Di Renzo, Alessio; Pasqualetti, Patrizio; Volterra, Virginia
The present study examined whether full access to sign language as a medium for instruction could influence performance in Theory of Mind (ToM) tasks. Three groups of Italian participants (age range: 6-14 years) participated in the study: Two groups of deaf signing children and one group of hearing-speaking children. The two groups of deaf children differed only in their school environment: One group attended a school with a teaching assistant (TA; Sign Language is offered only by the TA to a single deaf child), and the other group attended a bilingual program (Italian Sign Language and Italian). Linguistic abilities and understanding of false belief were assessed using similar materials and procedures in spoken Italian with hearing children and in Italian Sign Language with deaf children. Deaf children attending the bilingual school performed significantly better than deaf children attending school with the TA in tasks assessing lexical comprehension and ToM, whereas the performance of hearing children was in between that of the two deaf groups. As for lexical production, deaf children attending the bilingual school performed significantly better than the two other groups. No significant differences were found between early and late signers or between children with deaf and hearing parents.
Vargas, Lorena P; Barba, Leiner; Torres, C O; Mattos, L
This work presents an image pattern recognition system that uses a neural network to identify sign language for deaf people. The system stores several images showing the specific symbols of this language, which are used to train a multilayer neural network with a backpropagation algorithm. The images are first processed to adapt them and to improve the discrimination performance of the network; this preprocessing includes filtering and noise-reduction algorithms as well as edge detection. The system is evaluated on signs whose representation does not include movement.
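The preprocessing steps this abstract mentions (filtering, noise removal, edge detection) are standard image operations. As an illustration only, not the authors' implementation, a 3x3 Sobel edge detector can be written in plain Python:

```python
def sobel_edges(img):
    """Approximate gradient magnitude with 3x3 Sobel kernels.

    `img` is a list of equal-length rows of grayscale values; the
    one-pixel border is left at zero for simplicity.
    """
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: dark left half, bright right half.
img = [[0, 0, 255, 255]] * 4
edges = sobel_edges(img)
print(edges[1])  # strongest response along the dark/bright boundary
```

In a real pipeline this would run after smoothing/denoising and before feature extraction, typically via a library routine rather than hand-rolled loops.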
It has been observed that data-based translation programs are often used, both in and outside the class, without much thought, and that many problems thus occur in foreign language learning and teaching. To draw attention to this problem, this study examines whether the program yields satisfactory results by making translations from…
Grosvald, Michael; Gutierrez, Eva; Hafer, Sarah; Corina, David
A fundamental advance in our understanding of human language would come from a detailed account of how non-linguistic and linguistic manual actions are differentiated in real time by language users. To explore this issue, we targeted the N400, an ERP component known to be sensitive to semantic context. Deaf signers saw 120 American Sign Language sentences, each consisting of a "frame" (a sentence without the last word; e.g. BOY SLEEP IN HIS) followed by a "last item" belonging to one of four categories: a high-close-probability sign (a "semantically reasonable" completion to the sentence; e.g. BED), a low-close-probability sign (a real sign that is nonetheless a "semantically odd" completion to the sentence; e.g. LEMON), a pseudo-sign (phonologically legal but non-lexical form), or a non-linguistic grooming gesture (e.g. the performer scratching her face). We found significant N400-like responses in the incongruent and pseudo-sign contexts, while the gestures elicited a large positivity. Copyright © 2012 Elsevier Inc. All rights reserved.
This specification identifies and describes the principal functions and elements of the Interpretive Code Translator which has been developed for use with the GOAL Compiler. This translator enables the user to convert a compiled GOAL program to a highly general binary format which is designed to enable interpretive execution. The translator program provides user controls which are designed to enable the selection of various output types and formats. These controls provide a means for accommodating many of the implementation options which are discussed in the Interpretive Code Guideline document. The technical design approach is given. The relationship between the translator and the GOAL compiler is explained and the principal functions performed by the Translator are described. Specific constraints regarding the use of the Translator are discussed. The control options are described. These options enable the user to select outputs to be generated by the translator and to control various aspects of the translation processing.
.... The Multilingual Automatic Document Classification, Analysis and Translation (MADCAT) program will develop an end-to-end system to automatically translate handwritten and printed foreign documents into English with very high accuracy...
Silvia Teresinha Frizzarini
Full Text Available There is little research offering deeper reflection on the study of algebra with deaf students. In order to validate and disseminate educational activities in that context, this article aims at highlighting the prior knowledge of deaf students, fluent in Brazilian Sign Language, regarding the algebraic language used in high school. The theoretical framework used was Duval’s theory, with analysis of the changes, by treatment and conversion, of different registers of semiotic representation, in particular inequalities. The methodology used was the application of a diagnostic evaluation performed with deaf students, all fluent in Brazilian Sign Language, in a special school located in the north of Paraná State. We emphasize the need to work in both directions of conversion, in different languages, especially when the starting register is the graphic one. Therefore, the conclusion reached was that one should not separate the algebraic representation from other registers, due to the need for sign language to perform not only the communicative function, but also the functions of objectification and treatment, which are fundamental in cognitive development.
Solís-V., J.-Francisco; Toxqui-Quitl, Carina; Martínez-Martínez, David; H.-G., Margarita
This work presents a framework designed for Mexican Sign Language (MSL) recognition. A data set of 24 static signs from MSL was recorded, with 5 different versions of each sign; this MSL data set was captured with a digital camera under incoherent lighting conditions. Digital image processing was used to segment the hand gestures; a uniform background was chosen to avoid the use of gloved hands or special markers. Feature extraction was performed by calculating normalized geometric moments of the gray-scaled signs; an artificial neural network then performs the recognition, evaluated with 10-fold cross-validation in Weka. The best result achieved a recognition rate of 95.83%.
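Normalized geometric (central) moments, the features used in this framework, can be sketched as follows. The binary "silhouette" below is a made-up stand-in for a segmented hand image, and the chosen moment orders are illustrative, not the paper's exact feature set:

```python
def raw_moment(img, p, q):
    """m_pq = sum over pixels of x^p * y^q * intensity."""
    return sum((x ** p) * (y ** q) * val
               for y, row in enumerate(img)
               for x, val in enumerate(row))

def normalized_central_moment(img, p, q):
    """eta_pq: central moment scaled by m_00 for translation/scale invariance."""
    m00 = raw_moment(img, 0, 0)
    xbar = raw_moment(img, 1, 0) / m00   # centroid x
    ybar = raw_moment(img, 0, 1) / m00   # centroid y
    mu = sum(((x - xbar) ** p) * ((y - ybar) ** q) * val
             for y, row in enumerate(img)
             for x, val in enumerate(row))
    return mu / (m00 ** (1 + (p + q) / 2))

# Hypothetical binary silhouette of a segmented hand region.
img = [[0, 1, 1, 0],
       [0, 1, 1, 1],
       [0, 1, 1, 0]]

features = [normalized_central_moment(img, p, q)
            for p, q in [(2, 0), (0, 2), (1, 1), (2, 1), (1, 2)]]
print([round(f, 4) for f in features])
```

Because they are computed relative to the centroid and scaled by the zeroth moment, these features are insensitive to where the hand sits in the frame, which is what makes them useful inputs to a classifier.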
Richard Michael Mansell
Anecdotal accounts suggest that one reason for the perceived resistance to translated literature in English-language markets is that commissioning editors are averse to considering texts that they cannot read. In an attempt to overcome this barrier, English translations are increasingly commissioned by publishers of source texts and agents of source authors and used to stimulate interest in a book (not just in English-language markets), a phenomenon this article terms ‘source-commissioned translations’. This article considers how this phenomenon indicates a shift in the borders between literatures, how it disrupts accepted commercial practices, and the consequences of this for the industry and the role of English in the global book trade. In particular, it considers consequences for the quality of translations, questions regarding copyright, and the uncertain position of the translator when, at the time of translating, a contract is not in place between the translator and the publisher of the translation.
Liszt Palmeira de Oliveira
OBJECTIVE: to translate the Hip Outcome Score clinical evaluation questionnaire into Portuguese and culturally adapt it for Brazil. METHODS: the Hip Outcome Score questionnaire was translated into Portuguese following a methodology consisting of the steps of translation, back-translation, pretesting and final translation. RESULTS: the pretest was applied to 30 patients with hip pain without arthrosis. In the domain relating to activities of daily living, there were no difficulties in comprehending the translated questionnaire. In the final translation of the questionnaire, all the questions were understood by more than 85% of the individuals. CONCLUSION: the Hip Outcome Score questionnaire was translated and adapted to the Portuguese language and can be used in clinical evaluation of the hip. Additional studies are underway with the objective of evaluating the reproducibility and validity of the Brazilian translation.
What is translation - a craft, an art, a profession or a job? Although one of the oldest human activities, translation has still not been fully defined, and it is still young as an academic discipline. The paper defines the difference between translation and interpreting and then attempts to answer the question of what characteristics, knowledge and skills a translator must have, particularly one involved in court translation, and where his/her place in the communication process (both written and oral) is. When translating medical documentation, a translator is set within a medical language environment as an intermediary between two doctors (in other words, two professionals) in a process of communication which would be impossible without him, since it is conducted in two different languages. The paper also gives an insight into the types of medical documentation and who they are intended for. It gives practical examples of the problems faced in the course of translating certain types of medical documentation (hospital discharge papers, diagnoses, case reports, ...). Is it possible to standardize this kind of communication between professionals (doctors), which would subsequently make its translation easier? Although great efforts are made in Serbia regarding medical language and medical terminology, the conclusion is that the specific problems encountered by translators can hardly be overcome using only dictionaries and translation manuals.
Williams, Joshua T; Darcy, Isabelle; Newman, Sharlene D
The present study tracked activation pattern differences in response to sign language processing by late hearing second language learners of American Sign Language. Learners were scanned before the start of their language courses. They were scanned again after their first semester of instruction and their second, for a total of 10 months of instruction. The study aimed to characterize modality-specific to modality-general processing throughout the acquisition of sign language. Results indicated that before the acquisition of sign language, neural substrates related to modality-specific processing were present. After approximately 45 h of instruction, the learners transitioned into processing signs on a phonological basis (e.g., supramarginal gyrus, putamen). After one more semester of input, learners transitioned once more to a lexico-semantic processing stage (e.g., left inferior frontal gyrus) at which language control mechanisms (e.g., left caudate, cingulate gyrus) were activated. During these transitional steps right hemispheric recruitment was observed, with increasing left-lateralization, which is similar to other native signers and L2 learners of spoken language; however, specialization for sign language processing with activation in the inferior parietal lobule (i.e., angular gyrus), even for late learners, was observed. As such, the present study is the first to track L2 acquisition of sign language learners in order to characterize modality-independent and modality-specific mechanisms for bilingual language processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Fenlon, Jordan; Schembri, Adam; Rentelis, Ramas; Cormier, Kearsy
This paper investigates phonological variation in British Sign Language (BSL) signs produced with a ‘1’ hand configuration in citation form. Multivariate analyses of 2084 tokens reveal that handshape variation in these signs is constrained by linguistic factors (e.g., the preceding and following phonological environment, grammatical category, indexicality, lexical frequency). The only significant social factor was region. For the subset of signs where orientation was also investigated, only grammatical function was important (the surrounding phonological environment and social factors were not significant). The implications for an understanding of pointing signs in signed languages are discussed. PMID: 23805018
Natalia Ya Bolshunova
The article addresses the differential-psychological aspect of translating abilities as a component of language abilities. The peculiarity of translation is described as including both linguistic and paralinguistic aspects of rendering content and sense from one language into another, accompanied by linguistic and cognitive actions. A variety of individual psychological peculiarities of translation based on the translation dominant were revealed. It was demonstrated that these peculiarities correspond to the communicative and linguistic types of language abilities identified by M.K. Kabardov. Valid assessment methods were used, such as M.N. Borisova’s test for investigating the “artistic” and “thinking” types of Higher Nervous Activity (HNA), D. Wechsler’s test of verbal and nonverbal intelligence, and a test developed by the authors of the article for assessing the individual specificity of an interpreter’s activity as communicative or linguistic types of translating abilities. The results suggest that all the typological differences are based on the special human types of HNA. Subjects displaying the “thinking” type use linguistic methods when translating, whereas subjects displaying the “artistic” type try to use their own subjective life experience and extralinguistic methods when translating foreign language constructions. Extreme subjects of both types try to use the most developed components of their special abilities in order to compensate for the less developed components of the other type so as to accomplish some language tasks. In this case subjects of both types can fulfil these tasks rather successfully.
Lu, Jenny; Jones, Anna; Morgan, Gary
There is debate about how input variation influences child language. Most deaf children are exposed to a sign language from their non-fluent hearing parents and experience a delay in exposure to accessible language. A small number of children receive language input from their deaf parents who are fluent signers. Thus it is possible to document the…
In a diachronic perspective from the 16th century to the present, this article investigates translated interlinguistic agreement and difference in the use of the temporally marked Slovenian prepositional phrases that appeared in the semantic group of verba dicendi in the first two books of the Old Testament and the New Testament of the oldest Slovenian translation of the Bible, from 1584, and that were replaced in the modern literary language in the 19th century by the introduction of prepositionless or other prepositional patterns. A comparison is made on the basis of Internet publications of parallel sections of six foreign-language translations (Latin, German, two English [17th century and modern], French and Russian), and the extent to which these prepositional phrases are covered by older or modern literary Slovenian syntactic patterns is determined.
Linguistic ideologies that are left unquestioned and unexplored, especially as reflected and produced in marginalized language communities, can contribute to inequality made real in decisions about languages and the people who use them. One of the primary bodies of knowledge guiding international language policy is the International Organization…
Williams, Joshua T.; Darcy, Isabelle; Newman, Sharlene D.
Understanding how language modality (i.e., signed vs. spoken) affects second language outcomes in hearing adults is important both theoretically and pedagogically, as it can determine the specificity of second language (L2) theory and inform how best to teach a language that uses a new modality. The present study investigated which…
Tiago Hermano Breunig
When inquiring into the sign “?”, Flusser postulates that meaning is “one of the main problems of present-day thought.” From this sign, Flusser differentiates meaning from sense, which he defines as “what means”. Thus the problem of meaning converges with the problem of thought itself, since, according to Flusser, all thought comes from a tautology, i.e., what “means nothing”. If the understanding of meaning implies the musical aspects of language, as does the sign “?”, then, according to Flusser, music falls “into the same abyss of tautology”, as it goes beyond the limits of language. Flusser believes that the discussion of the limits of language contributes to the problem of the meaning of music, and confesses that among all the existential signs the “?” is the one that best articulates the situation in which we find ourselves. It is in this sense, in this “Stimmung”, as Flusser says about the meaning of the sign “?”, that this paper aims to reflect, from the problem of meaning, on the relationship between music and the poetry contemporary to Flusser.
Dean, Robyn K.; Pollard, Robert Q., Jr.
This article uses the framework of demand-control theory to examine the occupation of sign language interpreting. It discusses the environmental, interpersonal, and intrapersonal demands that impinge on the interpreter's decision latitude and notes the prevalence of cumulative trauma disorders, turnover, and burnout in the interpreting profession.…
Mpuang, Kerileng D.; Mukhopadhyay, Sourav; Malatsi, Nelly
This descriptive phenomenological study investigates teachers' experiences of using sign language for learners who are deaf in the primary schools in Botswana. Eight in-service teachers who have had more than ten years of teaching deaf or hard of hearing (DHH) learners were purposively selected for this study. Data were collected using multiple…
Fang, Yuxing; Chen, Quanjing; Lingnau, Angelika; Han, Zaizhu; Bi, Yanchao
The observation of other people's actions recruits a network of areas including the inferior frontal gyrus (IFG), the inferior parietal lobule (IPL), and posterior middle temporal gyrus (pMTG). These regions have been shown to be activated through both visual and auditory inputs. Intriguingly, previous studies found no engagement of IFG and IPL for deaf participants during non-linguistic action observation, leading to the proposal that auditory experience or sign language usage might shape the functionality of these areas. To understand which variables induce plastic changes in areas recruited during the processing of other people's actions, we examined the effects of tasks (action understanding and passive viewing) and effectors (arm actions vs. leg actions), as well as sign language experience in a group of 12 congenitally deaf signers and 13 hearing participants. In Experiment 1, we found a stronger activation during an action recognition task in comparison to a low-level visual control task in IFG, IPL and pMTG in both deaf signers and hearing individuals, but no effect of auditory or sign language experience. In Experiment 2, we replicated the results of the first experiment using a passive viewing task. Together, our results provide robust evidence demonstrating that the response obtained in IFG, IPL, and pMTG during action recognition and passive viewing is not affected by auditory or sign language experience, adding further support for the supra-modal nature of these regions.
Mohd Hanafi Mohd Yasin
Full Text Available This research concerns the readiness of typical students to communicate using sign language in a Hearing Impairment Integration Programme. Sixty typical students from a secondary-school Special Education Integration Programme in Malacca were chosen as respondents. The research instrument was a questionnaire consisting of four parts: students' demography (Part A), students' knowledge (Part B), students' ability to communicate (Part C), and students' interest in communicating (Part D). The questionnaire was adapted from Asnul Dahar and Rabiah's study 'The Readiness of Students in Following Vocational Subjects at Jerantut District, Rural Secondary School in Pahang'. Descriptive analysis was used to analyse the data, and mean scores were used to determine the level of respondents' perception of each question. The findings showed a positive attitude among typical students towards sign language as a communication medium: they were interested in communicating in sign language and were willing to attend a Sign Language class if one were offered.
MacKinnon, Gregory; Soutar, Iris
The Jamaican Association for the Deaf, in its responsibility to oversee education for individuals who are deaf in Jamaica, has demonstrated an urgent need for a dictionary that assists students, educators, and parents with the practical use of "Jamaican Sign Language." While paper versions of a preliminary resource have been explored…
Beal-Alvarez, Jennifer S.
This article presents results of a longitudinal study of receptive American Sign Language (ASL) skills for a large portion of the student body at a residential school for the deaf across four consecutive years. Scores were analyzed by age, gender, parental hearing status, years attending the residential school, and presence of a disability (i.e.,…
This study compared the effects of Picture Exchange Communication System (PECS) and sign language training on the acquisition of mands (requests for preferred items) of students with autism. The study also examined the differential effects of each modality on students' acquisition of vocal behavior. Participants were two elementary school students…
Karpouzis, K.; Caridakis, G.; Fotinea, S.-E.; Efthimiou, E.
In this paper, we present how creation and dynamic synthesis of linguistic resources of Greek Sign Language (GSL) may serve to support development and provide content to an educational multitask platform for the teaching of GSL in early elementary school classes. The presented system utilizes standard virtual character (VC) animation technologies…
Woodward, James; Hoa, Nguyen Thi
This paper discusses how the Nippon Foundation-funded project "Opening University Education to Deaf People in Viet Nam through Sign Language Analysis, Teaching, and Interpretation," also known as the Dong Nai Deaf Education Project, has been implemented through sign language studies from 2000 through 2012. This project has provided deaf…
Dye, Matthew W G; Seymour, Jenessa L; Hauser, Peter C
Deafness results in cross-modal plasticity, whereby visual functions are altered as a consequence of a lack of hearing. Here, we present a reanalysis of data originally reported by Dye et al. (PLoS One 4(5):e5640, 2009) with the aim of testing additional hypotheses concerning the spatial redistribution of visual attention due to deafness and the use of a visuogestural language (American Sign Language). By looking at the spatial distribution of errors made by deaf and hearing participants performing a visuospatial selective attention task, we sought to determine whether there was evidence for (1) a shift in the hemispheric lateralization of visual selective function as a result of deafness, and (2) a shift toward attending to the inferior visual field in users of a signed language. While no evidence was found for or against a shift in lateralization of visual selective attention as a result of deafness, a shift in the allocation of attention from the superior toward the inferior visual field was inferred in native signers of American Sign Language, possibly reflecting an adaptation to the perceptual demands imposed by a visuogestural language.
Alexander G. Kravetsky
Full Text Available The first translations of the New Testament into the Russian language, which were carried out at the beginning of the 19th century, are usually regarded as a missionary project. But the language of these translations may prove that they were addressed to a rather narrow audience. As is known, the Russian Bible Society established in 1812 began its activities not with translations into Russian but with the mass edition of the Church Slavonic text of the Bible. In other words, it was the Church Slavonic Bible that was initially taken as the “Russian” Bible. Such a perception correlated with the sociolinguistic situation of that period, when, among the literate country and town dwellers, people learned grammar according to practices dating back to Medieval Rus’, which meant learning by heart the Church Slavonic alphabet, the Book of Hours, and the Book of Psalms; these readers were in the majority, and they could understand the Church Slavonic Bible much better than they could a Russian-language version. That is why the main audience for the “Russian” Bible was the educated classes who read the Bible in European languages, not in Russian. The numbers of targeted readers for the Russian-language translation of the Bible were significantly lower than those for the Church Slavonic version. The ideas of the “language innovators” (who favored using Russian as a basis for a new national language thus appeared to be closer to the approach taken by the Bible translators than the ideas of “the upholders of the archaic tradition” (who favored using the vocabulary and forms of Church Slavonic as their basis. The language into which the New Testament was translated moved ahead of the literary standard of that period, and that was one of the reasons why the work on the translation of the Bible into the Russian language was halted.
Burtis, M.D. [comp.] [Oak Ridge National Lab., TN (United States). Carbon Dioxide Information Analysis Center; Razuvaev, V.N.; Sivachok, S.G. [All-Russian Research Inst. of Hydrometeorological Information--World Data Center, Obninsk (Russian Federation)
This report presents English-translated abstracts of important Russian-language literature concerning general circulation models as they relate to climate change. In addition to the bibliographic citations and abstracts translated into English, this report presents the original citations and abstracts in Russian. Author and title indexes are included to assist the reader in locating abstracts of particular interest.
Razuvaev, V.N.; Sivachok, S.G. [All-Russian Research Inst. of Hydrometeorological Information-World Data Center, Obninsk (Russian Federation)
This report presents abstracts in Russian and translated into English of important Russian-language literature concerning aerosols as they relate to climate change. In addition to the bibliographic citations and abstracts translated into English, this report presents the original citations and abstracts in Russian. Author and title indexes are included to assist the reader in locating abstracts of particular interest.
Barker-Plummer, Dave; Dale, Robert; Cox, Richard; Romanczuk, Alex
We have assembled a large corpus of student submissions to an automatic grading system, where the subject matter involves the translation of natural language sentences into propositional logic. Of the 2.3 million translation instances in the corpus, 286,000 (approximately 12%) are categorized as being in error. We want to understand the nature of…
Burtis, M.D. [comp.]
This report presents abstracts (translated into English) of important Russian-language literature concerning clouds as they relate to climate change. In addition to the bibliographic citations and abstracts translated into English, this report presents the original citations and abstracts in Russian. Author and title indexes are included to assist the reader in locating abstracts of particular interest.
Haque, R.; Kumar Naskar, S.; Bosch, A.P.J. van den; Way, A.
The translation features typically used in Phrase-Based Statistical Machine Translation (PB-SMT) model dependencies between the source and target phrases, but not among the phrases in the source language themselves. A swathe of research has demonstrated that integrating source context modelling
Full Text Available In this article, the question of the existence of the Croatian literary language in semiotic space, i.e. in the system of culture, is taken into consideration. In order to support the justification of the very term Croatian language, and thus the thesis that such a language exists, the argumentation is directed towards theoretical investigation in the semiotic field. There is an attempt to show that the disputes in post-Yugoslav linguistics are, conventionally speaking, not 'ontological' but 'epistemological' problems. Thus, the important question is not whether the Croatian language, or any other language, e.g. Montenegrin, exists, but rather: what does it mean for a literary language to exist or not to exist?
Bonvillian, John D.; And Others
The relationship between sign language rehearsal and written free recall was examined by having deaf college students rehearse the sign language equivalents of printed English words. Studies of both immediate and delayed memory suggested that word recall increased as a function of total rehearsal frequency and frequency of appearance in rehearsal…
Nelson, Lauri H.; White, Karl R.; Grewe, Jennifer
The development of proficient communication skills in infants and toddlers is an important component to child development. A popular trend gaining national media attention is teaching sign language to babies with normal hearing whose parents also have normal hearing. Thirty-three websites were identified that advocate sign language for hearing…
Al-Amer, Rasmieh; Ramjan, Lucie; Glew, Paul; Darwish, Maram; Salamonson, Yenna
To illuminate translation practice in cross-language interviews in health care research and its impact on the construction of the data. Globalisation and changing patterns of migration have created changes to the world's demography; this has presented challenges for overarching social domains, specifically in the health sector. Providing ethno-cultural health services is a timely and central facet in an ever-increasingly diverse world. Nursing and other health sectors employ cross-language research to provide knowledge and understanding of the needs of minority groups, which underpins culturally sensitive care services. However, when cultural and linguistic differences exist, they pose unique complexities for cross-cultural health care research, particularly in qualitative research, where narrative data are central for communication, as most participants prefer to tell their story in their native language. Consequently, translation is often unavoidable in order to make a respondent's narrative vivid and comprehensible, yet there is no consensus about how researchers should address this vital issue. An integrative literature review. PubMed and CINAHL databases were searched for relevant studies published before January 2014, and the reference lists of selected studies were hand-searched. This review of cross-language health care studies highlighted three major themes, which identify factors often reported to affect the translation and production of data in cross-language research: (1) translation style; (2) translators; and (3) trustworthiness of the data. A plan detailing the translation process and analysis of health care data must be determined from the study outset to ensure credibility is maintained. A transparent and systematic approach to reporting the translation process not only enhances the integrity of the findings but also provides overall rigour and auditability. It is important that minority groups have a voice in health care research which, if accurately
Prof. Dr. Zeki KARAKAYA
Full Text Available The purpose of this study is to shed light on the definition of translation acquisition, on how it differs from language acquisition, and, by comparing translations by two authors with different levels of translation acquisition, on how these differences can be used in language teaching. In the first section, the issue of translation acquisition is addressed; by including the definitions of different scholars, a general overview of translation acquisition is provided. Moreover, in this section, in order to see how translation acquisition can be distinguished from language acquisition and to bring out the difference, an application was conducted with students and the results were ascertained. As is known, translation comparison is an instrument for language teaching, comparative linguistics, comparative graphology, translation criticism and translation acquisition. In this study, however, some suggestions and examples are presented concerning how translation comparison, as one methodological tool, can be of benefit specifically in language teaching. The comparative translation method has been applied, and examples of and suggestions about its functions in language teaching are presented. [Translated from the Turkish version of the abstract:] The purpose of this study is to shed light on the definition of translation competence, its differentiation from language competence, and, through a comparison of the translations of two authors possessing translation competence at different levels, how these differences can be used in language teaching. In the first section, the topic of translation competence is emphasised; by including the definitions of various scholars, a general overview of translation competence is provided. Moreover, in this section, in order to see how translation competence is to be distinguished from language competence and to lay the difference in plain view, an application was conducted with students and the results were ascertained
Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu
Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of five components. Especially, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different size of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.
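The code-matching step this record describes can be sketched as follows. This is a toy illustration: the component labels, the code table, and the per-component classifier outputs are assumptions for demonstration, not the authors' implementation.

```python
# Toy sketch of component-based sign classification by code matching.
# Each sign word is encoded as a 5-tuple of discrete component labels:
# (hand shape, axis, orientation, rotation, trajectory).

COMPONENTS = ("hand_shape", "axis", "orientation", "rotation", "trajectory")

# Hypothetical code table built from a reference subject's recordings.
CODE_TABLE = {
    ("flat", "x", "palm_up", "none", "line"): "HELLO",
    ("fist", "y", "palm_in", "cw", "circle"): "THANKS",
    ("flat", "y", "palm_in", "none", "arc"): "PLEASE",
}

def hamming(a, b):
    """Number of components on which two codes disagree."""
    return sum(x != y for x, y in zip(a, b))

def classify(predicted_code):
    """Match the per-component classifier outputs against the code
    table and return the sign word whose code is closest."""
    best = min(CODE_TABLE, key=lambda code: hamming(code, predicted_code))
    return CODE_TABLE[best]

# One component classifier misfires ("cw" -> "ccw"); code matching
# still recovers the intended word, since the other four components agree.
print(classify(("fist", "y", "palm_in", "ccw", "circle")))  # THANKS
```

The nearest-code lookup is what lets a small training set cover a large vocabulary: new users only need classifiers for the component inventories, not for every word.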
André Nogueira Xavier
Full Text Available According to Xavier (2006), there are signs in Brazilian Sign Language (Libras) that are typically produced with one hand, while others are made with both hands. However, recent studies document the production, with both hands, of signs that usually use only one hand, and vice-versa (XAVIER, 2011; XAVIER, 2013; BARBOSA, 2013). This study discusses 27 Libras signs that are typically made with one hand and that, when articulated with both hands, present changes in their meanings. The data discussed here, although originally collected from observations of spontaneous signing by different Libras users, were elicited from two deaf participants in distinct sessions. After being presented with the two forms of the selected signs (made with one and with two hands), the participants were asked to create examples of use for each of the signs. The results showed that the duplication of hands, at least for the same sign in some cases, may happen due to different factors (such as plurality, aspect and intensity).
Knapp, Heather Patterson; Corina, David P.
Language is proposed to have developed atop the human analog of the macaque mirror neuron system for action perception and production [Arbib M.A. 2005. From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics (with commentaries and author's response). "Behavioral and Brain Sciences, 28", 105-167; Arbib…
Toral Ruiz, Antonio
We present a widely applicable methodology to bring machine translation (MT) to under-resourced languages in a cost-effective and rapid manner. Our proposal relies on web crawling to automatically acquire parallel data to train statistical MT systems if any such data can be found for the language
This article examines how students and teachers at a non-Orthodox Jewish day school in New York City negotiate the use of translation within the context of an institutionalized language policy that stresses the use of a sacred language over that of the vernacular. Specifically, this paper analyzes the negotiation of a Hebrew-only policy through…
Crestani, Anelise Henrich; Moraes, Anaelena Bragança de; Souza, Ana Paula Ramos de
To analyze the results of the validation of enunciative signs of language acquisition built for children aged 3 to 12 months. The signs were built based on mechanisms of language acquisition in an enunciative perspective and on clinical experience with language disorders. The signs were submitted to judgments of clarity and relevance by a sample of six experts, doctors in linguistics with knowledge of psycholinguistics and the language clinic. In the reliability validation, two judges/evaluators helped to apply the instruments to videos of 20% of the total sample of mother-infant dyads, using the inter-evaluator method. The internal-consistency method was applied to the total sample, which consisted of 94 mother-infant dyads for the contents of Phase 1 (3 to 6 months) and 61 mother-infant dyads for the contents of Phase 2 (7 to 12 months). The data were collected through the analysis of mother-infant interaction, based on filming of the dyads and application of the parameters to be validated according to the child's age. Data were organized in a spreadsheet and then transferred to statistical software for analysis. The judgments of clarity/relevance indicated no modifications to be made to the instruments. The reliability test showed almost perfect agreement between judges (0.8 ≤ Kappa ≤ 1.0); only item 2 of Phase 1 showed substantial agreement (0.6 ≤ Kappa ≤ 0.79). The internal consistency for Phase 1 was alpha = 0.84, and for Phase 2, alpha = 0.74. This demonstrates the reliability of the instruments. The results suggest adequate content validity of the instruments created for both age groups, demonstrating the relevance of the content of enunciative signs of language acquisition.
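The two reliability statistics this record reports, inter-rater kappa and Cronbach's alpha for internal consistency, can be computed as in the following sketch. The ratings and item scores below are made-up examples, not the study's data.

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical judgments:
    observed agreement corrected for chance agreement."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a list of per-item score lists
    (one inner list per item, one score per subject)."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Hypothetical presence/absence judgments by two evaluators.
r1 = ["yes", "yes", "no", "yes", "no", "no"]
r2 = ["yes", "yes", "no", "no", "no", "no"]
print(round(cohen_kappa(r1, r2), 2))  # 0.67, "substantial" agreement
```

On the conventional Landis-Koch scale used in the abstract, values of 0.61-0.80 count as substantial and 0.81-1.00 as almost perfect agreement.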
Full Text Available In both vocal and sign languages, we can distinguish word-, sentence-, and discourse-level integration in terms of hierarchical processes, which integrate various elements into another higher level of constructs. In the present study, we used magnetic resonance imaging and voxel-based morphometry to test three language tasks in Japanese Sign Language (JSL): word-level (Word), sentence-level (Sent), and discourse-level (Disc) decision tasks. We analyzed cortical activity and gray matter volumes of Deaf signers, and clarified three major points. First, we found that the activated regions in the frontal language areas gradually expanded in the dorso-ventral axis, corresponding to a difference in linguistic units for the three tasks. Moreover, the activations in each region of the frontal language areas were incrementally modulated with the level of linguistic integration. These dual mechanisms of the frontal language areas may reflect a basic organization principle of hierarchically integrating linguistic information. Secondly, activations in the lateral premotor cortex and inferior frontal gyrus were left-lateralized. Direct comparisons among the language tasks exhibited more focal activation in these regions, suggesting their functional localization. Thirdly, we found significantly positive correlations between individual task performances and gray matter volumes in localized regions, even when the ages of acquisition of JSL and Japanese were factored out. More specifically, correlations with the performances of the Word and Sent tasks were found in the left precentral/postcentral gyrus and insula, respectively, while correlations with those of the Disc task were found in the left ventral inferior frontal gyrus and precuneus. The unification of functional and anatomical studies would thus be fruitful for understanding human language systems from the aspects of both universality and individuality.
Kastner, Itamar; Meir, Irit; Sandler, Wendy; Dachkovsky, Svetlana
This paper introduces data from Kafr Qasem Sign Language (KQSL), an as-yet undescribed sign language, and identifies the earliest indications of embedding in this young language. Using semantic and prosodic criteria, we identify predicates that form a constituent with a noun, functionally modifying it. We analyze these structures as instances of embedded predicates, exhibiting what can be regarded as very early stages in the development of subordinate constructions, and argue that these structures may bear directly on questions about the development of embedding and subordination in language in general. Deutscher (2009) argues persuasively that nominalization of a verb is the first step—and the crucial step—toward syntactic embedding. It has also been suggested that prosodic marking may precede syntactic marking of embedding (Mithun, 2009). However, the relevant data from the stage at which embedding first emerges have not previously been available. KQSL might be the missing piece of the puzzle: a language in which a noun can be modified by an additional predicate, forming a proposition within a proposition, sustained entirely by prosodic means. PMID:24917837
Ashraf, Md Izhar; Sinha, Sitabhra
Language, which allows complex ideas to be communicated through symbolic sequences, is a characteristic feature of our species and manifested in a multitude of forms. Using large written corpora for many different languages and scripts, we show that the occurrence probability distributions of signs at the left and right ends of words have a distinct heterogeneous nature. Characterizing this asymmetry using quantitative inequality measures, viz. information entropy and the Gini index, we show that the beginning of a word is less restrictive in sign usage than the end. This property is not simply attributable to the use of common affixes as it is seen even when only word roots are considered. We use the existence of this asymmetry to infer the direction of writing in undeciphered inscriptions that agrees with the archaeological evidence. Unlike traditional investigations of phonotactic constraints which focus on language-specific patterns, our study reveals a property valid across languages and writing systems. As both language and writing are unique aspects of our species, this universal signature may reflect an innate feature of the human cognitive phenomenon. PMID:29342176
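The two inequality measures this record applies, information entropy and the Gini index of sign-occurrence distributions at word edges, can be reproduced on any word list. A minimal sketch (the five-word toy corpus is an assumption chosen only to make the asymmetry visible):

```python
from collections import Counter
from math import log2

def distribution(words, position):
    """Probability distribution of the signs occurring at a given
    position of each word (0 = first sign, -1 = last sign)."""
    counts = Counter(w[position] for w in words)
    total = sum(counts.values())
    return [c / total for c in counts.values()]

def entropy(probs):
    """Shannon entropy in bits; higher = less restrictive usage."""
    return -sum(p * log2(p) for p in probs)

def gini(probs):
    """Gini index of a distribution; 0 = perfectly uniform."""
    n = len(probs)
    mean = sum(probs) / n
    pairwise = sum(abs(a - b) for a in probs for b in probs)
    return pairwise / (2 * n * n * mean)

words = ["sign", "talk", "word", "hand", "mind"]  # toy corpus
first = distribution(words, 0)   # initial signs: all distinct
last = distribution(words, -1)   # final signs: 'd' dominates

# Beginnings show higher entropy (less restriction) and lower
# inequality than ends, mirroring the asymmetry reported above.
print(entropy(first) > entropy(last))  # True
```

With real corpora one would compute these per script over all word tokens (or word roots, to control for affixes), then compare the left-edge and right-edge values.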
de Groot, A.M.B.; Poot, R.
Three groups of 20 unbalanced bilinguals, different from one another in second language (L2) fluency, translated one set of words from L1, Dutch, to L2, English (forward translation), and a second set of matched words from L2 to L1 (backward translation). In both language sets we orthogonally
BADEA Florina; PETRESCU Ligia
The paper presents general notions about graphical language used in engineering, the international standards used for representing objects and also the most important software applications used in Computer Aided Design for the development of products in engineering.
Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.
Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age-onset of first language acquisition and the quality and quantity of…
Full Text Available The cyberspace is populated with valuable information sources expressed in about 1500 different languages and dialects. Yet, for the vast majority of web surfers this wealth of information is practically inaccessible or meaningless. Recent advancements in cross-lingual information retrieval, multilingual summarization, cross-lingual question answering and machine translation promise to narrow the linguistic gaps and lower the communication barriers between humans and/or software agents. Most of these language technologies are based on statistical machine learning techniques, which require large volumes of cross-lingual data. The most adequate type of cross-lingual data is represented by parallel corpora, collections of reciprocal translations. However, it is not easy to find enough parallel data for any language pair that might be of interest, and when the required parallel data refer to specialized (narrow) domains, the scarcity of data becomes even more acute. Intelligent information extraction techniques from comparable corpora provide one possible answer to this lack of translation data.
Kontra, Edit H.; Csizer, Kata
The aim of this study is to point out the relationship between foreign language learning motivation and sign language use among hearing impaired Hungarians. In the article we concentrate on two main issues: first, to what extent hearing impaired people are motivated to learn foreign languages in a European context; second, to what extent sign…
This article describes a series of exploratory L1 to L2 dubbing projects for which students translated and used editing software to dub short American film and TV clips into their target language. Translating and dubbing into the target language involve students in multifaceted, high-level language production tasks that lead to enhanced vocabulary…
This paper introduces a new Chinese Sign Language recognition (CSLR) system and a method for real-time face and hand tracking used in the system. In this method, an improved agent algorithm is used to extract and track the face and hand regions. A Kalman filter is introduced to predict the position and extent of the search rectangle, and self-adaptive updating of the target colour is designed to counteract the effects of illumination.
Full Text Available Background and Aim: Learning and memory are two high-level cognitive functions in humans that are influenced by hearing loss. In our study, the mini-mental state examination (MMSE) and the Rey auditory-verbal learning test (RAVLT) were conducted to study cognitive status and lexical learning and memory in deaf adults using sign language. Methods: This cross-sectional comparative study was conducted on 30 available congenitally deaf adults using Persian sign language and 46 normal-hearing adults aged 19 to 27 years, of both sexes, with a minimum education of diploma level. After the mini-mental state examination, the Rey auditory-verbal learning test was run on computers to evaluate lexical learning and memory with visual presentation. Results: Mean scores on the mini-mental state examination and the Rey auditory-verbal learning test in congenitally deaf adults were significantly lower than in normal-hearing individuals on all scores (p=0.018) except in two parts of the Rey test. A significant correlation was found between the results of the two tests only in the normal-hearing group (p=0.043). Gender had no effect on test results. Conclusion: Cognitive status and lexical memory and learning in congenitally deaf individuals are weaker than in normal-hearing subjects. It seems that using sign language as the main way of communication in deaf people causes poor lexical memory and learning.
Manoranjan, M D; Robinson, J A
Deaf sign language transmitted by video requires a temporal resolution of 8 to 10 frames/s for effective communication. Conventional videoconferencing applications, when operated over low bandwidth telephone lines, provide very low temporal resolution of pictures, of the order of less than a frame per second, resulting in jerky movement of objects. This paper presents a practical solution for sign language communication, offering adequate temporal resolution of images using moving binary sketches or cartoons, implemented on standard personal computer hardware with low-cost cameras and communicating over telephone lines. To extract cartoon points an efficient feature extraction algorithm adaptive to the global statistics of the image is proposed. To improve the subjective quality of the binary images, irreversible preprocessing techniques, such as isolated point removal and predictive filtering, are used. A simple, efficient and fast recursive temporal prefiltering scheme, using histograms of successive frames, reduces the additive and multiplicative noise from low-cost cameras. An efficient three-dimensional (3-D) compression scheme codes the binary sketches. Subjective tests performed on the system confirm that it can be used for sign language communication over telephone lines.
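The isolated-point-removal preprocessing mentioned above can be illustrated with a toy filter on a binary sketch. This is a hypothetical sketch, not the paper's implementation: a set pixel is cleared when none of its 8 neighbours is set, removing single-pixel speckle while preserving connected cartoon strokes.

```python
# Toy "isolated point removal" for a binary sketch: clear any set pixel
# that has no set pixel among its 8 neighbours. Hypothetical illustration.

def remove_isolated_points(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]           # copy so checks use the input image
    for y in range(h):
        for x in range(w):
            if img[y][x]:
                neighbours = sum(
                    img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))
                    if (ny, nx) != (y, x)
                )
                if neighbours == 0:
                    out[y][x] = 0           # lone speckle: drop it
    return out

sketch = [
    [0, 0, 0, 0],
    [0, 1, 0, 0],   # isolated speckle
    [0, 0, 0, 0],
    [1, 1, 0, 0],   # two connected stroke pixels
]
cleaned = remove_isolated_points(sketch)
print(cleaned[1][1], cleaned[3][0], cleaned[3][1])  # 0 1 1
```

The speckle at (1,1) is removed while the connected pair on the bottom row survives, which is the intended effect before compressing the binary frames.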
Carolina Hessel Silveira
Full Text Available The paper, which provides partial results of a master’s dissertation, has sought to contribute to the Sign Language curriculum in deaf schooling. We began by understanding the importance of sign languages for deaf people’s development and found that a large part of the deaf have hearing parents, which emphasises the significance of teaching LIBRAS (Brazilian Sign Language) in schools for the deaf. We should also consider the importance of this study in building deaf identities and strengthening the deaf culture. We have obtained the theoretical basis in the so-called Deaf Studies and in some experts in curriculum theories. The main objective of this study has been to conduct an analysis of the LIBRAS curriculum at work in schools for the deaf in Rio Grande do Sul, Brazil. The curriculum analysis has shown a degree of diversity: in some curricula, content from one year is repeated in the next one with no articulation. In others, one can find a preoccupation with issues of deaf identity and culture, but some of them include contents that are not related to LIBRAS or the deaf culture, but rather to discipline for the deaf in general. By providing positive and negative aspects, the analysis data may help in discussions about difficulties, progress and problems in LIBRAS teacher education for deaf students.
Wijayanti Nurul Khotimah
Full Text Available Sign Language recognition is used to help people with normal hearing communicate effectively with the deaf and hearing-impaired. Based on a survey conducted by a Multi-Center Study in Southeast Asia, Indonesia was among the top four countries in the number of patients with hearing disability (4.6%). Therefore, the existence of Sign Language recognition is important. Some research has been conducted in this field, and many neural network types have been used for recognizing many kinds of sign languages; however, their performance still needs to be improved. This work focuses on the ASL (Alphabet Sign Language) in SIBI (Sign System of Indonesian Language), which uses one hand and 26 gestures. Here, thirty-four features were extracted using Leap Motion. Further, a new method, Rule Based-Backpropagation Genetic Algorithm Neural Network (RB-BPGANN), was used to recognize these sign languages. This method is a combination of rules and a Backpropagation Genetic Algorithm Neural Network (BPGANN). Based on experiments, the proposed application can recognize Sign Language with up to 93.8% accuracy. It performs very well on large multiclass problems and can be a solution to the overfitting problem in neural network algorithms.
Lieberman, Amy M
Visual attention is a necessary prerequisite to successful communication in sign language. The current study investigated the development of attention-getting skills in deaf native-signing children during interactions with peers and teachers. Seven deaf children (aged 21-39 months) and five adults were videotaped during classroom activities for approximately 30 hr. Interactions were analyzed in depth to determine how children obtained and maintained attention. Contrary to previous reports, children were found to possess a high level of communicative competence from an early age. Analysis of peer interactions revealed that children used a range of behaviors to obtain attention with peers, including taps, waves, objects, and signs. Initiations were successful approximately 65% of the time. Children followed up failed initiation attempts by repeating the initiation, using a new initiation, or terminating the interaction. Older children engaged in longer and more complex interactions than younger children. Children's early exposure to and proficiency in American Sign Language is proposed as a likely mechanism that facilitated their communicative competence.
Maller, S; Singleton, J; Supalla, S; Wix, T
We describe the procedures for constructing an instrument designed to evaluate children's proficiency in American Sign Language (ASL). The American Sign Language Proficiency Assessment (ASL-PA) is a much-needed tool that potentially could be used by researchers, language specialists, and qualified school personnel. A half-hour ASL sample is collected on video from a target child (between ages 6 and 12) across three separate discourse settings and is later analyzed and scored by an assessor who is highly proficient in ASL. After the child's language sample is scored, he or she can be assigned an ASL proficiency rating of Level 1, 2, or 3. At this phase in its development, substantial evidence of reliability and validity has been obtained for the ASL-PA using a sample of 80 profoundly deaf children (ages 6-12) of varying ASL skill levels. The article first explains the item development and administration of the ASL-PA instrument, then describes the empirical item analysis, standard setting procedures, and evidence of reliability and validity. The ASL-PA is a promising instrument for assessing elementary school-age children's ASL proficiency. Plans for further development are also discussed.
Full Text Available This article aims to broaden the discussion on verbal-visual utterances, reflecting upon theoretical assumptions of the Bakhtin Circle that can reinforce the argument that the utterances of a language that employs a visual-gestural modality convey plastic-pictorial and spatial values of signs also through non-manual markers (NMMs). This research highlights the difference between affective expressions, which are paralinguistic communications that may complement an utterance, and verbal-visual grammatical markers, which are linguistic because they are part of the architecture of the phonological, morphological, syntactic-semantic and discursive levels of a particular language. These markers will be described, taking Brazilian Sign Language (Libras) as a starting point, thereby including this language in discussions of verbal-visual discourse when investigating the need for research on this discourse also in the linguistic analyses of oral-auditory modality languages, and including Translinguistics as an area of knowledge that analyzes discourse, focusing upon the verbal-visual markers used by the subjects in their utterance acts.
Full Text Available Translating figurative language involves more than just replacing the figurative expression with its equivalent in the target language. It is therefore not surprising that the translation of figurative language has its own set of challenges. Problems the translator faces in translating Malay figurative language into English include complexities in understanding, interpreting and recreating figurative language that is unique to the Source Language (SL) culture, which has to be explained and described in the Target Language (TL), where such practices and customs are non-existent. Secondly, the Source Text (ST) figurative language may appear in a variety of types and have distinct denotative and connotative meanings and references; most often, it is difficult to find an equivalent which totally matches the original meaning or concept. This particular paper analyses the translation of figurative language extracted from UniMAP's Vice Chancellor Keynote Speech in 2015. Findings reveal that the three categories of figurative language identified were idioms, metaphors and similes. The translation strategies used were omission, paraphrase, or translation with a similar meaning but in a different form.
Mateer, C A; Rapport, R L; Kettrick, C
A normally hearing left-handed patient familiar with American Sign Language (ASL) was assessed under sodium amytal conditions and with left cortical stimulation in both oral speech and signed English. Lateralization was mixed but complementary in each language mode: the right hemisphere perfusion severely disrupted motoric aspects of both types of language expression, the left hemisphere perfusion specifically disrupted features of grammatical and semantic usage in each mode of expression. Both semantic and syntactic aspects of oral and signed responses were altered during left posterior temporal-parietal stimulation. Findings are discussed in terms of the neurological organization of ASL and linguistic organization in cases of early left hemisphere damage.
Mehri, Ali; Jamaati, Maryam
Zipf's law, as a power-law regularity, confirms long-range correlations between the elements in natural and artificial systems. In this article, this law is evaluated for one hundred living languages: we calculate Zipf's exponent for translations of the holy Bible into these languages. The results show that the average Zipf's exponent in the studied texts is slightly above unity. In some language families, all the studied languages have Zipf's exponents uniformly lower or higher than unity. It seems that geographical distribution influences the communication between speakers of different languages in a language family, and affects the similarity between their Zipf's exponents. The Bible conveys the same concepts regardless of language, but discrepancies in grammatical rules and in the syntactic regularities of applying stop words to build sentences and convey a certain concept lead to differences in Zipf's exponent across languages.
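The exponent calculation can be sketched as a least-squares fit on the log-log rank-frequency curve. This is an illustrative reconstruction assuming a simple linear regression; the study's exact fitting procedure is not specified here.

```python
# Estimate a Zipf exponent from a text: rank the word frequencies, then
# fit log(frequency) ~ -s * log(rank) by ordinary least squares.
# Illustrative sketch only.
import math
from collections import Counter

def zipf_exponent(text: str) -> float:
    freqs = sorted(Counter(text.lower().split()).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope   # Zipf's law predicts an exponent of about 1

# Synthetic "text" whose rank-frequency distribution approximates 100/rank:
words = []
for rank, count in enumerate([100, 50, 33, 25, 20, 17, 14, 12, 11, 10], start=1):
    words += [f"w{rank}"] * count
print(zipf_exponent(" ".join(words)))   # close to 1.0 for this distribution
```

For a real corpus (e.g. a Bible translation), the same fit would typically be restricted to the mid-rank region, since the head and tail of the rank-frequency curve deviate from a pure power law.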
Perniss, Pamela; Lu, Jenny C.; Morgan, Gary; Vigliocco, Gabriella
Most research on the mechanisms underlying referential mapping has assumed that learning occurs in ostensive contexts, where label and referent co-occur, and that form and meaning are linked by arbitrary convention alone. In the present study, we focus on "iconicity" in language, that is, resemblance relationships between form and…
Taylor, Randolph S; Francis, Wendy S
Previous literature has demonstrated conceptual repetition priming across languages in bilinguals. This between-language priming effect is taken as evidence that translation equivalents have shared conceptual representations across languages. However, the vast majority of this research has been conducted using only concrete nouns as stimuli. The present experiment examined conceptual repetition priming within and between languages in adjectives, a part of speech not previously investigated in studies of bilingual conceptual representation. The participants were 100 Spanish-English bilinguals who had regular exposure to both languages. At encoding, participants performed a shallow processing task and a deep-processing task on English and Spanish adjectives. At test, they performed an antonym-generation task in English, in which the target responses were either adjectives presented at encoding or control adjectives not previously presented. The measure of priming was the response time advantage for producing repeated adjectives relative to control adjectives. Significant repetition priming was observed both within and between languages under deep, but not shallow, encoding conditions. The results indicate that the conceptual representations of adjective translation equivalents are shared across languages.
Schoonbaert, Sofie; Duyck, Wouter; Brysbaert, Marc; Hartsuiker, Robert
The present study investigated cross-language priming effects with unique noncognate translation pairs. Unbalanced Dutch (first language [L1])-English (second language [L2]) bilinguals performed a lexical decision task in a masked priming paradigm. The results of two experiments showed significant translation priming from L1 to L2 (meisje-GIRL) and from L2 to L1 (girl-MEISJE), using two different stimulus onset asynchronies (SOAs) (250 and 100 msec). Although translation priming from L1 to...
Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane
There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics.
Courtin, C.; Herve, P. -Y.; Petit, L.; Zago, L.; Vigneau, M.; Beaucousin, V.; Jobard, G.; Mazoyer, B.; Mellet, E.; Tzourio-Mazoyer, N.
"Highly iconic" structures in Sign Language enable a narrator to act, switch characters, describe objects, or report actions in four dimensions. This group of linguistic structures has no real spoken-language equivalent. Topographical descriptions are also achieved in a sign-language-specific manner via the use of signing space and…
Starlander, Marianne; Bouillon, Pierrette; Rayner, Manny; Chatzichrisafis, Nikos; Hockey, Beth Ann; Isahara, Hitoshi; Kanzaki, Kyoko; Nakao, Yukie; Santaholma, Marianne
In this paper, we describe and evaluate an Open Source medical speech translation system (MedSLT) intended for safety-critical applications. The aim of the system is to eliminate language barriers in emergency situations. It translates spoken questions from English into French, Japanese and Finnish in three medical subdomains (headache, chest pain and abdominal pain), using a vocabulary of about 250-400 words per subdomain. The architecture is a compromise between fixed-phrase translation on the one hand and complex linguistically-based systems on the other. Recognition is guided by a Context-Free Grammar Language Model compiled from a general unification grammar, automatically specialised for the domain. We present an evaluation of this initial prototype that shows the advantages of this grammar-based approach for this particular translation task in terms of both reliability and use.
typescript. Can read either representations of familiar formulaic verbal exchanges or simple language containing only the highest frequency... comprehension to read simple, authentic written material in a form equivalent to usual printing or typescript on subjects within a familiar context
Full Text Available Dr. Paweł Rutkowski is head of the Section for Sign Linguistics at the University of Warsaw. He is a general linguist and a specialist in the field of syntax of natural languages, carrying out research on Polish Sign Language (polski język migowy, PJM). He has been awarded a number of prizes, grants and scholarships by such institutions as the Foundation for Polish Science, the Polish Ministry of Science and Higher Education, the National Science Centre, Poland, the Polish-U.S. Fulbright Commission, the Kosciuszko Foundation and DAAD. Dr. Rutkowski leads the team developing the Corpus of Polish Sign Language and the Corpus-based Dictionary of Polish Sign Language, the first dictionary of this language prepared in compliance with modern lexicographical standards. The dictionary is an open-access publication, available freely at the following address: http://www.slownikpjm.uw.edu.pl/en/. This interview took place at eLex 2017, a biennial conference on electronic lexicography, where Dr. Rutkowski was awarded the Adam Kilgarriff Prize and gave a keynote address entitled Sign language as a challenge to electronic lexicography: The Corpus-based Dictionary of Polish Sign Language and beyond. The interview was conducted by Dr. Victoria Nyst from Leiden University, Faculty of Humanities, and Dr. Iztok Kosem from the University of Ljubljana, Faculty of Arts.
Full Text Available Practices of other-initiated repair deal with problems of hearing or understanding what another person has said in the fast-moving turn-by-turn flow of conversation. As such, other-initiated repair plays a fundamental role in the maintenance of intersubjectivity in social interaction. This study finds and analyses a special type of other-initiated repair that is used in turn-by-turn conversation in a sign language: Argentine Sign Language (Lengua de Señas Argentina, LSA). We describe a type of response termed a ‘freeze-look’, which occurs when a person has just been asked a direct question: instead of answering the question in the next turn position, the person holds still while looking directly at the questioner. In these cases it is clear that the person is aware of having just been addressed and is not otherwise accounting for their delay in responding (e.g., by displaying a ‘thinking’ face or hesitation, etc.). We find that this behavior functions as a way for an addressee to initiate repair by the person who asked the question. The ‘freeze-look’ results in the questioner ‘re-doing’ their action of asking a question, for example by repeating or rephrasing it. Thus we argue that the ‘freeze-look’ is a practice for other-initiation of repair. In addition, we argue that it is an ‘off-record’ practice, thus contrasting with known on-record practices such as saying ‘Huh?’ or equivalents. The findings aim to contribute to research on human understanding in everyday turn-by-turn conversation by looking at an understudied sign language, with possible implications for our understanding of visual bodily communication in spoken languages as well.
Cormier, Kearsy; Schembri, Adam; Vinson, David; Orfanidou, Eleni
Age of acquisition (AoA) effects have been used to support the notion of a critical period for first language acquisition. In this study, we examine AoA effects in deaf British Sign Language (BSL) users via a grammaticality judgment task. When English reading performance and nonverbal IQ are factored out, results show that accuracy of grammaticality judgement decreases as AoA increases, until around age 8, thus showing the unique effect of AoA on grammatical judgement in early learners. No such effects were found in those who acquired BSL after age 8. These late learners appear to have first language proficiency in English instead, which may have been used to scaffold learning of BSL as a second language later in life. Copyright © 2012 Elsevier B.V. All rights reserved.
Zamora-Martinez, Francisco; Castro-Bleda, Maria Jose
Neural Network Language Models (NNLMs) are a successful approach to Natural Language Processing tasks, such as Machine Translation. In this work we introduce a Statistical Machine Translation (SMT) system which fully integrates NNLMs in the decoding stage, breaking with the traditional approach based on N-best list rescoring. The neural net models (both language models (LMs) and translation models) are fully coupled in the decoding stage, allowing them to more strongly influence the translation quality. Computational issues were solved by using a novel idea based on memorization and smoothing of the softmax constants to avoid their computation, which introduces a trade-off between LM quality and computational cost. These ideas were studied in a machine translation task with different combinations of neural networks used both as translation models and as target LMs, comparing phrase-based and n-gram-based systems, and showing that the integrated approach seems more promising for n-gram-based systems, even with non-full-quality NNLMs.
The aim of this study was to show how to verify plagiarism of a paper written in Macedonian and translated into a foreign language. The original article, "Ethics in Medical Research Involving Human Subjects", written in Macedonian, was submitted as essay 2 for the subject Ethics and published by Ilina Stefanovska, a PhD candidate from the Iustinianus Primus Faculty of Law, Ss Cyril and Methodius University of Skopje (UKIM), Skopje, Republic of Macedonia, in February 2013. The article suspected of plagiarism was published by Prof. Dr. Gordana Panova from the Faculty of Medical Sciences, University Goce Delchev, Shtip, Republic of Macedonia, in English, with an identical title and identical content, in the international scientific online journal "SCIENCE & TECHNOLOGIES", published by the "Union of Scientists - Stara Zagora". The original document (written in Macedonian) was translated with Google Translate; the suspected article (published as an English PDF file) was converted into a Word document, and the two documents were compared with several plagiarism-detection programs. It was found that the documents were 71%, 78% and 82% identical, respectively, depending on the program used for plagiarism detection. It was obvious that the original paper was entirely plagiarised by Prof. Dr. Gordana Panova, including six references from the original paper. Plagiarism of original papers written in Macedonian and translated into other languages can thus be verified after computerised translation: the original and translated documents can be compared with available software for plagiarism detection.
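The comparison step can be approximated with standard text-similarity tooling. The sketch below uses Python's difflib as a hypothetical stand-in for the plagiarism-detection programs the authors used; the 100% score simply reflects two identical inputs, mirroring the translate-then-compare workflow.

```python
# Compare two documents (e.g. a machine-translated original and a suspect
# article) for word-level overlap. difflib is an illustrative stand-in for
# dedicated plagiarism-detection software.
from difflib import SequenceMatcher

def overlap_percent(doc_a: str, doc_b: str) -> float:
    a = doc_a.lower().split()
    b = doc_b.lower().split()
    return 100.0 * SequenceMatcher(None, a, b).ratio()

original = "ethics in medical research involving human subjects requires informed consent"
suspect  = "ethics in medical research involving human subjects requires informed consent"
print(overlap_percent(original, suspect))   # identical texts give 100.0
```

Real detectors add normalization (stemming, stop-word removal, n-gram fingerprinting), which is why the three programs in the study reported somewhat different percentages for the same document pair.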
Emmorey, Karen; Thompson, Robin; Colvin, Rachael
An eye-tracking experiment investigated where deaf native signers (N = 9) and hearing beginning signers (N = 10) look while comprehending a short narrative and a spatial description in American Sign Language produced live by a fluent signer. Both groups fixated primarily on the signer's face (more than 80% of the time) but differed with respect to fixation location. Beginning signers fixated on or near the signer's mouth, perhaps to better perceive English mouthing, whereas native signers tended to fixate on or near the eyes. Beginning signers shifted gaze away from the signer's face more frequently than native signers, but the pattern of gaze shifts was similar for both groups. When a shift in gaze occurred, the sign narrator was almost always looking at his or her hands and was most often producing a classifier construction. We conclude that joint visual attention and attention to mouthing (for beginning signers), rather than linguistic complexity or processing load, affect gaze fixation patterns during sign language comprehension.
Yang, Ruiduo; Sarkar, Sudeep; Loeding, Barbara
We consider two crucial problems in continuous sign language recognition from unaided video sequences. At the sentence level, we consider the movement epenthesis (me) problem and at the feature level, we consider the problem of hand segmentation and grouping. We construct a framework that can handle both of these problems based on an enhanced, nested version of the dynamic programming approach. To address movement epenthesis, a dynamic programming (DP) process employs a virtual me option that does not need explicit models. We call this the enhanced level building (eLB) algorithm. This formulation also allows the incorporation of grammar models. Nested within this eLB is another DP that handles the problem of selecting among multiple hand candidates. We demonstrate our ideas on four American Sign Language data sets with simple background, with the signer wearing short sleeves, with complex background, and across signers. We compared the performance with Conditional Random Fields (CRF) and Latent Dynamic-CRF-based approaches. The experiments show more than 40 percent improvement over CRF or LDCRF approaches in terms of the frame labeling rate. We show the flexibility of our approach when handling a changing context. We also find a 70 percent improvement in sign recognition rate over the unenhanced DP matching algorithm that does not accommodate the me effect.
Lilian Hoffecker, PhD, MLS
Conclusions: Language translation can be a difficult and time-consuming task. However, translating a conference slide presentation with limited text is an achievable activity and engages an international audience with information that would otherwise often go unnoticed or be lost. Although English is by far the primary language of science and other disciplines, it is not necessarily the first or preferred language of global researchers. By offering appropriate language versions, the authors of presentations can expand the reach of their work.
Chemical compound names remain the primary method for conveying molecular structures between chemists and researchers. In research articles, patents, chemical catalogues, government legislation, and textbooks, the use of IUPAC and traditional compound names is universal, despite efforts to introduce more machine-friendly representations such as identifiers and line notations. Fortunately, advances in computing power now allow chemical names to be parsed and generated (read and written) with almost the same ease as conventional connection tables. A significant complication, however, is that although the vast majority of chemistry uses English nomenclature, a significant fraction is in other languages. This complicates the task of filing and analyzing chemical patents, purchasing from compound vendors, and text mining research articles or Web pages. We describe some issues with manipulating chemical names in various languages, including British, American, German, Japanese, Chinese, Spanish, Swedish, Polish, and Hungarian, and describe the current state-of-the-art in software tools to simplify the process. PMID:19239237
Olson, Andrea M; Swabey, Laurie
Despite federal laws that mandate equal access and communication in all healthcare settings for deaf people, consistent provision of quality interpreting in healthcare settings is still not a reality, as recognized by deaf people and American Sign Language (ASL)-English interpreters. The purpose of this study was to better understand the work of ASL interpreters employed in healthcare settings, which can then inform the training and credentialing of interpreters, with the ultimate aim of improving the quality of healthcare and communication access for deaf people. Based on a job analysis, researchers designed an online survey with 167 task statements representing 44 categories. American Sign Language interpreters (N = 339) rated the importance of, and frequency with which they performed, each of the 167 tasks. Categories with the highest average importance ratings included language and interpreting, situation assessment, ethical and professional decision making, managing the discourse, and monitoring, managing and/or coordinating appointments. Categories with the highest average frequency ratings included the following: dress appropriately, adapt to a variety of physical settings and locations, adapt to working with a variety of providers in a variety of roles, deal with uncertain and unpredictable work situations, and demonstrate cultural adaptability. To achieve health equity for the deaf community, the training and credentialing of interpreters needs to be systematically addressed.
Full Text Available Prior studies investigating cortical processing in Deaf signers suggest that life-long experience with sign language and/or auditory deprivation may alter the brain’s anatomical structure and the function of brain regions typically recruited for auditory processing (Emmorey et al., 2010; Pénicaud et al., 2013, inter alia). We report the first investigation of the task-negative network in Deaf signers and its functional connectivity, that is, the temporal correlations among spatially remote neurophysiological events. We show that Deaf signers manifest increased functional connectivity between the posterior cingulate/precuneus and the left medial temporal gyrus (MTG), but also between the inferior parietal lobe and the medial temporal gyrus in the right hemisphere, areas that have been found to show functional recruitment specifically during sign language processing. These findings suggest that the organization of the brain at the level of inter-network connectivity is likely affected by experience with processing visual language, although sensory deprivation could be another source of the difference. We hypothesize that connectivity alterations in the task-negative network reflect predictive/automatized processing of the visual signal.
Full Text Available This paper investigates the interplay of constructed action and the clause in Finnish Sign Language (FinSL). Constructed action is a form of gestural enactment in which signers use their hands, face and other parts of the body to represent the actions, thoughts or feelings of someone they are referring to in the discourse. With the help of frequencies calculated from corpus data, this article shows, firstly, that when FinSL signers are narrating a story, there are differences in how they use constructed action. The paper then argues that there are also differences in the prototypical structure, linkage type and non-manual activity of clauses, depending on the presence or absence of constructed action. Finally, taking the view that gesturality is an integral part of language, the paper discusses the nature of syntax in sign languages and proposes a conceptualization in which syntax is seen as a set of norms distributed on a continuum between a categorial-conventional end and a gradient-unconventional end.
“I wonder if you could tell me the meaning of an obscure Finnish word I’ve got here? I think it’s perhaps something to do with fishing tackle”, goes one of the lines in a fantastic information movie called “The League at Work”, produced by the League itself in 1935. The movie, introducing the viewers to all the aspects of the Secretariat, gives the Translation and Interpretation Service a rather prominent place as the first in line. And no wonder: for it was one of the peculiar procurements of the Secretariat, indeed, one of the features that made it international in all its doings.
Mendoza, Mary Elizabeth
In the course of their work, sign language interpreters are faced with ethical dilemmas that require prioritizing competing moral beliefs and views on professional practice. There are several decision-making models, however, little research has been done on how sign language interpreters learn to identify and make ethical decisions. Through surveys and interviews on ethical decision-making, this study investigates how expert and novice interpreters discuss their ethical decision-making proces...
Corina, David P; Knapp, Heather Patterson
In the quest to further understand the neural underpinning of human communication, researchers have turned to studies of naturally occurring signed languages used in Deaf communities. The comparison of the commonalities and differences between spoken and signed languages provides an opportunity to determine core neural systems responsible for linguistic communication independent of the modality in which a language is expressed. The present article examines such studies, and in addition asks what we can learn about human languages by contrasting formal visual-gestural linguistic systems (signed languages) with more general human action perception. To understand visual language perception, it is important to distinguish the demands of general human motion processing from the highly task-dependent demands associated with extracting linguistic meaning from arbitrary, conventionalized gestures. This endeavor is particularly important because theorists have suggested close homologies between perception and production of actions and functions of human language and social communication. We review recent behavioral, functional imaging, and neuropsychological studies that explore dissociations between the processing of human actions and signed languages. These data suggest incomplete overlap between the mirror-neuron systems proposed to mediate human action and language.
Full Text Available This research work aims at developing a Tamil-to-English cross-language text retrieval system using a hybrid machine translation approach, which combines rule-based and statistical methods. Existing word-by-word translation systems suffer from many issues, among them ambiguity, out-of-vocabulary words, word inflections, and improper sentence structure. To handle these issues, the proposed architecture contains an improved part-of-speech tagger, a machine-learning-based morphological analyser, a collocation-based word sense disambiguation procedure, a semantic dictionary, tense markers with gerund-ending rules, and a two-pass transliteration algorithm. The experimental results show that the proposed Tamil query-based translation system achieves significantly better translation quality than the existing system, reaching 95.88% of monolingual performance.
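The word-level pipeline described above (dictionary lookup, collocation-based disambiguation, transliteration fallback for out-of-vocabulary words) can be sketched roughly as follows. Every word, sense, and table entry here is invented for illustration and is not taken from the paper's actual resources:

```python
# Toy sketch of the lookup / disambiguation / transliteration flow.
# All dictionaries below are hypothetical illustrations.

SEMANTIC_DICT = {
    # Tamil word (romanized for readability) -> {sense: English word}
    "aaru": {"river": "river", "number": "six"},
}
COLLOCATIONS = {
    # (word, context word) -> preferred sense
    ("aaru", "water"): "river",
    ("aaru", "count"): "number",
}
TRANSLIT = {"chennai": "Chennai"}  # pass-through for OOV proper nouns

def translate_word(word, context):
    senses = SEMANTIC_DICT.get(word)
    if senses is None:
        # Out-of-vocabulary: fall back to transliteration
        return TRANSLIT.get(word, word)
    if len(senses) == 1:
        return next(iter(senses.values()))
    # Collocation-based word sense disambiguation
    for ctx in context:
        sense = COLLOCATIONS.get((word, ctx))
        if sense:
            return senses[sense]
    return next(iter(senses.values()))  # default to the first sense

print(translate_word("aaru", ["water"]))   # river
print(translate_word("aaru", ["count"]))   # six
print(translate_word("chennai", []))       # Chennai
```

A real system would, of course, also reorder and inflect the output according to the POS tags and morphological analysis the abstract mentions; this sketch only shows the lexical-choice step.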
Pa, Judy; Wilson, Stephen M; Pickell, Herbert; Bellugi, Ursula; Hickok, Gregory
Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory-motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.
Full Text Available In addition to affecting the Slovene education system, the Austrian denationalising policy in the second half of the 19th century had a direct impact on translation. Most of the already scarce Slovene philologists were appointed to posts outside the Slovene national territory. Conditions only began to improve in the 1860s, with the translation activity taken up by the first students of the newly established philology courses at the University of Vienna (Ladislav Hrovat, Matija Valjavec, etc.). More often than not, however, the translators were not philologists. The first longer classical texts published in Slovene were individual books of the Homeric epics, Xenophon’s Memorabilia, Plato’s dialogues Apology and Crito, Virgil's Georgics, and Sophocles' Ajax (the complete Bible, of course, had been translated much earlier, but it holds a special place in the history of translation). The translations published as books represent the first Slovene book-format editions of the ancient classics, but most appeared in magazines and newspapers. Many translations met with the same fate as a number of contemporary Slovene classical-language textbooks: they remained in manuscript because of insufficient funds (the publishers were unwilling to run the risk of such enterprises, for fear that their investment would not pay) and because of the national-awakening emphasis on Slovene, which was accompanied by a preference for translating from other modern languages, particularly Slavic ones. A noteworthy example of these unpublished translations is Caesar’s De Bello Gallico as prepared by the Franciscan Ladislav Hrovat. From the beginnings to the present, Slovene translations of the Greek and Latin classics have displayed a marked predominance of poetry, with prose works remaining in the minority.
Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.
In this reply to Salverda (2016), we address a critique of the claims made in our recent study of real-time processing of American Sign Language (ASL) signs using a novel visual world eye-tracking paradigm (Lieberman, Borovsky, Hatrak, & Mayberry, 2015). Salverda asserts that our data do not support our conclusion that native signers and…
Hansen, Eric G; Loew, Ruth C; Laitusis, Cara C; Kushalnagar, Poorna; Pagliaro, Claudia M; Kurz, Christopher
There is considerable interest in determining whether high-quality American Sign Language videos can be used as an accommodation in tests of mathematics at both K-12 and postsecondary levels; and in learning more about the usability (e.g., comprehensibility) of ASL videos with two different types of signers - avatar (animated figure) and human. The researchers describe the results of administering each of nine pre-college mathematics items in both avatar and human versions to each of 31 Deaf participants with high school and post-high school backgrounds. This study differed from earlier studies by obliging the participants to rely on the ASL videos to answer the items. While participants preferred the human version over the avatar version (apparently due largely to the better expressiveness and fluency of the human), there was no discernible relationship between mathematics performance and signed version.
Full Text Available In this paper the use and quality of the evaluative language produced by a bilingual child in a story-telling situation are analysed. The subject, an 11-year-old Finnish boy, Jimmy, is bilingual in Finnish Sign Language (FinSL) and spoken Finnish. He was born deaf but received a cochlear implant at the age of five. The data consist of a spoken and a signed version of “The Frog Story”. The analysis shows that the evaluative devices and expressions differ between the spoken and signed stories told by the child. In his Finnish story he uses mostly lexical devices: comments on a character and the character’s actions, as well as quoted speech occasionally combined with prosodic features. In his FinSL story he uses both lexical and paralinguistic devices in a balanced way.
Vilic, Adnan; Petersen, John Asger; Hoppe, Karsten
This paper presents a data-driven approach to graphically presenting text-based patient journals while maintaining all textual information. The system first creates a timeline representation of a patient's physiological condition during an admission, which is assessed by electronically monitoring vital signs and then combining these into Early Warning Scores (EWS). Hereafter, techniques from Natural Language Processing (NLP) are applied to the existing patient journal to extract all entries. Finally, the two methods are combined into an interactive timeline featuring the ability to see drastic changes in the patient's health, thereby enabling staff to see where in the journal critical events have taken place.
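An Early Warning Score aggregates sub-scores assigned to individual vital signs. The sketch below illustrates only that aggregation idea; the band thresholds here are simplified placeholders, not the clinically validated scoring tables the system presumably uses:

```python
# Illustrative EWS aggregation. The bands below are invented for
# demonstration and must not be used clinically.

def score_vital(value, bands):
    """Return the sub-score for a vital sign given (low, high, points) bands."""
    for low, high, points in bands:
        if low <= value <= high:
            return points
    return 3  # outside all listed bands: most abnormal sub-score

PULSE_BANDS = [(51, 90, 0), (41, 50, 1), (91, 110, 1), (111, 130, 2)]
RESP_BANDS = [(12, 20, 0), (9, 11, 1), (21, 24, 2)]

def early_warning_score(pulse, resp_rate):
    return score_vital(pulse, PULSE_BANDS) + score_vital(resp_rate, RESP_BANDS)

print(early_warning_score(75, 16))   # 0 -> stable
print(early_warning_score(120, 25))  # 5 -> would be flagged on the timeline
```

Plotting such scores over an admission yields exactly the kind of timeline the paper describes, onto which NLP-extracted journal entries can then be anchored.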
Ryumin, D.; Karpov, A. A.
In this article, we propose a new method for the parametric representation of a human's lip region. The functional diagram of the method is described, and implementation details with an explanation of its key stages and features are given. The results of automatic detection of the regions of interest are illustrated, and the processing speed of the method on several computers with different performance levels is reported. This universal method allows applying a parametrical representation of the speaker's lips to the tasks of biometrics, computer vision, machine learning, and automatic recognition of faces, elements of sign languages, and audio-visual speech, including lip-reading.
Aleksandra KAROVSKA RISTOVSKA
Aleksandra Karovska Ristovska, M.A. in special education and rehabilitation sciences, defended her doctoral thesis on 9 March 2014 at the Institute of Special Education and Rehabilitation, Faculty of Philosophy, University “Ss. Cyril and Methodius” in Skopje, before a commission composed of: Prof. Zora Jachova, PhD; Prof. Jasmina Kovachevikj, PhD; Prof. Ljudmil Spasov, PhD; Prof. Goran Ajdinski, PhD; and Prof. Daniela Dimitrova Radojicikj, PhD. The Macedonian Sign Language is a natural ...
Full Text Available In the history of foreign language teaching, translation has alternately been praised and condemned. Unfortunately, the praise and condemnation were based on a rather simplistic, biased, and extreme view of the role of translation. In this view no clear, explicit distinction was made between translation as a means and as an end, although in practice people already showed a tendency to be more concerned with one aspect than the other. Moreover, in their treatment of translation people tended to take an “either … or …” position: either take it or leave it. This paper proposes a more explicit, balanced, and moderate attitude towards translation and its two aspects. It is suggested that a clear distinction be made between translation as a means and as an end, and that each be treated accordingly in a better programmed way. The treatment should consider the level of instruction. At the beginning level translation should be treated more as a means than as an end. Gradually, as the level of instruction progresses, the role of translation as a means is reduced while its role as an end is increased, so that at the more advanced levels translation will be treated more as an end than as a means. Accordingly, translation should not be totally abandoned or too liberally used; its use and disuse should be based on a careful and well-prepared program. In line with the idea that translation be treated as an end at the more advanced level, and considering its importance for a developing nation, it is also proposed here that translating be adopted as a “fifth skill” to be pursued.
Rereading Concepts Related to Astronomy in Brazilian Sign Language Dictionaries: Implications for Translation/Interpretation [Releitura de conceitos relacionados à astronomia presentes nos dicionários de Libras: implicações para interpretação/tradução]
Alves F.S.; Peixoto D.E.; Lippe E.M.O.
Ten years after the enactment of Law 10.436/2002, it is clear that the presence of deaf students in regular classrooms has become a reality. After the regulation of Federal Decree no. 5.626/2005, advances can be noted regarding the access and inclusion of deaf students in educational environments, consolidating historical efforts by the deaf community. As a result, there is a demand for new strategies for teaching and for learning assessment. However, we have noticed that many sign...
Li, Qiang; Xia, Shuang; Zhao, Fei; Qi, Ji
The purpose of this study was to assess functional changes in the cerebral cortex in people with different sign language experience and hearing status while they observed and imitated Chinese Sign Language (CSL), using functional magnetic resonance imaging (fMRI). Fifty participants took part in the study and were divided into four groups according to their hearing status and experience of using sign language: a prelingual deafness signer group (PDS), a normal-hearing non-signer group (HnS), a native signer group with normal hearing (HNS), and an acquired signer group with normal hearing (HLS). fMRI images were scanned from all subjects while they performed block-designed tasks that involved observing and imitating sign language stimuli. Nine activation areas were found in response to undertaking either the observation or the imitation CSL task, and three activated areas were found only when undertaking the imitation task. Of those, the PDS group had significantly greater activation, in terms of the cluster size of the activated voxels, in the bilateral superior parietal lobule, cuneate lobe and lingual gyrus in response to either the observation or the imitation CSL task than the HnS, HNS and HLS groups. The PDS group also showed significantly greater activation in the bilateral inferior frontal gyrus, which was also found in the HNS and HLS groups but not in the HnS group. This indicates that deaf signers have better sign language proficiency because they engage more actively with the phonetic and semantic elements. In addition, activation of the bilateral superior temporal gyrus and inferior parietal lobule was found only in the PDS and HNS groups, which indicates that the area for sign language processing appears to be sensitive to the age of language acquisition. After reading this article, readers will be able to discuss the relationship between sign language and its neural mechanisms. Copyright © 2014 Elsevier Inc.
Full Text Available This paper focuses on the topic of censorship associated with the use of strong language and swear words in the translation of contemporary American TV series. In AVT, more specifically in Italian dubbing, the practice of censorship, in the form of suppression or toning down of what might be perceived as offensive, disturbing, too explicit or inconvenient, remains a problematic issue. By focusing on two recent successful TV series, Girls and Orange Is the New Black, which are characterized by the use of strong language (swear words, politically incorrect references) and the presence of taboo subjects (homosexuality, sex, drugs, violence), this study considers the different translation choices applied in dubbing and fansubbing. Previous academic studies have underlined the fact that professional translators tend to remove, more or less consciously, the disturbing elements from the source text, while fansubbers try to adhere as much as possible to the original text, not only in terms of linguistic content but also in terms of register and style. The results of this analysis seem, on the one hand, to confirm that there is still no systematic set of rules governing the translation of strong language in dubbing and, on the other, to indicate that the gap between professional and amateur translation is perhaps becoming less pronounced.
Valeeva, Roza A.; Martynova, Irina N.
The importance of the problem under discussion is preconditioned by the scientific inquiry into the best variants of translating foreign language inclusions, which would suit the original narration in the source text stylistically, emotionally and conceptually, and also fully project the author's communicative intention in every particular case. The…
Valdecy Oliveira Pontes
Full Text Available In the context of the approach to the linguistic variation of Spanish and the use of functionalist translation in foreign language classes, this article reports the results of the application of a didactic sequence (SD), in the style of the Geneva School, of Hispanic plays for the teaching of linguistic variation in the pronominal treatment forms of the Spanish-Brazilian Portuguese language pair. The SD was applied in the course "Introduction to Translation Studies in Spanish Language" (2nd semester), offered by the degree in Letters - Spanish Language and its Literatures of the Federal University of Ceará. This article is based on the theoretical foundations of functionalist translation (NORD, 1994, 1996, 2009, 2012), translation and sociolinguistics (BOLAÑOS-CUELLAR, 2000; MAYORAL, 1998), the elaboration of SDs (DOLZ; NOVERRAZ; SCHNEUWLY, 2004; CRISTÓVÃO, 2010; BARROS, 2012), and research on variation in the forms of treatment of Spanish and Portuguese (FONTANELLA DE WEINBER, 1999; SCHERRE et al., 2015).
At the department of foreign language teaching, a variety of courses are offered in order for students to acquire translation competence. The courses are often carried out by translating a text from one language into the other. Learning by experience is an effective approach. However, it is inevitable that there are some aspects that we need to…
White, Kelsey D.; Heidrich, Emily
Most educators are aware that some students use web-based machine translators for foreign language assignments; however, little research has been done to determine how and why students use these programs, or what the implications are for language learning and teaching. In this mixed-methods study we used surveys, a translation task,…
Full Text Available The present study was an attempt to investigate the effect of reading Persian literary texts on the quality of literary translations. To this end, 52 students majoring in English translation were randomly assigned to two groups. A Comprehensive English Language Test (CELT) was administered to check the homogeneity of the participants. The treatment for the experimental group consisted of reading 60 Persian short stories and poems; in the meantime, the control group went through their ordinary course curriculum. Both groups were asked to translate extracts of two short stories, and the translations were then rated. Statistical analysis revealed that reading Persian literary works indeed improves the quality of literary translations. Therefore, to promote more fruitful instruction in literary translation, it is suggested that translation teachers consider reading Persian literary works as part of the curriculum and ask students to read Persian texts to the extent possible, so that more qualified translations will be rendered in the area of literature.
Asmuri, Siti Noraini; Brown, Ted; Broom, Lisa J
Valid translations of time use scales are needed by occupational therapists for use in different cross-cultural contexts to gather relevant data to inform practice and research. The purpose of this study was to describe the process of translating, adapting, and validating the Time Use Diary from its current English language edition into a Malay language version. Five steps of the cross-cultural adaptation process were completed: (i) translation from English into the Malay language by a qualified translator, (ii) synthesis of the translated Malay version, (iii) backtranslation from Malay to English by three bilingual speakers, (iv) expert committee review and discussion, and (v) pilot testing of the Malay language version with two participant groups. The translated version was found to be a reliable and valid tool identifying changes and potential challenges in the time use of older adults. This provides Malaysian occupational therapists with a useful tool for gathering time use data in practice settings and for research purposes.
Full Text Available Until now, several branches of research have fundamentally contributed to a better understanding of the ramifications of bilingualism, multilingualism, and language expertise for psycholinguistic, cognitive, and neural processes. In this context, it is noteworthy that from a cognitive perspective there is a strong convergence of data pointing to an influence of multilingual speech competence on a variety of cognitive functions, including attention, short-term and working memory, set shifting, switching, and inhibition. In addition, complementary neuroimaging findings have highlighted a specific set of cortical and subcortical brain regions which fundamentally contribute to administering cognitive control in the multilingual brain, namely Broca’s area, the middle-anterior cingulate cortex, the inferior parietal lobe, and the basal ganglia. However, a disadvantage of focusing on group analyses is that this procedure only enables an approximation of the neural networks shared within a population while smoothing inter-individual differences. In order to address both commonalities (i.e., within-group analyses) and inter-individual variability (i.e., single-subject analyses) in language control mechanisms, here I measured five professional simultaneous interpreters while they overtly translated or repeated sentences with a simple subject-verb-object structure. Results demonstrated that pars triangularis was commonly activated across participants during backward translation (i.e., from L2 to L1), whereas the other brain regions of the control network showed strong inter-individual variability during both backward and forward (i.e., from L1 to L2) translation. Thus, I propose that pars triangularis plays a crucial role within the language-control network and behaves as a fundamental processing entity supporting simultaneous language translation.
Kim, Kyung-Won; Lee, Mi-So; Soon, Bo-Ram; Ryu, Mun-Ho; Kim, Je-Nam
Communication between people with normal hearing and people with hearing impairment is difficult. Recently, a variety of studies on sign language recognition have drawn benefits from developments in information technology. This study presents a sign language recognition system using a data glove composed of 3-axis accelerometers, magnetometers, and gyroscopes. The data obtained by the glove are transmitted to a host application (implemented as a Windows program on a PC), where they are converted into angle data; the angle information is displayed on the host application and verified by rendering three-dimensional models to the display. An experiment was performed with five subjects, three females and two males, and a performance set comprising the numbers one to nine was repeated five times. The system achieves a 99.26% movement detection rate and approximately a 98% recognition rate for each finger's state. The proposed system is expected to become even more portable and useful when this algorithm is applied to smartphone applications for use in situations such as emergencies.
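As a rough illustration of the "raw data to angle" conversion step described above, the snippet below derives static tilt angles from a single 3-axis accelerometer reading. The actual system fuses accelerometer, magnetometer, and gyroscope data, which is omitted here:

```python
# Hedged sketch: static tilt angles from one accelerometer sample
# (units of g). Sensor fusion with gyro/magnetometer data is omitted.
import math

def tilt_angles(ax, ay, az):
    """Return (pitch, roll) in degrees for a static accelerometer reading."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Gravity entirely on the z-axis: the sensor is lying flat, no tilt,
# so pitch and roll are both approximately 0.
print(tilt_angles(0.0, 0.0, 1.0))
```

In a glove system, per-segment angles like these (refined by gyroscope integration) would feed the finger-state classifier.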
Full Text Available We present results from a qualitative evaluation of the Spanish-language version of a dietary intake questionnaire and characterize the types of findings that emerged from several rounds of cognitive testing. Cognitive interviews were used to test the Spanish translation of the National Health Interview Survey (NHIS) Cancer Control Supplement dietary questions, with 36 Spanish-speaking and 9 English-speaking participants. Analyses of the results identified (a) translation issues, (b) culture-specific issues, and (c) general design issues that affected both English and Spanish speakers. Results indicated that general design-oriented difficulties were particularly frequent. Our findings suggest that, when appropriately structured, cognitive interviews that feature flexible probing can be useful for identifying a range of problems in survey translations, even after translations have been developed using currently accepted methods. We make several recommendations concerning practices that may be optimal in the conduct of empirical cross-cultural questionnaire evaluations.
Mahalingam, Shenbagavalli; Boominathan, Prakash; Subramaniyan, Balasubramaniyan
This study sought to translate and validate the Voice Disorder Outcome Profile (V-DOP) for Tamil-speaking populations. It was implemented in two phases: the English-language V-DOP developed for an Indian population was first translated into Tamil, a south Indian Dravidian language. Five Tamil language experts verified the translated version for exactness of meaning and usage, and their comments and suggestions were used to select the questions for the final V-DOP, thus establishing content validity. The translated V-DOP was then administered to 95 subjects (75 in the clinical group and 20 in the non-clinical group) for reliability (item-total correlation) and validity (construct) measures. The overall Cronbach coefficient α for the V-DOP was 0.89, whereas the mean total V-DOP score was zero for the non-clinical group and 104.28 for the clinical group (standard deviation = 64.71). The emotional and functional domains indicated a statistically significant correlation with the total scores (r = 0.91 and r = 0.90, respectively), followed by the physical domain (r = 0.82). A significant but moderate correlation was obtained across V-DOP domains (r = 0.50 to 0.60). The Tamil V-DOP is a valid and reliable tool for evaluating the impact of voice disorders in the Tamil-speaking population. Copyright © 2014 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
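For readers unfamiliar with the reliability measure reported above, Cronbach's coefficient α can be computed as in the sketch below; the response matrix is toy data, not V-DOP responses:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(totals)).
# The response matrix below is invented toy data for illustration.

def cronbach_alpha(items):
    """items: one inner list of respondent scores per questionnaire item."""
    k = len(items)                  # number of items
    n = len(items[0])               # number of respondents
    def var(xs):                    # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three items answered by four respondents (toy data).
items = [[2, 4, 3, 5], [3, 5, 4, 5], [2, 5, 3, 4]]
print(round(cronbach_alpha(items), 2))  # 0.95
```

Values above roughly 0.8, such as the 0.89 reported for the V-DOP, are conventionally read as good internal consistency.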
Gutierrez-Sigut, Eva; Daws, Richard; Payne, Heather; Blott, Jonathan; Marshall, Chloë; MacSweeney, Mairéad
Neuroimaging studies suggest greater involvement of the left parietal lobe in sign language compared to speech production. This stronger activation might be linked to the specific demands of sign encoding and proprioceptive monitoring. In Experiment 1 we investigate hemispheric lateralization during sign and speech generation in hearing native users of English and British Sign Language (BSL). Participants exhibited stronger lateralization during BSL than English production. In Experiment 2 we investigated whether this increased lateralization index could be due exclusively to the higher motoric demands of sign production. Sign naïve participants performed a phonological fluency task in English and a non-sign repetition task. Participants were left lateralized in the phonological fluency task but there was no consistent pattern of lateralization for the non-sign repetition in these hearing non-signers. The current data demonstrate stronger left hemisphere lateralization for producing signs than speech, which was not primarily driven by motoric articulatory demands. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
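A lateralization index of the kind reported in such studies is commonly defined as LI = (L - R) / (L + R), where L and R are activation measures (for example, suprathreshold voxel counts) in homologous left and right regions; this is the standard convention in the fMRI literature rather than a detail confirmed by this abstract:

```python
# Standard lateralization index; values near +1 indicate strong
# left-hemisphere lateralization, values near -1 strong right.

def lateralization_index(left, right):
    if left + right == 0:
        raise ValueError("no activation in either hemisphere")
    return (left - right) / (left + right)

print(lateralization_index(300, 100))  # 0.5 -> left lateralized
```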
Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John
Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their…
8 CFR § 1003.33 (2010), “Translation of documents” (Executive Office for Immigration Review, Immigration Court Rules of Procedure): any foreign-language document must be accompanied by an English language translation and a certification signed by the translator that must be printed...
Bruna Di Sabato
Full Text Available In the course of the last century, translation employed as a tool for foreign language acquisition has suffered alternate fates. From being the approach, par excellence, employed in rote learning in the days of lexicogrammatical-translation methodology, it soon slipped into disuse (and disgrace) with the advent of progressive communicative educational theories. Though never wholly absent from actual classroom practice and always present in the work of some bold scholars, it has recently been rehabilitated on the wave of studies regarding the use of the learners’ own language within the classroom, against the theoretical backdrop of research in the fields of cross-lingual teaching, translanguaging and intercomprehension: all activities which recognize the fundamental role of the interlinguistic and intercultural component in language learning. This paper focuses on the Italian scenario; it traces the role translation has played and currently plays in Italian foreign language university curricula and outlines the many benefits which can derive from its inclusive use as a learning technique in the light of contemporary didactic methodologies.
Full Text Available This contribution reviews the idea of discovery learning with corpora, proposed in the 1990s, evaluating its potential and its implications with reference to the education of translators today. The rationale behind this approach to data-driven learning, combining project-based and form-focused instruction within a socio-constructivistically inspired environment, is discussed. Examples are also provided of authentic, open-ended learning experiences, thanks to which students of translation share responsibility over the development of corpora and their consultation, and teachers can abandon the challenging role of omniscient knowledge providers and wear the more honest hat of "learning experts". Adding to the more straightforward uses of corpora in courses that aim to develop thematic, technological and information mining competences – i.e., in which training is offered in the use of corpora as professional aids –, attention is focused on foreign language teaching for translators and on corpora as learning aids, highlighting their potential for the development of the three other European Master's in Translation (EMT competences (translation service provision, language and intercultural ones.
Cushman, R.M.; Burtis, M.D.
This report contains English-translated abstracts of important Chinese-language literature concerning global climate change for the years 1995-1998. This body of literature includes the topics of adaptation, ancient climate change, climate variation, the East Asia monsoon, historical climate change, impacts, modeling, and radiation and trace-gas emissions. In addition to the citations and abstracts translated into English, this report presents the original citations and abstracts in Chinese. Author and title indexes are included to assist the reader in locating abstracts of particular interest.
Research on shared reading has shown positive results on children's literacy development in general and for deaf children specifically; however, reading techniques might differ between these two populations. Families with deaf children, especially those with deaf parents, often capitalize on their children's visual attributes rather than primarily auditory cues. These techniques are believed to provide a foundation for their deaf children's literacy skills. This study examined 10 deaf mother/deaf child dyads with children between 3 and 5 years of age. Dyads were videotaped in their homes on at least two occasions reading books that were provided by the researcher. Descriptive analysis showed specifically how deaf mothers mediate between the two languages, American Sign Language (ASL) and English, while reading. These techniques can be replicated and taught to all parents of deaf children so that they can engage in more effective shared reading activities. Research has shown that shared reading, or the interaction of a parent and child with a book, is an effective way to promote language and literacy, vocabulary, grammatical knowledge, and metalinguistic awareness (Snow, 1983), making it critical for educators to promote shared reading activities at home between parent and child. Not all parents read to their children in the same way. For example, parents of deaf children may present the information in the book differently due to the fact that signed languages are visual rather than spoken. In this vein, we can learn more about what specific connections deaf parents make to the English print. Exploring strategies deaf mothers may use to link the English print through the use of ASL will provide educators with additional tools when working with all parents of deaf children. This article will include a review of the literature on the benefits of shared reading activities for all children, the relationship between ASL and English skill development, and the techniques
Hirshorn, Elizabeth A.; Fernandez, Nina M.; Bavelier, Daphne
Models of working memory (WM) have been instrumental in understanding foundational cognitive processes and sources of individual differences. However, current models cannot conclusively explain the consistent group differences between deaf signers and hearing speakers on a number of short-term memory (STM) tasks. Here we take the perspective that these results are not due to a temporal order-processing deficit in deaf individuals, but rather reflect different biases in how different types of memory cues are used to do a given task. We further argue that the main driving force behind the shifts in relative biasing is a consequence of language modality (sign vs. speech) and the processing they afford, and not deafness, per se. PMID:22871205
McKee, Michael; Thew, Denise; Starr, Matthew; Kushalnagar, Poorna; Reid, John T.; Graybill, Patrick; Velasquez, Julia; Pearson, Thomas
Background Numerous publications demonstrate the importance of community-based participatory research (CBPR) in community health research, but few target the Deaf community. The Deaf community is understudied and underrepresented in health research despite suspected health disparities and communication barriers. Objectives The goal of this paper is to share the lessons learned from the implementation of CBPR in an understudied community of Deaf American Sign Language (ASL) users in the greater Rochester, New York, area. Methods We review the process of CBPR in a Deaf ASL community and identify the lessons learned. Results Key CBPR lessons include the importance of engaging and educating the community about research, ensuring that research benefits the community, using peer-based recruitment strategies, and sustaining community partnerships. These lessons informed subsequent research activities. Conclusions This report focuses on the use of CBPR principles in a Deaf ASL population; lessons learned can be applied to research with other challenging-to-reach populations. PMID:22982845
Full Text Available This article ethnographically explores how American Sign Language-English interpreting students negotiate and foreground different kinds of relationships to claim legitimacy in relation to deaf people and the deaf community. As the field of interpreting shifts from community interpreting to professionalization, interpreting students endeavor to legitimize their involvement in the field. Students create distinction between themselves and other students through relational work that involves positive and negative interpretation of kinship terms. In analyzing interpreting students' gate-keeping practices, this article explores the categories and definitions used by interpreting students and argues that category trouble occurs: identity and kinship categories are not nuanced or critically interrogated, resulting in deaf people and interpreters being represented in static ways.
Yenny Rodríguez Hernández
Full Text Available This paper reports the results of an exploratory study whose purpose was to identify and characterize the metaphors in a sample of five videos in Colombian Sign Language (LSC, from its acronym in Spanish). The data were analyzed using theoretical contributions from Lakoff and Johnson (1980) on cognitive metaphors and image schemata, and from Wilcox (2000) and Taub (2001) on double mapping in sign language. The results show a frequency analysis of the image schemata and metaphors present in metaphorical expressions in five autobiographical narratives by five congenitally deaf adults. The study concludes that sign language has cognitive metaphors that allow deaf people to map from a concrete domain to an abstract one in order to build concepts.
Goel, Amit; Arivazhagan, Karunanithi; Sasi, Avani; Shanmugam, Vanathy; Koshi, Seleena; Pottakkat, Biju; Lakshmi, C P; Awasthi, Ashish
The chronic liver disease questionnaire (CLDQ), a self-administered quality-of-life (QOL) instrument for patients with chronic liver disease (CLD), was originally developed in English. We aimed to translate and validate the CLDQ in the Tamil language (CLDQ-T). CLDQ-T, prepared through two forward and two backward independent translations by four bilingual (Tamil and English) persons and repeated iterative modifications, was validated in adult, native-Tamil patients with CLD. CLDQ-T was re-tested in some patients 2 weeks later. Convergent validity was assessed using Spearman's correlation, and discriminant validity by comparison with the World Health Organization's brief QOL tool (WHOQOL-BREF). Reliability was assessed through internal consistency (Cronbach's alpha) and test-retest reliability (intra-class correlation). The cutoff used for statistical significance was p < 0.05, and Cronbach's alpha exceeded 0.700 for individual domains. CLDQ-T was easily understood and showed good performance characteristics in assessing QOL in Tamil-speaking patients with CLD.
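The internal-consistency statistic named in this abstract can be illustrated concretely. Below is a minimal sketch of Cronbach's alpha for one questionnaire domain; the item scores are hypothetical and not taken from the CLDQ-T study.

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for rows of equal-length item scores (one row per respondent)."""
    k = len(items[0])                          # number of items in the domain
    cols = list(zip(*items))                   # per-item score columns
    item_var = sum(variance(c) for c in cols)  # sum of per-item sample variances
    total_var = variance([sum(row) for row in items])  # variance of summed scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical QOL domain: 4 items scored 1-7 by 6 respondents
scores = [
    [5, 6, 5, 6],
    [3, 3, 4, 3],
    [6, 7, 6, 7],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
    [7, 6, 7, 7],
]
alpha = cronbach_alpha(scores)
print(round(alpha, 3))  # well above the 0.700 cutoff -> acceptable consistency
```

With items this strongly correlated, alpha lands far above the 0.700 threshold the study used per domain.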
This article presents a modular activity on the neurobiology of sign language that engages undergraduate students in reading and analyzing the primary functional magnetic resonance imaging (fMRI) literature. Drawing on a seed empirical article and subsequently published critique and rebuttal, students are introduced to a scientific debate concerning the functional significance of right-hemisphere recruitment observed in some fMRI studies of sign language processing. The activity requires minimal background knowledge and is not designed to provide students with a specific conclusion regarding the debate. Instead, the activity and set of articles allow students to consider key issues in experimental design and analysis of the primary literature, including critical thinking regarding the cognitive subtractions used in blocked-design fMRI studies, as well as possible confounds in comparing results across different experimental tasks. By presenting articles representing different perspectives, each cogently argued by leading scientists, the readings and activity also model the type of debate and dialogue critical to science, but often invisible to undergraduate science students. Student self-report data indicate that undergraduates find the readings interesting and that the activity enhances their ability to read and interpret primary fMRI articles, including evaluating research design and considering alternate explanations of study results. As a stand-alone activity completed primarily in one 60-minute class block, the activity can be easily incorporated into existing courses, providing students with an introduction both to the analysis of empirical fMRI articles and to the role of debate and critique in the field of neuroscience.
Fengler, Ineke; Delfau, Pia-Céline; Röder, Brigitte
It is yet unclear whether congenitally deaf cochlear implant (CD CI) users' visual and multisensory emotion perception is influenced by their history in sign language acquisition. We hypothesized that early-signing CD CI users, relative to late-signing CD CI users and hearing, non-signing controls, show better facial expression recognition and…
Caridakis, G; Karpouzis, K; Drosopoulos, A; Kollias, S
Modeling and recognizing spatiotemporal, as opposed to static, input is a challenging task since it incorporates input dynamics as part of the problem. The vast majority of existing methods tackle the problem as an extension of the static counterpart, using dynamics, such as input derivatives, at feature level and adopting artificial intelligence and machine learning techniques originally designed for solving problems that do not specifically address the temporal aspect. The proposed approach deals with the temporal and spatial aspects of the spatiotemporal domain in a discriminative as well as coupled manner: Self-Organizing Maps (SOMs) model the spatial aspect of the problem, while Markov models capture its temporal counterpart. Incorporation of adjacency, both in training and classification, enhances the overall architecture with robustness and adaptability. The proposed scheme is validated both theoretically, through an error propagation study, and experimentally, on the recognition of individual signs performed by different native Greek Sign Language users. Results illustrate the architecture's superiority when compared to Hidden Markov Model techniques and variations, both in terms of classification performance and computational cost. Copyright © 2012 Elsevier Ltd. All rights reserved.
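The coupling described above, a spatial quantizer feeding a temporal sequence model, can be sketched in miniature. This is an illustrative toy, not the authors' architecture: a fixed 1-D codebook stands in for a trained SOM, and per-sign first-order Markov chains score the resulting node sequences.

```python
import math

# Codebook standing in for a trained SOM: each node is a prototype hand position.
# (Illustrative 1-D "features"; a real system would use 2-D/3-D trajectories.)
CODEBOOK = [0.0, 1.0, 2.0, 3.0]

def quantize(frames):
    """Map each frame to the index of its nearest SOM node (spatial model)."""
    return [min(range(len(CODEBOOK)), key=lambda i: abs(f - CODEBOOK[i]))
            for f in frames]

def train_markov(sequences, n_states, smooth=1.0):
    """Estimate a smoothed transition matrix from node-index sequences (temporal model)."""
    counts = [[smooth] * n_states for _ in range(n_states)]
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return [[c / sum(row) for c in row] for row in counts]

def log_likelihood(seq, trans):
    """Log-probability of a node sequence under one sign's transition matrix."""
    return sum(math.log(trans[a][b]) for a, b in zip(seq, seq[1:]))

# Two hypothetical signs: one sweeps "up" through space, one sweeps "down".
sign_up = [quantize([0.1, 0.9, 2.1, 2.9])] * 3
sign_down = [quantize([3.0, 2.1, 1.0, 0.2])] * 3
models = {
    "up": train_markov(sign_up, len(CODEBOOK)),
    "down": train_markov(sign_down, len(CODEBOOK)),
}

test_seq = quantize([0.0, 1.1, 1.9, 3.1])  # unseen "up" performance
best = max(models, key=lambda m: log_likelihood(test_seq, models[m]))
print(best)  # classified as "up"
```

Classification picks the sign whose chain assigns the sequence the highest log-likelihood; the smoothing term keeps unseen transitions from zeroing out a score.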
Collins, W. R.; Knight, J. C.; Noonan, R. E.
To support the implementation of high-level languages wherever possible, a translator writing system of advanced design was developed. It is intended for routine production use by many programmers working on different projects. As well as a fairly conventional parser generator, it includes a system for the rapid generation of table-driven code generators. The parser generator was developed from a prototype version. The translator writing system includes various tools for the management of the source text of a compiler under construction. In addition, it supplies various default source code sections so that its output is always compilable and executable. The system thereby encourages iterative enhancement as a development methodology by ensuring an executable program from the earliest stages of a compiler development project. The translator writing system includes a PASCAL/48 compiler, three assemblers, and two compilers for a subset of HAL/S.
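The appeal of a table-driven code generator is that retargeting the compiler means editing a table rather than the generator's logic. A minimal sketch of the idea follows; the IR operations and "assembly" templates are hypothetical, since the report's actual tables are not shown.

```python
# Toy table-driven code generator: each IR operation maps to a target-machine
# template string, so swapping the table retargets the backend.
CODEGEN_TABLE = {
    "load":  "LD  {dst}, {src}",
    "add":   "ADD {dst}, {a}, {b}",
    "store": "ST  {src}, {dst}",
}

def generate(ir):
    """Emit one target instruction per IR tuple: (op, {field: value, ...})."""
    return [CODEGEN_TABLE[op].format(**fields) for op, fields in ir]

# x = a + b, sketched as four IR operations
ir = [
    ("load",  {"dst": "r1", "src": "a"}),
    ("load",  {"dst": "r2", "src": "b"}),
    ("add",   {"dst": "r3", "a": "r1", "b": "r2"}),
    ("store", {"src": "r3", "dst": "x"}),
]
for line in generate(ir):
    print(line)
```

A second target machine would reuse `generate` unchanged with a different `CODEGEN_TABLE`, which is the property that makes such systems quick to retarget.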
Full Text Available This article analyses five recent Japanese short stories written by women, with female first person narrators, and the English translations of these stories. I examine how the writers interact with the culturally loaded concept of gendered language to develop characters and themes. The strategies used by translators to render gendered styles into English are also discussed: case-by-case creative solutions appear most effective. ‘Feminine’ and other gendered styles are used to index social identity, to highlight the difference between the social and inner self, and different styles are mixed together for impact. Gendered styles, therefore, are of central importance and translators wishing to adhere closely to the source text should pay close attention to them. All the narrators of the stories demonstrate an understanding of ‘social sanction and taboo’. Two accustom themselves to a socially acceptable future, another displays an uneasy attitude to language and convention, while others fall into stereotypes imposed on them or chastise themselves for inappropriate behaviour. The stories illustrate the way in which gendered language styles in Japanese can be manipulated, as both the writers and the characters they create deliberately use different styles for effect.
Bittner, Anja; Jonietz, Ansgar; Bittner, Johannes; Beickert, Luise; Harendza, Sigrid
The aim was to train and assess undergraduate medical students' written communication skills through exercises in translating medical reports into plain language for real patients. 27 medical students participated in a newly developed communication course. They attended a 3-h seminar including a briefing on patient-centered communication and an introduction to working with the internet platform http://washabich.de. In the following ten weeks, participants "translated" one medical report every fortnight on this platform, receiving feedback from a near-peer supervisor. A pre- and post-course assignment consisted of a self-assessment questionnaire on communication skills, analysis of a medical text with respect to medical jargon, and the translation of a medical report into plain language. In the self-assessment, students rated themselves significantly higher on most aspects of patient-centered communication after attending the course. After the course they also marked significantly more medical jargon terms correctly than before, and their translations of medical reports scored significantly higher on communicative aspects. Translation exercises of this kind appear to foster patient-centered communication skills and medical knowledge in undergraduate medical students. The authors recommend including translation exercises in the undergraduate medical curriculum. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Full Text Available Abstract - Evidence of Shakespeare’s interest in food preparation and cooking is recurrent throughout his works, though the difficulties posed by the translation of such figurative language have attracted much less interest among scholars. Building on some earlier research (Scarpa 1995a, 1995b) and some more recent publications (Fitzpatrick 2007, 2011) on the language of food, taste and cooking in Shakespeare’s plays, the paper discusses some instances of the translation into Italian by different translators of this often very culture-specific knowledge and terminology, in terms of the difficulty of maintaining the language of food when rendering such imagery in the target language. This specialized language may in fact be considered to fall into the Bard’s language of “things” and, as such, stands most in danger of becoming archaic and posing a problem for translators with a different historical and cultural background. The examples will mainly be drawn from the two practical operations of the baking of bread, cakes and pastry, and the preparation and cooking of meat. It will be argued that the translation approach most suited to all food references in Shakespeare’s plays is a reader-centred approach, and in the conclusion some remarks will also be made on other reader-centred approaches to Shakespeare’s language outside the boundaries of Translation Studies which can have a positive impact on revitalizing Shakespeare for a contemporary audience.
Colin, C; Zuinen, T; Bayard, C; Leybaert, J
Sign languages (SL), like oral languages (OL), organize elementary, meaningless units into meaningful semantic units. Our aim was to compare, at behavioral and neurophysiological levels, the processing of the location parameter in French Belgian SL to that of the rhyme in oral French. Ten hearing and 10 profoundly deaf adults performed a rhyme judgment task in OL and a similarity judgment on location in SL. Stimuli were pairs of pictures. As regards OL, deaf subjects' performances, although above chance level, were significantly lower than those of hearing subjects, suggesting that a metaphonological analysis is possible for deaf people but rests on phonological representations that are less precise than in hearing people. As regards SL, deaf subjects' scores indicated that a metaphonological judgment may be performed on location. The contingent negative variation (CNV) evoked by the first picture of a pair was similar in hearing subjects in OL and in deaf subjects in OL and SL. However, an N400 evoked by the second picture of the non-rhyming pairs was evidenced only in hearing subjects in OL. The absence of an N400 in deaf subjects may be interpreted as the failure to associate two words according to their rhyme in OL or to their location in SL. Although deaf participants can perform metaphonological judgments in OL, they differ from hearing participants both behaviorally and in ERPs. Judgment of location in SL is possible for deaf signers but, contrary to rhyme judgment in hearing participants, does not elicit an N400. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Haines, Kevin; Dijk, Anje
The CEFR will only achieve its potential in higher education if it is embedded in a meaningful way in the wider processes of the university. One means of embedding the CEFR is through policy, and in this article we report the development of a language policy in the broader context of
Full Text Available The translation of children's fantasy novels, and the problems translators face in rendering them into different languages, is one of the core issues in the field of translation studies. This issue has attracted the attention of many researchers, and extensive study has been carried out on various novels. The Harry Potter series of novels written by British author J.K. Rowling is one of the most famous works of children's fantasy; it gained popularity worldwide and was translated into 73 languages. The use of various cultural terms and made-up words in the novels has posed a great challenge for translators. The purpose of the present study is to identify these culture-related terms and made-up words in the novel “Harry Potter and the Chamber of Secrets” and to investigate the strategies used by the translator in translating them into Urdu. A descriptive analysis of the translation of culture-related items and made-up words was made using the strategies proposed by Davies (2003). The findings of this research showed that the translator predominantly used localization and transformation strategies for food items, magical objects and imaginative words.
Beal-Alvarez, Jennifer S.; Scheetz, Nanci A.
In deaf education, the sign language skills of teacher and interpreter candidates are infrequently assessed; when they are, formal measures are commonly used upon preparation program completion, as opposed to informal measures related to instructional tasks. Using an informal picture storybook task, the authors investigated the receptive and…
Kamnardsiri, Teerawat; Hongsit, Ler-on; Khuwuthyakorn, Pattaraporn; Wongta, Noppon
This paper investigated students' achievement in learning American Sign Language (ASL) using two different methods, with two groups of participants. The first, experimental group (Group A) used game-based learning for ASL with Kinect. The second, control group (Group B) used the traditional face-to-face learning method, generally…
Deaf people have long held the belief that American Sign Language (ASL) plays a significant role in the academic development of deaf children. Despite this, the education of deaf children has historically been exclusive of ASL and constructed as an English-only, deficit-based pedagogy. Newer research, however, finds a strong correlation between…