WorldWideScience

Sample records for abstract orthographic codes

  1. Masked form priming in writing words from pictures: evidence for direct retrieval of orthographic codes.

    Science.gov (United States)

    Bonin, P; Fayol, M; Peereman, R

    1998-09-01

    Three experiments used the masked priming paradigm to investigate the role of orthographic and phonological information in written picture naming. In all the experiments, participants had to write the names of pictures as quickly as possible under three different priming conditions. Nonword primes could be: (1) phonologically and orthographically related to the picture name; (2) orthographically related as in (1) but phonologically related to a lesser degree than in (1); (3) orthographically and phonologically unrelated except for the first consonant (or consonant cluster). Orthographic priming effects were observed with a prime exposure duration of 34 ms (Experiments 1 and 2) and of 51 ms (Experiment 3). In none of the experiments did homophony between primes and picture names yield an additional advantage. Taken together, these findings support the view that orthographic information is retrieved directly through lexical access in written picture naming, and thus argue against the traditional view that the retrieval of orthographic codes is obligatorily mediated by phonology.

  2. Developing a universal model of reading necessitates cracking the orthographic code.

    Science.gov (United States)

    Davis, Colin J

    2012-10-01

    I argue, contra Frost, that when prime lexicality and target density are considered, it is not clear that there are fundamental differences between form priming effects in Semitic and European languages. Furthermore, identifying and naming printed words in these languages raises common theoretical problems. Solving these problems and developing a universal model of reading necessitates "cracking" the orthographic input code.

  3. Phonetic radicals, not phonological coding systems, support orthographic learning via self-teaching in Chinese.

    Science.gov (United States)

    Li, Luan; Wang, Hua-Chen; Castles, Anne; Hsieh, Miao-Ling; Marinus, Eva

    2018-07-01

    According to the self-teaching hypothesis (Share, 1995), phonological decoding is fundamental to acquiring orthographic representations of novel written words. However, phonological decoding is not straightforward in non-alphabetic scripts such as Chinese, where words are presented as characters. Here, we present the first study investigating the role of phonological decoding in orthographic learning in Chinese. We examined two possible types of phonological decoding: the use of phonetic radicals, an internal phonological aid, and the use of Zhuyin, an external phonological coding system. Seventy-three Grade 2 children were taught the pronunciations and meanings of twelve novel compound characters over four days. They were then exposed to the written characters in short stories, and were assessed on their reading accuracy and on their subsequent orthographic learning via orthographic choice and spelling tasks. The novel characters were assigned three different types of pronunciation in relation to their phonetic radicals: (1) a pronunciation identical to the phonetic radical in isolation; (2) a common alternative pronunciation associated with the phonetic radical when it appears in other characters; and (3) a pronunciation unrelated to the phonetic radical. The presence of Zhuyin was also manipulated. The children read the novel characters more accurately when phonological cues from the phonetic radicals were available and in the presence of Zhuyin. However, only the phonetic radicals facilitated orthographic learning. The findings provide the first empirical evidence of orthographic learning via self-teaching in Chinese, and reveal how phonological decoding functions to support learning in non-alphabetic writing systems. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Differential cognitive processing of Kanji and Kana words: do orthographic and semantic codes function in parallel in word matching task.

    Science.gov (United States)

    Kawakami, A; Hatta, T; Kogure, T

    2001-12-01

    The relative engagement of orthographic and semantic codes in Kanji and Hiragana word recognition was investigated. In Exp. 1, subjects judged whether pairs of Kanji words (prime and target) presented sequentially were physically identical to each other in the word condition. In the sentence condition, subjects decided whether the target word was valid for the prime sentence presented in advance. The results showed that response times to target words orthographically similar (to the prime) were significantly slower than to semantically related target words in the word condition, and that this was also the case in the sentence condition. In Exp. 2, subjects judged whether the target word written in Hiragana was physically identical to the prime word in the word condition. In the sentence condition, subjects decided if the target word was valid for the previously presented prime sentence. Analysis indicated that response times to orthographically similar words were slower than to semantically related words in the word condition but not in the sentence condition, wherein the response times to the semantically and orthographically similar words were largely the same. Based on these results, the differential contributions of orthographic and semantic codes in the cognitive processing of Japanese Kanji and Hiragana words are discussed.

  5. Examining the Role of Orthographic Coding Ability in Elementary Students with Previously Identified Reading Disability, Speech or Language Impairment, or Comorbid Language and Learning Disabilities

    Science.gov (United States)

    Haugh, Erin Kathleen

    2017-01-01

    The purpose of this study was to examine the role orthographic coding might play in distinguishing between membership in groups of language-based disability types. The sample consisted of 36 second and third-grade subjects who were administered the PAL-II Receptive Coding and Word Choice Accuracy subtest as a measure of orthographic coding…

  6. The temporal courses of phonological and orthographic encoding in handwritten production in Chinese: An ERP study

    Directory of Open Access Journals (Sweden)

    Qingfang Zhang

    2016-08-01

    A central issue in written production concerns how phonological codes influence the output of orthographic codes. We used a picture-word interference paradigm combined with the event-related potential technique to investigate the temporal courses of phonological and orthographic activation and their interplay in Chinese writing. Distractors were orthographically related, phonologically related, orthographically plus phonologically related, or unrelated to picture names. The behavioral results replicated the classic facilitation effect for all three types of relatedness. The ERP results indicated an orthographic effect in the time window of 370 to 500 ms (onset latency: 370 ms), a phonological effect in the time window of 460 to 500 ms (onset latency: 464 ms), and an additive pattern of both effects in both time windows, thus indicating that orthographic codes were accessed earlier than, and independently of, phonological codes in written production. The orthographic activation originates from the semantic system, whereas the phonological effect results from the activation spreading from the orthographic lexicon to the phonological lexicon. These findings substantially strengthen the existing evidence that access to orthographic codes is not mediated by phonological information, and they provide important support for the orthographic autonomy hypothesis.
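
The onset latencies reported above (e.g., 370 ms) are typically estimated by locating the first time point at which the condition difference wave exceeds a criterion. The sketch below shows one common rule of thumb, a consecutive-samples threshold; the function, toy waveforms, and threshold are illustrative assumptions, not the study's actual analysis pipeline.

```python
# Sketch: estimating the onset latency of an ERP difference effect.
# Hypothetical illustration: the "onset" is the first time point where
# the condition difference exceeds a threshold for k consecutive samples.

def onset_latency(cond_a, cond_b, times, threshold, k=5):
    """Return the first time (ms) at which |cond_a - cond_b| > threshold
    for k consecutive samples, or None if no such run exists."""
    run = 0
    for i, (a, b) in enumerate(zip(cond_a, cond_b)):
        if abs(a - b) > threshold:
            run += 1
            if run == k:
                return times[i - k + 1]  # start of the run
        else:
            run = 0
    return None

# Toy waveforms sampled every 2 ms: the difference emerges at 370 ms.
times = [2 * i for i in range(300)]                     # 0..598 ms
unrelated = [0.0] * 300
related = [0.0 if t < 370 else 1.5 for t in times]
print(onset_latency(related, unrelated, times, threshold=1.0))  # -> 370
```

Requiring k consecutive supra-threshold samples guards against a single noisy sample being mistaken for the onset of the effect.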

  7. Nuclear code abstracts (1975 edition)

    International Nuclear Information System (INIS)

    Akanuma, Makoto; Hirakawa, Takashi

    1976-02-01

    Nuclear Code Abstracts is compiled in the Nuclear Code Committee to exchange information of the nuclear code developments among members of the committee. Enlarging the collection, the present one includes nuclear code abstracts obtained in 1975 through liaison officers of the organizations in Japan participating in the Nuclear Energy Agency's Computer Program Library at Ispra, Italy. The classification of nuclear codes and the format of code abstracts are the same as those in the library. (auth.)

  8. Orthographic effects in spoken word recognition: Evidence from Chinese.

    Science.gov (United States)

    Qu, Qingqing; Damian, Markus F

    2017-06-01

    Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.

  9. Time course analyses of orthographic and phonological priming effects during word recognition in a transparent orthography.

    Science.gov (United States)

    Zeguers, M H T; Snellings, P; Huizenga, H M; van der Molen, M W

    2014-10-01

    In opaque orthographies, the activation of orthographic and phonological codes follows distinct time courses during visual word recognition. However, it is unclear how orthography and phonology are accessed in more transparent orthographies. Therefore, we conducted time course analyses of masked priming effects in the transparent Dutch orthography. The first study used targets with small phonological differences between phonological and orthographic primes, which are typical in transparent orthographies. Results showed consistent orthographic priming effects, yet phonological priming effects were absent. The second study explicitly manipulated the strength of the phonological difference and revealed that both orthographic and phonological priming effects became identifiable when phonological differences were strong enough. This suggests that, similar to opaque orthographies, strong phonological differences are a prerequisite to separate orthographic and phonological priming effects in transparent orthographies. Orthographic and phonological priming appeared to follow distinct time courses, with orthographic codes being quickly translated into phonological codes and phonology dominating the remainder of the lexical access phase.

  10. A dual-route approach to orthographic processing.

    Science.gov (United States)

    Grainger, Jonathan; Ziegler, Johannes C

    2011-01-01

    In the present theoretical note we examine how different learning constraints, thought to be involved in optimizing the mapping of print to meaning during reading acquisition, might shape the nature of the orthographic code involved in skilled reading. On the one hand, optimization is hypothesized to involve selecting combinations of letters that are the most informative with respect to word identity (diagnosticity constraint), and on the other hand to involve the detection of letter combinations that correspond to pre-existing sublexical phonological and morphological representations (chunking constraint). These two constraints give rise to two different kinds of prelexical orthographic code, a coarse-grained and a fine-grained code, associated with the two routes of a dual-route architecture. Processing along the coarse-grained route optimizes fast access to semantics by using minimal subsets of letters that maximize information with respect to word identity, while coding for approximate within-word letter position independently of letter contiguity. Processing along the fine-grained route, on the other hand, is sensitive to the precise ordering of letters, as well as to position with respect to word beginnings and endings. This enables the chunking of frequently co-occurring contiguous letter combinations that form relevant units for morpho-orthographic processing (prefixes and suffixes) and for the sublexical translation of print to sound (multi-letter graphemes).

  12. Abstraction carrying code and resource-awareness

    OpenAIRE

    Hermenegildo, Manuel V.; Albert Albiol, Elvira; López García, Pedro; Puebla Sánchez, Alvaro Germán

    2005-01-01

    Proof-Carrying Code (PCC) is a general approach to mobile code safety in which the code supplier augments the program with a certificate (or proof). The intended benefit is that the program consumer can locally validate the certificate w.r.t. the "untrusted" program by means of a certificate checker, a process which should be much simpler, more efficient, and more automatic than generating the original proof. Abstraction Carrying Code (ACC) is an enabling technology for PCC in which an abstract mod...
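
The asymmetry that PCC and ACC exploit, checking a certificate being far cheaper than producing it, can be sketched in miniature. In the hypothetical toy below (not the authors' actual system), the "abstract model" shipped as the certificate is an interval for the variable at each point of a tiny straight-line program, and the consumer merely verifies that each certified interval contains the result of one abstract step.

```python
# Toy sketch of Abstraction-Carrying Code: the supplier ships an
# abstract model (one interval per program point) as the certificate;
# the consumer only CHECKS consistency instead of inferring the model.
# The instruction set and all names here are invented for illustration.

def transfer(interval, instr):
    """Abstract effect of one instruction on an interval for x."""
    lo, hi = interval
    op, n = instr
    if op == "add":                 # x = x + n
        return (lo + n, hi + n)
    if op == "clamp":               # x = min(x, n)
        return (min(lo, n), min(hi, n))
    raise ValueError(op)

def check_certificate(program, certificate):
    """Consumer side: each certified interval must contain the result of
    applying the transfer function to the preceding interval."""
    for i, instr in enumerate(program):
        lo, hi = transfer(certificate[i], instr)
        clo, chi = certificate[i + 1]
        if not (clo <= lo and hi <= chi):   # containment check
            return False
    return True

program = [("add", 2), ("clamp", 10), ("add", 1)]
certificate = [(0, 5), (2, 7), (2, 7), (3, 8)]   # supplier-provided
print(check_certificate(program, certificate))   # -> True
```

The checker performs one pass of containment tests, whereas the supplier's analyser had to iterate to a fixpoint to produce the certificate in the first place; that is the trust-and-cost trade-off ACC is built on.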

  13. Boosting orthographic learning during independent reading

    DEFF Research Database (Denmark)

    Nielsen, Anne-Mette Veber

    2016-01-01

    Research has shown that phonological decoding is critical for orthographic learning of new words during independent reading. Moreover, correlational studies have demonstrated that the strength of orthographic learning is related to the orthographic knowledge with which readers approach a text. The present training study was conducted to assess experimentally whether this relation between prior orthographic knowledge and orthographic learning while reading is causal by assessing whether instruction designed to increase sublexical orthographic knowledge would facilitate orthographic learning during independent reading. A group of Danish-speaking third graders (n = 21) was taught conditional spelling patterns conforming to the opaque Danish writing system, with emphasis on how to map the spellings onto their pronunciations. A matched control group (n = 21) received no treatment. Both groups were exposed…

  14. Transposed-letter priming of prelexical orthographic representations.

    Science.gov (United States)

    Kinoshita, Sachiko; Norris, Dennis

    2009-01-01

    A prime generated by transposing two internal letters (e.g., jugde) produces strong priming of the original word (judge). In lexical decision, this transposed-letter (TL) priming effect is generally weak or absent for nonword targets; thus, it is unclear whether the origin of this effect is lexical or prelexical. The authors describe the Bayesian Reader theory of masked priming (D. Norris & S. Kinoshita, 2008), which explains why nonwords do not show priming in lexical decision but why they do in the cross-case same-different task. This analysis is followed by 3 experiments that show that priming in this task is not based on low-level perceptual similarity between the prime and target, or on phonology, to make the case that priming is based on prelexical orthographic representation. The authors then use this task to demonstrate equivalent TL priming effects for nonwords and words. The results are interpreted as the first reliable evidence based on the masked priming procedure that letter position is not coded absolutely within the prelexical, orthographic representation. The implications of the results for current letter position coding schemes are discussed.

  15. Orthographic familiarity, phonological legality and number of orthographic neighbours affect the onset of ERP lexical effects

    Directory of Open Access Journals (Sweden)

    Adorni Roberta

    2008-07-01

    Background: It has been suggested that the variability among studies in the onset of lexical effects may be due to a series of methodological differences. In this study we investigated the role of orthographic familiarity, phonological legality, and number of orthographic neighbours of words in determining the onset of word/non-word discriminative responses. Methods: ERPs were recorded from 128 sites in 16 Italian university students engaged in a lexical decision task. Stimuli were 100 words, 100 quasi-words (obtained by the replacement of a single letter), 100 pseudo-words (non-derived), and 100 illegal letter strings. All stimuli were balanced for length; words and quasi-words were also balanced for frequency of use, domain of semantic category, and imageability. swLORETA source reconstruction was performed on ERP difference waves of interest. Results: Overall, the data provided evidence that the latency of lexical effects (word/non-word discrimination) varied as a function of the number of a word's orthographic neighbours, being shorter for non-derived than for derived pseudo-words. This suggests some caveats about the use in lexical decision paradigms of quasi-words obtained by transposing or replacing only 1 or 2 letters. Our findings also showed that the left occipito-temporal area, reflecting the activity of the left fusiform gyrus (BA37) of the temporal lobe, was affected by the visual familiarity of words, thus explaining its lexical sensitivity (word vs. non-word discrimination). The temporo-parietal area was markedly sensitive to phonological legality, exhibiting a clear-cut discriminative response between illegal and legal strings as early as 250 ms. Conclusion: The onset of lexical effects in a lexical decision paradigm depends on a series of factors, including orthographic familiarity, degree of global lexical activity, and phonological legality of non-words.

  16. Reading Comprehension in Boys with ADHD: The Mediating Roles of Working Memory and Orthographic Conversion.

    Science.gov (United States)

    Friedman, Lauren M; Rapport, Mark D; Raiker, Joseph S; Orban, Sarah A; Eckrich, Samuel J

    2017-02-01

    Reading comprehension difficulties in children with ADHD are well established; however, limited information exists concerning the cognitive mechanisms that contribute to these difficulties and the extent to which they interact with one another. The current study examines two broad cognitive processes known to be involved in children's reading comprehension abilities: (a) working memory (WM; i.e., central executive processes [CE], phonological short-term memory [PH STM], and visuospatial short-term memory [VS STM]) and (b) orthographic conversion (i.e., conversion of visually presented text to a phonological code), to elucidate their unique and interactive contributions to ADHD-related reading comprehension differences. Thirty-one boys with ADHD-combined type and 30 typically developing (TD) boys aged 8 to 12 years (M = 9.64, SD = 1.22) were administered multiple counterbalanced tasks assessing WM and orthographic conversion processes. Relative to TD boys, boys with ADHD exhibited significant deficits in PH STM (d = -0.70), VS STM (d = -0.92), CE (d = -1.58), and orthographic conversion (d = -0.93). Bias-corrected, bootstrapped mediation analyses revealed that CE and orthographic conversion processes, modeled separately, partially mediated ADHD-related reading comprehension differences, whereas PH STM and VS STM did not. Modeled jointly, CE and orthographic conversion fully mediated ADHD-related reading comprehension differences, wherein orthographic conversion's large-magnitude influence on reading comprehension occurred indirectly through CE's impact on the orthographic system. The findings suggest that adaptive cognitive interventions designed to improve reading-related outcomes in children with ADHD may benefit from including modules that train CE and orthographic conversion processes independently and interactively.
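
The bootstrapped mediation logic referred to above can be illustrated with a small synthetic example: an indirect effect is the product of the X-to-M slope (a) and the M-to-Y slope controlling for X (b), and it is judged reliable when a bootstrap confidence interval for a*b excludes zero. The sketch below uses a plain percentile bootstrap on invented data; the study itself used bias-corrected bootstrapping on clinical measures, and every name here is a hypothetical stand-in.

```python
# Toy percentile-bootstrap test of an indirect (mediation) effect a*b.
# Illustrative only: synthetic data, simple percentile CI.
import random

def slope_a(x, m):
    """OLS slope of M on X."""
    mx, mm = sum(x) / len(x), sum(m) / len(m)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxm = sum((xi - mx) * (mi - mm) for xi, mi in zip(x, m))
    return sxm / sxx

def slope_b(x, m, y):
    """Partial OLS slope of Y on M, controlling for X."""
    n = len(x)
    cx, cm, cy = sum(x) / n, sum(m) / n, sum(y) / n
    sxx = sum((xi - cx) ** 2 for xi in x)
    smm = sum((mi - cm) ** 2 for mi in m)
    sxm = sum((xi - cx) * (mi - cm) for xi, mi in zip(x, m))
    smy = sum((mi - cm) * (yi - cy) for mi, yi in zip(m, y))
    sxy = sum((xi - cx) * (yi - cy) for xi, yi in zip(x, y))
    return (smy * sxx - sxy * sxm) / (smm * sxx - sxm ** 2)

def bootstrap_indirect(x, m, y, reps=2000, seed=1):
    """95% percentile CI for the indirect effect a*b."""
    rng = random.Random(seed)
    n = len(x)
    estimates = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        bx = [x[i] for i in idx]
        bm = [m[i] for i in idx]
        by = [y[i] for i in idx]
        try:
            estimates.append(slope_a(bx, bm) * slope_b(bx, bm, by))
        except ZeroDivisionError:      # degenerate (collinear) resample
            continue
    estimates.sort()
    return estimates[int(0.025 * len(estimates))], estimates[int(0.975 * len(estimates))]

# Synthetic mediation: X drives M (a ~ 2), M drives Y (b ~ 3).
x = list(range(40))
m = [2 * xi + (0.5 if xi % 2 == 0 else -0.5) for xi in x]
y = [3 * mi + (0.3 if i % 3 == 0 else -0.3) for i, mi in enumerate(m)]
lo, hi = bootstrap_indirect(x, m, y)   # CI excluding 0 -> reliable effect
```

With both paths strongly positive in the synthetic data, the interval stays well above zero, which is the criterion the mediation analyses apply.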

  17. Reversible machine code and its abstract processor architecture

    DEFF Research Database (Denmark)

    Axelsen, Holger Bock; Glück, Robert; Yokoyama, Tetsuo

    2007-01-01

    A reversible abstract machine architecture and its reversible machine code are presented and formalized. For machine code to be reversible, both the underlying control logic and each instruction must be reversible. A general class of machine instruction sets was proven to be reversible, building...
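
The requirement that each instruction (and not just the control logic) be reversible can be illustrated with a toy register machine whose instructions all have exact inverses; running the program backwards then restores the initial state. This is a hypothetical sketch, not the paper's formalized architecture, and the instruction names are invented.

```python
# Toy reversible instruction set: every instruction has an exact inverse,
# so executing the program in reverse undoes it completely.

def step(regs, instr, forward=True):
    op, a, b = instr
    sign = 1 if forward else -1
    if op == "ADD":            # regs[a] += regs[b]; inverse is subtraction
        regs[a] += sign * regs[b]
    elif op == "ADDI":         # regs[a] += b; inverse is subtraction
        regs[a] += sign * b
    elif op == "XOR":          # regs[a] ^= regs[b]; self-inverse (a != b)
        regs[a] ^= regs[b]
    else:
        raise ValueError(op)
    return regs

def run(regs, program, forward=True):
    """Execute the program; in reverse mode, invert each instruction in
    reverse order."""
    order = program if forward else reversed(program)
    for instr in order:
        step(regs, instr, forward)
    return regs

state = {"r0": 3, "r1": 5}
prog = [("ADDI", "r0", 7), ("ADD", "r1", "r0"), ("XOR", "r0", "r1")]
after = run(dict(state), prog)                    # forward execution
restored = run(dict(after), prog, forward=False)  # inverse execution
print(restored == state)  # -> True
```

Note what is absent: no instruction overwrites a register with a value that cannot be recomputed from the result, which is exactly the property an irreversible `MOV r1, r2` would violate.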

  18. Orthographic Learning in Spanish Children

    Science.gov (United States)

    Suárez-Coalla, Paz; Álvarez-Cañizo, Marta; Cuetos, Fernando

    2016-01-01

    In order to read fluently, children have to form orthographic representations. Despite numerous investigations, there is no clear answer to the question of the number of times they need to read a word to form an orthographic representation. We used length effect on reading times as a measure, because there are large differences between long and…

  19. Tracking orthographic learning in children with different types of dyslexia

    Directory of Open Access Journals (Sweden)

    Hua-Chen Wang

    2014-07-01

    Previous studies have found that children with reading difficulties need more exposures to acquire the representations needed to support fluent reading than typically developing readers (e.g., Ehri & Saltmarsh, 1995). Building on existing orthographic learning paradigms, we report an investigation of orthographic learning in poor readers using a new learning task tracking both the accuracy (untimed exposure duration) and fluency (200 ms exposure duration) of learning novel words over trials. In Study 1, we used the paradigm to examine orthographic learning in children with specific poor reader profiles (9 with a surface profile, 9 with a phonological profile, and 9 age-matched controls). Both profiles showed improvement over the learning cycles, but the children with a surface profile showed impaired orthographic learning in spelling and orthographic choice tasks. Study 2 explored predictors of orthographic learning in a group of 91 poor readers using the same outcome measures as in Study 1. Consistent with earlier findings in typically developing readers, phonological decoding skill predicted orthographic learning. Moreover, orthographic knowledge significantly predicted orthographic learning over and beyond phonological decoding. The two studies provide insights into how poor readers learn novel words, and how their learning process may be compromised by less proficient orthographic and/or phonological skills.

  20. Orthographic Skills Important to Chinese Literacy Development: The Role of Radical Representation and Orthographic Memory of Radicals

    Science.gov (United States)

    Yeung, Pui-sze; Ho, Connie Suk-han; Chan, David Wai-ock; Chung, Kevin Kien-hoa

    2016-01-01

    A 3-year longitudinal study among 239 Chinese students in Grades 2-4 was conducted to investigate the relationships between orthographic skills (including positional and functional knowledge of semantic radicals and phonetic radicals, and orthographic memory of radicals) and Chinese literacy skills (word reading, word spelling, reading…

  1. English Orthographic Learning in Chinese-L1 Young EFL Beginners.

    Science.gov (United States)

    Cheng, Yu-Lin

    2017-12-01

    English orthographic learning, among Chinese-L1 children who were beginning to learn English as a foreign language, was documented when: (1) only visual memory was at their disposal; (2) visual memory and either some letter-sound knowledge or some semantic information was available; and (3) visual memory, some letter-sound knowledge, and some semantic information were all available. When only visual memory was available, orthographic learning (measured via an orthographic choice test) was meagre. Orthographic learning was significant when either semantic information or letter-sound knowledge supplemented visual memory, with letter-sound knowledge yielding the larger benefit. Although the results suggest that letter-sound knowledge plays a more important role than semantic information, letter-sound knowledge alone does not suffice for perfect orthographic learning, as orthographic learning was greatest when letter-sound knowledge and semantic information were both available. The present findings are congruent with the view that the orthography of a foreign language drives its orthographic learning more than L1 orthographic learning experience does, thus extending Share's (Cognition 55:151-218, 1995) self-teaching hypothesis to non-alphabetic-L1 children's orthographic learning of an alphabetic foreign language. The minimal letter-sound knowledge development observed in the Experiment 1 control group indicates that very little letter-sound knowledge develops in the absence of dedicated letter-sound training. Given the important role of letter-sound knowledge in English orthographic learning, dedicated letter-sound instruction is highly recommended.

  2. Orthographic Learning in Dyslexic Spanish Children

    Science.gov (United States)

    Suárez-Coalla, Paz; Ramos, Sara; Álvarez-Cañizo, Marta; Cuetos, Fernando

    2014-01-01

    Reading fluency is one of the basic processes of learning to read. Children begin to develop fluency when they are able to form orthographic representations of words, which provide direct, smooth, and fast reading. Dyslexic children of transparent orthographic systems are mainly characterized by poor reading fluency (Cuetos & Suárez-Coalla…

  3. Functional and anatomical dissociation between the orthographic lexicon and the orthographic buffer revealed in reading and writing Chinese characters by fMRI.

    Science.gov (United States)

    Chen, Hsiang-Yu; Chang, Erik C; Chen, Sinead H Y; Lin, Yi-Chen; Wu, Denise H

    2016-04-01

    The contribution of orthographic representations to reading and writing has been intensively investigated in the literature. However, the distinction between neuronal correlates of the orthographic lexicon and the orthographic (graphemic) buffer has rarely been examined in alphabetic languages and never been explored in non-alphabetic languages. To determine whether the neural networks associated with the orthographic lexicon and buffer of logographic materials are comparable to those reported in the literature, the present fMRI experiment manipulated frequency and the stroke number of Chinese characters in the tasks of form judgment and stroke judgment, which emphasized the processing of character recognition and writing, respectively. It was found that the left fusiform gyrus exhibited higher activation when encountering low-frequency than high-frequency characters in both tasks, which suggested this region to be the locus of the orthographic lexicon that represents the knowledge of character forms. On the other hand, the activations in the posterior part of the left middle frontal gyrus and in the left angular gyrus were parametrically modulated by the stroke number of target characters only in the stroke judgment task, which suggested these regions to be the locus of the orthographic buffer that represents the processing of stroke sequence in writing. These results provide the first evidence for the functional and anatomical dissociation between the orthographic lexicon and buffer in reading and writing Chinese characters. They also demonstrate the critical roles of the left fusiform area and the frontoparietal network to the long-term and short-term representations of orthographic knowledge, respectively, across different orthographies. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    Science.gov (United States)

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  5. Orthographic recognition in late adolescents: an assessment through event-related brain potentials.

    Science.gov (United States)

    González-Garrido, Andrés Antonio; Gómez-Velázquez, Fabiola Reveca; Rodríguez-Santillán, Elizabeth

    2014-04-01

    Reading speed and efficiency are achieved through the automatic recognition of written words. Difficulties in learning and recognizing the orthography of words can arise despite reiterative exposure to texts. This study aimed to investigate, in native Spanish-speaking late adolescents, how different levels of orthographic knowledge might result in behavioral and event-related brain potential differences during the recognition of orthographic errors. Forty-five healthy high school students were selected and divided into 3 equal groups (High, Medium, Low) according to their performance on a 5-test battery of orthographic knowledge. All participants performed an orthographic recognition task consisting of the sequential presentation of a picture (object, fruit, or animal) followed by a correctly, or incorrectly, written word (orthographic mismatch) that named the picture just shown. Electroencephalogram (EEG) recording took place simultaneously. Behavioral results showed that the Low group had a significantly lower number of correct responses and increased reaction times while processing orthographical errors. Tests showed significant positive correlations between higher performance on the experimental task and faster and more accurate reading. The P150 and P450 components showed higher voltages in the High group when processing orthographic errors, whereas N170 seemed less lateralized to the left hemisphere in the lower orthographic performers. Also, trials with orthographic errors elicited a frontal P450 component that was only evident in the High group. The present results show that higher levels of orthographic knowledge correlate with high reading performance, likely because of faster and more accurate perceptual processing, better visual orthographic representations, and top-down supervision, as the event-related brain potential findings seem to suggest.

  6. WASTK: A Weighted Abstract Syntax Tree Kernel Method for Source Code Plagiarism Detection

    Directory of Open Access Journals (Sweden)

    Deqiang Fu

    2017-01-01

    Full Text Available In this paper, we introduce a source code plagiarism detection method, named WASTK (Weighted Abstract Syntax Tree Kernel), for computer science education. Different from other plagiarism detection methods, WASTK takes aspects other than the similarity between programs into account. WASTK first transforms the source code of a program into an abstract syntax tree and then obtains the similarity by calculating the tree kernel of the two abstract syntax trees. To avoid misjudgment caused by trivial code snippets or frameworks given by instructors, an idea similar to TF-IDF (Term Frequency-Inverse Document Frequency) in the field of information retrieval is applied: each node in an abstract syntax tree is assigned a weight by TF-IDF. WASTK is evaluated on different datasets and, as a result, performs much better than other popular methods such as Sim and JPlag.
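
    The pipeline the abstract describes (parse each program into an AST, weight node features by TF-IDF across the corpus, then compare) can be sketched roughly as follows. This is a simplified bag-of-node-types approximation rather than WASTK's actual tree-kernel computation, and the toy programs are invented for illustration.

    ```python
    import ast
    import math
    from collections import Counter

    def node_type_counts(source):
        """Bag of AST node types for one program (a crude stand-in for
        WASTK's tree representation)."""
        return Counter(type(node).__name__ for node in ast.walk(ast.parse(source)))

    def tfidf_vectors(counts_list):
        """Weight each program's node-type frequencies by TF-IDF, so node
        types common to every submission (boilerplate) carry little weight."""
        n_docs = len(counts_list)
        df = Counter()
        for counts in counts_list:
            df.update(counts.keys())
        vectors = []
        for counts in counts_list:
            total = sum(counts.values())
            vectors.append({t: (c / total) * math.log((1 + n_docs) / (1 + df[t]))
                            for t, c in counts.items()})
        return vectors

    def cosine(u, v):
        """Cosine similarity between sparse weight vectors (the paper uses a
        tree kernel instead; cosine keeps the sketch short)."""
        dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
        norm_u = math.sqrt(sum(x * x for x in u.values()))
        norm_v = math.sqrt(sum(x * x for x in v.values()))
        return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

    prog_a = "def add(a, b):\n    return a + b\n"
    prog_b = "def plus(x, y):\n    return x + y\n"   # prog_a with renamed identifiers
    prog_c = "for i in range(10):\n    print(i)\n"   # structurally unrelated

    vecs = tfidf_vectors([node_type_counts(p) for p in (prog_a, prog_b, prog_c)])
    print(cosine(vecs[0], vecs[1]))  # high: renaming does not change the AST shape
    print(cosine(vecs[0], vecs[2]))  # low: different node types
    ```

    A renamed copy of a function keeps the same AST node types, so it scores high even though every identifier differs; WASTK's tree kernel additionally compares subtree structure rather than a flat bag of node types.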

  7. Learner-generated drawing for phonological and orthographic dyslexic readers.

    Science.gov (United States)

    Wang, Li-Chih; Yang, Hsien-Ming; Tasi, Hung-Ju; Chan, Shih-Yi

    2013-01-01

    This study presents an examination of learner-generated drawing for different reading comprehension subtypes of dyslexic students and control students. The participants were 22 phonological dyslexic students, 20 orthographic dyslexic students, 21 double-deficit dyslexic students, and 45 age-, gender-, and IQ-matched control students. The major evaluation tools included a word recognition task, an orthographic task, a phonological awareness task, and scenery texts with questions. Comparisons of the four groups showed differences among the phonological dyslexia, orthographic dyslexia, double-deficit dyslexia, and chronological-age control groups in pre- and posttest performance on the scenery texts. Differences also existed in the relevant questions and in the effect of the learner-generated drawing method. The pretest performance revealed problems in the dyslexic samples in reading the scenery texts and answering relevant questions. The posttest performance revealed certain differences among the phonological dyslexia, orthographic dyslexia, double-deficit dyslexia, and chronological-age control groups. Finally, all dyslexic groups benefited greatly from using learner-generated drawing, particularly the orthographic dyslexia group. These results suggest that learner-generated drawing is useful for dyslexic students, with potential for classroom use in teaching text reading to dyslexic students. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Dyslexic Children Show Atypical Cerebellar Activation and Cerebro-Cerebellar Functional Connectivity in Orthographic and Phonological Processing.

    Science.gov (United States)

    Feng, Xiaoxia; Li, Le; Zhang, Manli; Yang, Xiujie; Tian, Mengyu; Xie, Weiyi; Lu, Yao; Liu, Li; Bélanger, Nathalie N; Meng, Xiangzhi; Ding, Guosheng

    2017-04-01

    Previous neuroimaging studies have found atypical cerebellar activation in individuals with dyslexia in either motor-related tasks or language tasks. However, studies investigating atypical cerebellar activation in individuals with dyslexia have mostly used tasks tapping phonological processing. A question that is yet unanswered is whether the cerebellum in individuals with dyslexia functions properly during orthographic processing of words, as growing evidence shows that the cerebellum is also involved in visual and spatial processing. Here, we investigated cerebellar activation and cerebro-cerebellar functional connectivity during word processing in dyslexic readers and typically developing readers using tasks that tap orthographic and phonological codes. In children with dyslexia, we observed an abnormally higher engagement of the bilateral cerebellum for the orthographic task, which was negatively correlated with literacy measures. The greater the reading impairment was for young dyslexic readers, the stronger the cerebellar activation was. This suggests a compensatory role of the cerebellum in reading for children with dyslexia. In addition, a tendency for higher cerebellar activation in dyslexic readers was found in the phonological task. Moreover, the functional connectivity was stronger for dyslexic readers relative to typically developing readers between the lobule VI of the right cerebellum and the left fusiform gyrus during the orthographic task and between the lobule VI of the left cerebellum and the left supramarginal gyrus during the phonological task. This pattern of results suggests that the cerebellum compensates for reading impairment through the connections with specific brain regions responsible for the ongoing reading task. These findings enhance our understanding of the cerebellum's involvement in reading and reading impairment.

  9. The Effects of Orthographic Depth on Learning to Read Alphabetic, Syllabic, and Logographic Scripts

    Science.gov (United States)

    Ellis, Nick C.; Natsume, Miwa; Stavropoulou, Katerina; Hoxhallari, Lorenc; Van Daal, Victor H.P.; Polyzoe, Nicoletta; Tsipa, Maria-Louisa; Petalas, Michalis

    2004-01-01

    This study investigated the effects of orthographic depth on reading acquisition in alphabetic, syllabic, and logographic scripts. Children between 6 and 15 years old read aloud in transparent syllabic Japanese hiragana, alphabets of increasing orthographic depth (Albanian, Greek, English), and orthographically opaque Japanese kanji ideograms,…

  10. On Coding Non-Contiguous Letter Combinations

    Directory of Open Access Journals (Sweden)

    Frédéric Dandurand

    2011-06-01

    Full Text Available Starting from the hypothesis that printed word identification initially involves the parallel mapping of visual features onto location-specific letter identities, we analyze the type of information that would be involved in optimally mapping this location-specific orthographic code onto a location-invariant lexical code. We assume that some intermediate level of coding exists between individual letters and whole words, and that this involves the representation of letter combinations. We then investigate the nature of this intermediate level of coding given the constraints of optimality. This intermediate level of coding is expected to compress data while retaining as much information as possible about word identity. Information conveyed by letters is a function of how much they constrain word identity and how visible they are. Optimization of this coding is a combination of minimizing resources (using the most compact representations) and maximizing information. We show that in a large proportion of cases, non-contiguous letter sequences contain more information than contiguous sequences, while at the same time requiring less precise coding. Moreover, we found that the best predictor of human performance in orthographic priming experiments was within-word ranking of conditional probabilities, rather than average conditional probabilities. We conclude that from an optimality perspective, readers learn to select certain contiguous and non-contiguous letter combinations as information that provides the best cue to word identity.
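
    The kind of letter-combination coding discussed above can be illustrated with a small sketch: enumerate contiguous and non-contiguous (open) bigrams, then ask how strongly each bigram constrains word identity over a lexicon. The `open_bigrams` helper, the gap limit, and the five-word lexicon are toy assumptions for illustration, not the paper's actual model.

    ```python
    from collections import defaultdict
    from itertools import combinations

    def open_bigrams(word, max_gap=2):
        """Ordered letter pairs with at most `max_gap` intervening letters:
        gap 0 gives contiguous bigrams, gap >= 1 gives non-contiguous ones."""
        return {(word[i], word[j])
                for i, j in combinations(range(len(word)), 2)
                if j - i - 1 <= max_gap}

    # Toy lexicon; a real analysis would use a full word-frequency corpus.
    lexicon = ["trail", "trial", "train", "grain", "brain"]

    # Index which lexicon words contain each bigram: the fewer words contain
    # it, the more the bigram constrains word identity.
    containing = defaultdict(set)
    for w in lexicon:
        for bg in open_bigrams(w):
            containing[bg].add(w)

    def p_word_given_bigram(word, bg):
        """P(word | bigram) under a uniform lexicon."""
        words = containing[bg]
        return 1 / len(words) if word in words else 0.0

    # The non-contiguous pair ("b", "a") uniquely identifies "brain" in this
    # lexicon, while the contiguous pair ("r", "a") appears in all five words.
    print(p_word_given_bigram("brain", ("b", "a")))  # 1.0
    print(p_word_given_bigram("brain", ("r", "a")))  # 0.2
    ```

    Ranking these conditional probabilities within a word, as the abstract describes, then indicates which of its letter combinations are the most diagnostic cues to its identity.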

  11. Abstracts of digital computer code packages. Assembled by the Radiation Shielding Information Center. [Radiation transport codes

    Energy Technology Data Exchange (ETDEWEB)

    McGill, B.; Maskewitz, B.F.; Anthony, C.M.; Comolander, H.E.; Hendrickson, H.R.

    1976-01-01

    The term "code package" is used to describe a miscellaneous grouping of materials which, when interpreted in connection with a digital computer, enables the scientist-user to solve technical problems in the area for which the material was designed. In general, a "code package" consists of written material (reports, instructions, flow charts, listings of data, and other useful material) and IBM card decks (or, more often, a reel of magnetic tape) on which the source decks, sample problem input (including libraries of data), and the BCD/EBCDIC output listing from the sample problem are written. In addition to the main code, any available auxiliary routines are also included. The abstract format was chosen to give a potential code user several criteria for deciding whether or not he wishes to request the code package. (RWR)

  12. The development of the abilities to acquire novel detailed orthographic representations and maintain them in long-term memory.

    Science.gov (United States)

    Binamé, Florence; Poncelet, Martine

    2016-03-01

    Previous studies have clearly demonstrated that the development of orthographic representations relies on phonological recoding. However, substantial questions persist about the variance in the acquisition of word-specific orthographic knowledge that remains unexplained. The main aim of this study was to explore whether two cognitive factors, sensitivity to orthographic regularities and short-term memory (STM) for serial order, make independent contributions to the acquisition of novel orthographic representations beyond those of the phonological core component and the level of preexisting word-specific orthographic knowledge. To this end, we had children from second to sixth grades learn novel written word forms using a repeated spelling practice paradigm. The speed at which children learned the word forms and their long-term retention (1 week and 1 month later) were assessed. Hierarchical regression analyses revealed that phonological recoding, preexisting word-specific orthographic knowledge, and order STM explained a portion of the variance in orthographic learning speed, whereas phonological recoding, preexisting word-specific orthographic knowledge, and orthographic sensitivity each explained a portion of the variance in the long-term retention of the newly created orthographic representations. A secondary aim of the study was to determine the developmental trajectory of the ability to acquire novel orthographic word forms over the course of primary schooling. As expected, results showed an effect of age on both learning speed and long-term retention. The specific roles of orthographic sensitivity and order STM as independent factors involved in different steps of orthographic learning are discussed. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Orthographic learning and the role of text-to-speech software in Dutch disabled readers.

    Science.gov (United States)

    Staels, Eva; Van den Broeck, Wim

    2015-01-01

    In this study, we examined whether orthographic learning can be demonstrated in disabled readers learning to read in a transparent orthography (Dutch). In addition, we tested the effect of the use of text-to-speech software, a new form of direct instruction, on orthographic learning. Both research goals were investigated by replicating Share's self-teaching paradigm. A total of 65 disabled Dutch readers were asked to read eight stories containing embedded homophonic pseudoword targets (e.g., Blot/Blod), with or without the support of text-to-speech software. The amount of orthographic learning was assessed 3 or 7 days later by three measures of orthographic learning. First, the results supported the presence of orthographic learning during independent silent reading by demonstrating that target spellings were correctly identified more often, named more quickly, and spelled more accurately than their homophone foils. Our results support the hypothesis that all readers, even poor readers of transparent orthographies, are capable of developing word-specific knowledge. Second, a negative effect of text-to-speech software on orthographic learning was demonstrated in this study. This negative effect was interpreted as the consequence of passively listening to the auditory presentation of the text. We clarify how these results can be interpreted within current theoretical accounts of orthographic learning and briefly discuss implications for remedial interventions. © Hammill Institute on Disabilities 2013.

  14. Lexical orthographic acquisition: Is handwriting better than spelling aloud?

    Directory of Open Access Journals (Sweden)

    Marie-Line Bosse

    2014-02-01

    Full Text Available Lexical orthography acquisition is currently described as the building of links between the visual forms and the auditory forms of whole words. However, a growing body of data suggests that a motor component could further be involved in orthographic acquisition. A few studies support the idea that reading plus handwriting is a better lexical orthographic learning situation than reading alone. However, these studies did not explore which of the cognitive processes involved in handwriting enhanced lexical orthographic acquisition. Some findings suggest that the specific movements memorized when learning to write may participate in the establishment of orthographic representations in memory. The aim of the present study was to assess this hypothesis using handwriting and spelling aloud as two learning conditions. In two experiments, fifth graders were asked to read complex pseudo-words embedded in short sentences. Immediately after reading, participants had to recall the pseudo-words’ spellings either by spelling them aloud or by handwriting them down. One week later, orthographic acquisition was tested using two post-tests: a pseudo-word production task (spelling by hand in Experiment 1 or spelling aloud in Experiment 2) and a pseudo-word recognition task. Results showed no significant difference in pseudo-word recognition between the two learning conditions. In the pseudo-word production task, orthography learning improved when the learning and post-test conditions were similar, thus showing a massive encoding-retrieval match effect in the two experiments. However, a mixed model analysis of the pseudo-word production results revealed a significant learning condition effect which remained after control of the encoding-retrieval match effect. This latter finding suggests that orthography learning is more efficient when mediated by handwriting than by spelling aloud, whatever the post-test production task.

  15. A test of the orthographic recoding hypothesis

    Science.gov (United States)

    Gaygen, Daniel E.

    2003-04-01

    The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of the words. Listeners have a stable orthographic representation of words, but no phonological representation, when those words have been read frequently but never heard or spoken. Such may be the case for low frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for words presented visually. In the first experiment, the recognition of auditorily presented nonwords was facilitated when they previously appeared on a visually presented list. The second experiment was similar, but included a concurrent articulation task during a visual word list presentation, thus preventing covert rehearsal of the nonwords. The results were similar to the first experiment. The third experiment was an indirect test of memory (auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.

  16. Dissociative effects of orthographic distinctiveness in pure and mixed lists: an item-order account.

    Science.gov (United States)

    McDaniel, Mark A; Cahill, Michael; Bugg, Julie M; Meadow, Nathaniel G

    2011-10-01

    We apply the item-order theory of list composition effects in free recall to the orthographic distinctiveness effect. The item-order account assumes that orthographically distinct items advantage item-specific encoding in both mixed and pure lists, but at the expense of exploiting relational information present in the list. Experiment 1 replicated the typical free recall advantage of orthographically distinct items in mixed lists and the elimination of that advantage in pure lists. Supporting the item-order account, recognition performances indicated that orthographically distinct items received greater item-specific encoding than did orthographically common items in mixed and pure lists (Experiments 1 and 2). Furthermore, order memory (input-output correspondence and sequential contiguity effects) was evident in recall of pure unstructured common lists, but not in recall of unstructured distinct lists (Experiment 1). These combined patterns, although not anticipated by prevailing views, are consistent with an item-order account.

  17. Accessing orthographic representations from speech: the role of left ventral occipitotemporal cortex in spelling.

    Science.gov (United States)

    Ludersdorfer, Philipp; Kronbichler, Martin; Wimmer, Heinz

    2015-04-01

    The present fMRI study used a spelling task to investigate the hypothesis that the left ventral occipitotemporal cortex (vOT) hosts neuronal representations of whole written words. Such an orthographic word lexicon is posited by cognitive dual-route theories of reading and spelling. In the scanner, participants performed a spelling task in which they had to indicate if a visually presented letter is present in the written form of an auditorily presented word. The main experimental manipulation distinguished between an orthographic word spelling condition in which correct spelling decisions had to be based on orthographic whole-word representations, a word spelling condition in which reliance on orthographic whole-word representations was optional and a phonological pseudoword spelling condition in which no reliance on such representations was possible. To evaluate spelling-specific activations the spelling conditions were contrasted with control conditions that also presented auditory words and pseudowords, but participants had to indicate if a visually presented letter corresponded to the gender of the speaker. We identified a left vOT cluster activated for the critical orthographic word spelling condition relative to both the control condition and the phonological pseudoword spelling condition. Our results suggest that activation of left vOT during spelling can be attributed to the retrieval of orthographic whole-word representations and, thus, support the position that the left vOT potentially represents the neuronal equivalent of the cognitive orthographic word lexicon. © 2014 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  18. Orthographic learning in children with isolated and combined reading and spelling deficits.

    Science.gov (United States)

    Mehlhase, Heike; Bakos, Sarolta; Landerl, Karin; Schulte-Körne, Gerd; Moll, Kristina

    2018-05-07

    Dissociations between reading and spelling problems are likely to be associated with different underlying cognitive deficits, and with different deficits in orthographic learning. In order to understand these differences, the current study examined orthographic learning using a printed-word learning paradigm. Children (4th grade) with isolated reading, isolated spelling, and combined reading and spelling problems were compared to children with age-appropriate reading and spelling skills on their performance during learning of novel words and symbols (non-verbal control condition), and during immediate and delayed reading and spelling recall tasks. No group differences occurred in the non-verbal control condition. In the verbal condition, initial learning was intact in all groups, but differences occurred during recall tasks. Children with reading fluency deficits showed slower reading times, while children with spelling deficits were less accurate, both in reading and spelling recall. Children with isolated spelling problems showed no difficulties in immediate spelling recall, but had problems in remembering the spellings 2 hours later. The results suggest that different orthographic learning deficits underlie reading fluency and spelling problems: children with isolated reading fluency deficits have no difficulties in building up orthographic representations, but access to these representations is slowed down, while children with isolated spelling deficits have problems in storing precise orthographic representations in long-term memory.

  19. Incidental orthographic learning during a color detection task.

    Science.gov (United States)

    Protopapas, Athanassios; Mitsi, Anna; Koustoumbardis, Miltiadis; Tsitsopoulou, Sofia M; Leventi, Marianna; Seitz, Aaron R

    2017-09-01

    Orthographic learning refers to the acquisition of knowledge about specific spelling patterns forming words and about general biases and constraints on letter sequences. It is thought to occur by strengthening simultaneously activated visual and phonological representations during reading. Here we demonstrate that a visual perceptual learning procedure that leaves no time for articulation can result in orthographic learning evidenced in improved reading and spelling performance. We employed task-irrelevant perceptual learning (TIPL), in which the stimuli to be learned are paired with an easy task target. Assorted line drawings and difficult-to-spell words were presented in red color among sequences of other black-colored words and images presented in rapid succession, constituting a fast-TIPL procedure with color detection being the explicit task. In five experiments, Greek children in Grades 4-5 showed increased recognition of words and images that had appeared in red, both during and after the training procedure, regardless of within-training testing, and also when targets appeared in blue instead of red. Significant transfer to reading and spelling emerged only after increased training intensity. In a sixth experiment, children in Grades 2-3 showed generalization to words not presented during training that carried the same derivational affixes as in the training set. We suggest that reinforcement signals related to detection of the target stimuli contribute to the strengthening of orthography-phonology connections beyond earlier levels of visually-based orthographic representation learning. These results highlight the potential of perceptual learning procedures for the reinforcement of higher-level orthographic representations. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  20. Enemies and Friends in the Neighborhood: Orthographic Similarity Effects in Semantic Categorization

    Science.gov (United States)

    Pecher, Diane; Zeelenberg, Rene; Wagenmakers, Eric-Jan

    2005-01-01

    Studies investigating orthographic similarity effects in semantic tasks have produced inconsistent results. The authors investigated orthographic similarity effects in animacy decision and in contrast with previous studies, they took semantic congruency into account. In Experiments 1 and 2, performance to a target (cat) was better if a previously…

  1. Early use of orthographic information in spoken word recognition: Event-related potential evidence from the Korean language.

    Science.gov (United States)

    Kwon, Youan; Choi, Sungmook; Lee, Yoonhyoung

    2016-04-01

    This study examines whether orthographic information is used during prelexical processes in spoken word recognition by investigating ERPs during spoken word processing for Korean words. Differential effects of orthographic syllable neighborhood size and sound-to-spelling consistency on the P200 and N320 were evaluated by recording ERPs from 42 participants during a lexical decision task. The results indicate that the P200 was smaller for words with many orthographic syllable neighbors than for words with few. In addition, a word with a large orthographic syllable neighborhood elicited a smaller N320 effect than a word with a small orthographic syllable neighborhood, but only when the word had inconsistent sound-to-spelling mapping. The results support the assumption that orthographic information is used early during the prelexical spoken word recognition process. © 2015 Society for Psychophysiological Research.

  2. Phonological similarity and orthographic similarity affect probed serial recall of Chinese characters.

    Science.gov (United States)

    Lin, Yi-Chen; Chen, Hsiang-Yu; Lai, Yvonne C; Wu, Denise H

    2015-04-01

    The previous literature on working memory (WM) has indicated that verbal materials are dominantly retained in phonological representations, whereas other linguistic information (e.g., orthography, semantics) only contributes to verbal WM minimally, if not negligibly. Although accumulating evidence has suggested that multiple linguistic components jointly support verbal WM, the visual/orthographic contribution has rarely been addressed in alphabetic languages, possibly due to the difficulty of dissociating the effects of word forms from the effects of their pronunciations in relatively shallow orthography. In the present study, we examined whether the orthographic representations of Chinese characters support the retention of verbal materials in this language of deep orthography. In Experiments 1a and 2, we independently manipulated the phonological and orthographic similarity of horizontal and vertical characters, respectively, and found that participants' accuracy of probed serial recall was reduced by both similar pronunciations and shared phonetic radicals in the to-be-remembered stimuli. Moreover, Experiment 1b showed that only the effect of phonological, but not that of orthographic, similarity was affected by concurrent articulatory suppression. Taken together, the present results indicate the indispensable contribution of orthographic representations to verbal WM of Chinese characters, and suggest that the linguistic characteristics of a specific language not only determine long-term linguistic-processing mechanisms, but also delineate the organization of verbal WM for that language.

  3. The Effect of Orthographic Complexity on Spanish Spelling in Grades 1-3

    Science.gov (United States)

    Ford, Karen; Invernizzi, Marcia; Huang, Francis

    2018-01-01

    This study was designed to identify a continuum of orthographic features that characterize Spanish spelling development in Grades 1-3. Two research questions guided this work: (1) Is there a hierarchy of orthographic features that affect students' spelling accuracy in Spanish over and above other school-level, student-level, and word-level…

  4. Abstracts of digital computer code packages. Assembled by the Radiation Shielding Information Center

    International Nuclear Information System (INIS)

    McGill, B.; Maskewitz, B.F.; Anthony, C.M.; Comolander, H.E.; Hendrickson, H.R.

    1976-01-01

    The term "code package" is used to describe a miscellaneous grouping of materials which, when interpreted in connection with a digital computer, enables the scientist-user to solve technical problems in the area for which the material was designed. In general, a "code package" consists of written material (reports, instructions, flow charts, listings of data, and other useful material) and IBM card decks (or, more often, a reel of magnetic tape) on which the source decks, sample problem input (including libraries of data), and the BCD/EBCDIC output listing from the sample problem are written. In addition to the main code, any available auxiliary routines are also included. The abstract format was chosen to give a potential code user several criteria for deciding whether or not he wishes to request the code package

  5. Does a pear growl? Interference from semantic properties of orthographic neighbors.

    Science.gov (United States)

    Pecher, Diane; de Rooij, Jimmy; Zeelenberg, René

    2009-07-01

    In this study, we investigated whether semantic properties of a word's orthographic neighbors are activated during visual word recognition. In two experiments, words were presented with a property that was not true for the word itself. We manipulated whether the property was true for an orthographic neighbor of the word. Our results showed that rejection of the property was slower and less accurate when the property was true for a neighbor than when the property was not true for a neighbor. These findings indicate that semantic information is activated before orthographic processing is finished. The present results are problematic for the links model (Forster, 2006; Forster & Hector, 2002) that was recently proposed in order to bring form-first models of visual word recognition into line with previously reported findings (Forster & Hector, 2002; Pecher, Zeelenberg, & Wagenmakers, 2005; Rodd, 2004).
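
    The neighbor relation underlying this paradigm (e.g., bear is an orthographic neighbor of pear, and "growls" is true of the neighbor but not of the target) is conventionally operationalized as single-letter substitution, i.e., Coltheart's N. A minimal sketch, assuming a toy lexicon:

    ```python
    import string

    def orthographic_neighbors(word, lexicon):
        """Coltheart's N neighbors: lexicon words obtained from `word` by
        substituting exactly one letter, keeping length and positions fixed."""
        lexset = set(lexicon)
        neighbors = set()
        for i in range(len(word)):
            for letter in string.ascii_lowercase:
                if letter != word[i]:
                    candidate = word[:i] + letter + word[i + 1:]
                    if candidate in lexset:
                        neighbors.add(candidate)
        return neighbors

    # Toy lexicon; real studies use a full word database such as CELEX.
    lexicon = ["pear", "bear", "wear", "peak", "peat", "pearl", "growl"]
    print(orthographic_neighbors("pear", lexicon))  # {'bear', 'wear', 'peak', 'peat'}
    ```

    With a real lexicon, the neighborhood size N is simply `len(orthographic_neighbors(word, lexicon))`; note that "pearl" is excluded because substitution preserves word length.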

  6. Do reading and spelling share orthographic representations? Evidence from developmental dysgraphia.

    Science.gov (United States)

    Hepner, Christopher; McCloskey, Michael; Rapp, Brenda

    Both spelling and reading depend on knowledge of the spellings of words. Despite this commonality, observed dissociations between spelling and reading in cases of acquired and developmental deficits suggest some degree of independence between the cognitive mechanisms involved in these skills. In this paper, we examine the relationship between spelling and reading in two children with developmental dysgraphia. For both children, we identified significant deficits in spelling that affected the processing of orthographic long-term memory representations of words. We then examined their reading skills for similar difficulties. Even with extensive testing, we found no evidence of a reading deficit for one of the children. We propose that there may be an underlying difficulty that specifically affects the learning of orthographic word representations for spelling. These results lead us to conclude that at least some components of lexical orthographic representation and processing develop with considerable independence in spelling and reading.

  7. The neural bases of orthographic working memory

    Directory of Open Access Journals (Sweden)

    Jeremy Purcell

    2014-04-01

    First, these results reveal a neurotopography of OWM lesion sites that is well-aligned with results from neuroimaging of orthographic working memory in neurally intact participants (Rapp & Dufor, 2011). Second, the dorsal neurotopography of the OWM lesion overlap is clearly distinct from what has been reported for lesions associated with either lexical or sublexical deficits (e.g., Henry, Beeson, Stark, & Rapcsak, 2007; Rapcsak & Beeson, 2004); these have, respectively, been identified with the inferior occipital/temporal and superior temporal/inferior parietal regions. These neurotopographic distinctions support the claims of the computational distinctiveness of long-term vs. working memory operations. The specific lesion loci raise a number of questions to be discussed regarding: (a) the selectivity of these regions and associated deficits to orthographic working memory vs. working memory more generally, and (b) the possibility that different lesion sub-regions may correspond to different components of the OWM system.

  8. Brand name confusion: Subjective and objective measures of orthographic similarity.

    Science.gov (United States)

    Burt, Jennifer S; McFarlane, Kimberley A; Kelly, Sarah J; Humphreys, Michael S; Weatherall, Kimberlee; Burrell, Robert G

    2017-09-01

    Determining brand name similarity is vital in the areas of trademark registration and brand confusion. Students rated the orthographic (spelling) similarity of word pairs (Experiments 1, 2, and 4) and brand name pairs (Experiment 5). Similarity ratings were consistently higher when words shared beginnings rather than endings, whereas shared pronunciation of the stressed vowel had small and less consistent effects on ratings. In Experiment 3, a behavioral task confirmed the salience of shared beginnings in lexical processing. Specifically, in a task requiring participants to decide whether two words presented in the clear (a probe and a later target) were the same or different, a masked prime word preceding the target shortened response latencies if it shared its initial three letters with the target. Students' ratings for word and brand name pairs were strongly predicted by metrics of orthographic similarity from the visual word identification literature, based on the number of shared letters and their relative positions. The results indicate a potential use for orthographic metrics in brand name registration and trademark law.
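
    The metrics referred to above are described only abstractly in this record (shared letters and their relative positions). As a rough, hypothetical sketch of how such a pair-similarity score might be computed (not the authors' actual measure), one can combine an edit-distance-based similarity with the length of the shared word beginning:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def orth_similarity(a: str, b: str) -> float:
    """Normalized similarity in [0, 1]: 1 means identical spellings."""
    return 1 - levenshtein(a, b) / max(len(a), len(b))

def shared_prefix(a: str, b: str) -> int:
    """Length of the shared word beginning."""
    n = 0
    for ca, cb in zip(a, b):
        if ca != cb:
            break
        n += 1
    return n
```

    On this toy metric, a pair sharing its beginning (blackbird/blackboard, common prefix of 6 letters) is distinguished from a pair sharing only its ending, mirroring the rating asymmetry reported above.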

  9. Resting-state functional connectivity of orthographic networks in acquired dysgraphia

    Directory of Open Access Journals (Sweden)

    Gali Ellenblum

    2015-05-01

    The NTA findings indicate that the relationship between orthographic and default-mode networks is characterized by greater within- vs. across-network connectivity. Furthermore, we show for the first time a pattern of increasing within/across network “coherence normalization” following spelling rehabilitation. Additional dysgraphic participants and other networks (language, sensory-motor, etc.) will be analyzed to develop a better understanding of the RS orthographic network and its response to damage and recovery. Acknowledgements. The work is part of a multi-site, NIDCD-supported project examining language recovery neurobiology in aphasia (DC006740). We thank Melissa Greenberger and Xiao-Wei Song.

  10. Narrative and orthographic writing abilities in Elementary School students: characteristics and correlations.

    Science.gov (United States)

    Bigarelli, Juliana Faleiros Paolucci; Ávila, Clara Regina Brandão de

    2011-09-01

    To characterize, according to school grade and type of school (private or public), the orthographic and narrative text production performance in the writing of Elementary School students with good academic performance, and to investigate the relationships between these variables. Participants were 160 children aged 8 to 12 years, enrolled in the 4th to 7th grades of Elementary School. Their written production was assessed using word and pseudoword dictation and the autonomous writing of a narrative text. Public school students made a higher number of errors in the word and pseudoword dictation, improving with education level. The occurrence of complete and incomplete utterances was similar in public and private schools; however, 4th graders produced more incomplete utterances than the other students. A higher number of overall microstructure and macrostructure productions occurred among private school students. The essential macrostructures were most frequently found in the later school grades. The higher the total number of words in the autonomous written production, the higher the occurrence of linguistic variables and the better the narrative competence. There was a weak negative correlation between the number of misspelled words and the total number of events in text production. Positive and negative correlations (from weak to good) were observed between different orthographic, linguistic, and narrative production variables in both private and public schools. Private school students present better orthographic and narrative performance than public school students. Schooling progression influences performance in word writing and text production tasks, and orthographic abilities influence the quality of textual production. Different writing abilities, such as orthographic performance and use of linguistic elements and narrative structures, are mutually influenced in writing production.

  11. Neural bases of orthographic long-term memory and working memory in dysgraphia.

    Science.gov (United States)

    Rapp, Brenda; Purcell, Jeremy; Hillis, Argye E; Capasso, Rita; Miceli, Gabriele

    2016-02-01

    Spelling a word involves the retrieval of information about the word's letters and their order from long-term memory, as well as the maintenance and processing of this information by working memory in preparation for serial production by the motor system. While it is known that brain lesions may selectively affect orthographic long-term memory and working memory processes, relatively little is known about the neurotopographic distribution of the substrates that support these cognitive processes, or about the lesions that give rise to the distinct forms of dysgraphia that affect them. To examine these issues, this study uses a voxel-based mapping approach to analyse the lesion distribution of 27 individuals with dysgraphia subsequent to stroke, identified on the basis of their behavioural profiles alone as suffering from deficits affecting only orthographic long-term memory or only orthographic working memory, along with six other individuals with deficits affecting both sets of processes. The findings provide, for the first time, clear evidence of substrates that selectively support orthographic long-term and working memory processes, with orthographic long-term memory deficits centred in either the left posterior inferior frontal region or left ventral temporal cortex, and orthographic working memory deficits primarily arising from lesions of the left parietal cortex centred on the intraparietal sulcus. These findings also contribute to our understanding of the relationship between the neural instantiation of written language processes and spoken language, working memory, and other cognitive skills.

  12. Analysis of Pseudohomophone Orthographic Errors through Functional Magnetic Resonance Imaging (fMRI).

    Science.gov (United States)

    Guardia-Olmos, Joan; Zarabozo-Hurtado, Daniel; Peró-Cebollero, Maribe; Gudayol-Farré, Esteban; Gómez-Velázquez, Fabiola R; González-Garrido, Andrés

    2017-12-04

    The study of orthographic errors in a transparent language such as Spanish is an important topic in relation to writing acquisition, because in Spanish it is common to write pseudohomophones as valid words. The main objective of the present study was to explore possible differences in activation patterns in brain areas while processing pseudohomophone orthographic errors between participants with high (High Spelling Skills (HSS)) and low (Low Spelling Skills (LSS)) spelling abilities. We hypothesized that (a) the detection of orthographic errors would activate the bilateral inferior frontal gyri, and that (b) this effect would be greater in the HSS group. Two groups of 12 Mexican participants each, matched by age, were formed based on their results in a set of spelling-related ad hoc tests: the HSS and LSS groups. During the fMRI session, two experimental tasks involving correct spellings and pseudohomophone substitutions of Spanish words were applied: first, a spelling recognition task, and second, a letter-searching task. The LSS group showed, as expected, a lower number of correct responses (F(1, 21) = 52.72), and greater activation of the right inferior frontal gyrus was observed in the HSS group during the spelling task. However, temporal, frontal, and subcortical brain regions of the LSS group were activated during the same task.

  13. Orthographic Transparency Modulates the Functional Asymmetry in the Fusiform Cortex: An Artificial Language Training Study

    Science.gov (United States)

    Mei, Leilei; Xue, Gui; Lu, Zhong-Lin; He, Qinghua; Zhang, Mingxia; Xue, Feng; Chen, Chuansheng; Dong, Qi

    2013-01-01

    The laterality difference in the occipitotemporal region between Chinese (bilaterality) and alphabetic languages (left laterality) has been attributed to their difference in visual appearance. However, these languages also differ in orthographic transparency. To disentangle the effect of orthographic transparency from visual appearance, we trained…

  14. Software Abstractions and Methodologies for HPC Simulation Codes on Future Architectures

    Directory of Open Access Journals (Sweden)

    Anshu Dubey

    2014-07-01

    Simulations with multi-physics modeling have become crucial to many science and engineering fields, and multi-physics-capable scientific software is as important to these fields as instruments and facilities are to the experimental sciences. The current generation of mature multi-physics codes would have sustainably served their target communities with a modest amount of ongoing investment in enhancing capabilities. However, the revolution occurring in hardware architecture has made it necessary to tackle parallelism and performance management in these codes at multiple levels. The requirements of the various levels are often at cross-purposes with one another, and therefore hugely complicate the software design. All of these considerations make it essential to approach this challenge cooperatively as a community. We conducted a series of workshops under an NSF-SI2 conceptualization grant to get input from various stakeholders and to identify broad approaches that might lead to a solution. In this position paper we detail the major concerns articulated by the application code developers, as well as the emerging trends in the utilization of programming abstractions that we found through these workshops.

  15. Early processing of orthographic language membership information in bilingual visual word recognition: Evidence from ERPs.

    Science.gov (United States)

    Hoversten, Liv J; Brothers, Trevor; Swaab, Tamara Y; Traxler, Matthew J

    2017-08-01

    For successful language comprehension, bilinguals often must exert top-down control to access and select lexical representations within a single language. These control processes may critically depend on identification of the language to which a word belongs, but it is currently unclear when different sources of such language membership information become available during word recognition. In the present study, we used event-related potentials to investigate the time course of influence of orthographic language membership cues. Using an oddball detection paradigm, we observed early neural effects of orthographic bias (Spanish vs. English orthography) that preceded effects of lexicality (word vs. pseudoword). This early orthographic pop-out effect was observed for both words and pseudowords, suggesting that this cue is available prior to full lexical access. We discuss the role of orthographic bias for models of bilingual word recognition and its potential role in the suppression of nontarget lexical information.

  16. Orthographic and Semantic Processing in Young Readers

    Science.gov (United States)

    Polse, Lara R.; Reilly, Judy S.

    2015-01-01

    This investigation examined orthographic and semantic processing during reading acquisition. Children in first to fourth grade were presented with a target word and two response alternatives, and were asked to identify the semantic match. Words were presented in four conditions: an exact match and unrelated foil (STONE-STONE-EARS), an exact match…

  17. Dutch dyslexic adolescents: phonological-core variable-orthographic differences

    NARCIS (Netherlands)

    Bekebrede, J.; van der Leij, A.; Share, D.L.

    2009-01-01

    The phonological-core variable-orthographic differences (PCVOD) model [van der Leij, & Morfidi (2006). Journal of Learning Disabilities, 39, 74-90] has been proposed as an explanation for the heterogeneity among dyslexic readers in their profiles of reading-related subskills. The predictions of this

  18. Identifying the Unique Role of Orthographic Working Memory in a Componential Model of Hong Kong Kindergarteners' Chinese Written Spelling

    Science.gov (United States)

    Mo, Jianhong; McBride, Catherine; Yip, Laiying

    2018-01-01

    We sought to test a componential model of Chinese written spelling, including the role of orthographic working memory (OWM), among Hong Kong kindergartners. One hundred seventeen kindergartners were recruited. OWM was measured using a visual orthographic judgment and a delayed copying task. Orthographic knowledge, semantic knowledge, and…

  19. Orthographic vs. Phonologic Syllables in Handwriting Production

    Science.gov (United States)

    Kandel, Sonia; Herault, Lucie; Grosjacques, Geraldine; Lambert, Eric; Fayol, Michel

    2009-01-01

    French children program the words they write syllable by syllable. We examined whether the syllable the children use to segment words is determined phonologically (i.e., is derived from speech production processes) or orthographically. Third, 4th and 5th graders wrote on a digitiser words that were mono-syllables phonologically (e.g.…

  20. Children Develop Initial Orthographic Knowledge during Storybook Reading

    Science.gov (United States)

    Apel, Kenn; Brimo, Danielle; Wilson-Fowler, Elizabeth B.; Vorstius, Christian; Radach, Ralph

    2013-01-01

    We examined whether young children acquire orthographic knowledge during structured adult-led storybook reading even though minimal viewing time is devoted to print. Sixty-two kindergarten children were read 12 storybook "chapters" while their eye movements were tracked. Results indicated that the children quickly acquired initial mental…

  1. Implicit learning out of the lab: the case of orthographic regularities.

    Science.gov (United States)

    Pacton, S; Perruchet, P; Fayol, M; Cleeremans, A

    2001-09-01

    Children's (Grades 1 to 5) implicit learning of French orthographic regularities was investigated through nonword judgment (Experiments 1 and 2) and completion (Experiments 3a and 3b) tasks. Children were increasingly sensitive to (a) the frequency of double consonants (Experiments 1, 2, and 3a), (b) the fact that vowels can never be doubled (Experiment 2), and (c) the legal position of double consonants (Experiments 2 and 3b). The latter effect transferred to never doubled consonants but with a decrement in performance. Moreover, this decrement persisted without any trend toward fading, even after the massive amounts of experience provided by years of practice. This result runs against the idea that transfer to novel material is indicative of abstract rule-based knowledge and suggests instead the action of mechanisms sensitive to the statistical properties of the material. A connectionist model is proposed as an instantiation of such mechanisms.
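
    The statistical-learning account sketched above implies that the regularities children pick up (consonants double, vowels do not, and doubling is word-medial) are recoverable from simple frequency counts over the lexicon. A minimal illustration, using a tiny made-up French-like word list rather than the study's materials:

```python
from collections import Counter

VOWELS = set("aeiou")

def doubling_stats(lexicon):
    """Tally which letters appear doubled, and where, in a word list."""
    letters, positions = Counter(), Counter()
    for word in lexicon:
        for i in range(len(word) - 1):
            if word[i] == word[i + 1]:
                letters[word[i]] += 1
                if i == 0:
                    positions["initial"] += 1
                elif i + 2 == len(word):
                    positions["final"] += 1
                else:
                    positions["medial"] += 1
    return letters, positions

# Toy French-like lexicon (illustrative only, not the study's materials)
lexicon = ["ballon", "pomme", "terre", "sonnette", "allumer", "arriver"]
letters, positions = doubling_stats(lexicon)
```

    On this toy list every doubling is a word-medial consonant, so a learner tracking only such co-occurrence statistics would acquire all three regularities without any explicit rule.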

  2. Orthographic learning and self-teaching in a bilingual and biliterate context.

    Science.gov (United States)

    Schwartz, Mila; Kahn-Horwitz, Janina; Share, David L

    2014-01-01

    The aim of this study was to examine self-teaching in the context of English as a foreign language literacy acquisition. Three groups comprising 88 sixth-grade children participated. The first group consisted of Russian-Hebrew-speaking bilinguals who had acquired basic reading skills in Russian as their first language (L1) and who were also literate in Hebrew as a second language. The second group consisted of Russian-Hebrew-speaking bilinguals who had not learned to read in their native Russian but had acquired Hebrew as their first literate language. The third group consisted of Hebrew-speaking monolingual children who were literate in Hebrew. This design facilitated examining the effect of biliteracy and bilingualism on basic English reading skills. We hypothesized that, due to the proximity between the Russian and English orthographies as opposed to the Hebrew-English "distance," the Russian-Hebrew-speaking biliterate group who had acquired basic reading and spelling skills in L1 Russian would show superior self-teaching in English compared to the two other groups. The standard two-session self-teaching paradigm was employed, with naming (speed and accuracy) and orthographic choice as posttest measures of orthographic learning. Results showed that after 4 years of English instruction, all three groups showed evidence of self-teaching on naming speed and orthographic recognition. The Russian-Hebrew-speaking biliterate group, moreover, showed a partial advantage over the comparison groups for initial decoding of target pseudowords and clear-cut superiority on measures of later orthographic learning, thereby demonstrating self-teaching while supporting the script dependence hypothesis.

  3. Acquiring Orthographic Processing through Word Reading: Evidence from Children Learning to Read French and English

    Science.gov (United States)

    Pasquarella, Adrian; Deacon, Helene; Chen, Becky X.; Commissaire, Eva; Au-Yeung, Karen

    2014-01-01

    This study examined the within-language and cross-language relationships between orthographic processing and word reading in French and English across Grades 1 and 2. Seventy-three children in French Immersion completed measures of orthographic processing and word reading in French and English in Grade 1 and Grade 2, as well as a series of control…

  4. Dutch Dyslexic Adolescents: Phonological-Core Variable-Orthographic Differences

    Science.gov (United States)

    Bekebrede, Judith; van der Leij, Aryan; Share, David L.

    2009-01-01

    The phonological-core variable-orthographic differences (PCVOD) model [van der Leij, & Morfidi (2006). "Journal of Learning Disabilities," 39, 74-90] has been proposed as an explanation for the heterogeneity among dyslexic readers in their profiles of reading-related subskills. The predictions of this model were investigated in a…

  5. [Phonological and orthographic processes of reading and spelling in young adolescents and adults with and without dyslexia in German and English: impact on foreign language learning].

    Science.gov (United States)

    Romonath, Roswitha; Wahn, Claudia; Gregg, Noel

    2005-01-01

    The present study addressed the question of whether there is a relationship between phonological and orthographic processes of reading and spelling in adolescents and young adults with and without dyslexia in German and English. Based on the Linguistic Coding Differences Hypothesis and recent research on foreign language learning, we tested the hypothesis that phonological and orthographic knowledge, on the one hand, is related to decoding and spelling performance, on the other, when German adolescents and young adults read and spell German and English words. This hypothesis was tested using structural equation modeling, for which the research population was divided into the following groups: group 1 with dyslexia in reading (n = 93), group 2 with dyslexia in spelling (n = 93), group 3 without dyslexia in reading (n = 95), and group 4 without dyslexia in spelling (n = 95). Data analysis showed that the postulated prediction model fits the data of the dyslexia group for reading and spelling, but not the control group; a combined model for both groups also failed to fit. The results of this pilot study show that it is necessary to modify the diagnostic instruments of measurement and to separate the scales of phonological and orthographic processes.

  6. Argonne Code Center: compilation of program abstracts

    Energy Technology Data Exchange (ETDEWEB)

    Butler, M.K.; DeBruler, M.; Edwards, H.S.

    1976-08-01

    This publication is the tenth supplement to, and revision of, ANL-7411. It contains additional abstracts and revisions to some earlier abstracts and other pages. Sections of the document are as follows: preface; history and acknowledgements; abstract format; recommended program package contents; program classification guide and thesaurus; and abstract collection. (RWR)

  7. Argonne Code Center: compilation of program abstracts

    International Nuclear Information System (INIS)

    Butler, M.K.; DeBruler, M.; Edwards, H.S.

    1976-08-01

    This publication is the tenth supplement to, and revision of, ANL-7411. It contains additional abstracts and revisions to some earlier abstracts and other pages. Sections of the document are as follows: preface; history and acknowledgements; abstract format; recommended program package contents; program classification guide and thesaurus; and abstract collection

  8. Code of conduct for scientists (abstract)

    International Nuclear Information System (INIS)

    Khurshid, S.J.

    2011-01-01

    The emergence of advanced technologies in the last three decades and extraordinary progress in our knowledge of the basic physical, chemical, and biological properties of living matter have offered tremendous benefits to human beings, but have simultaneously highlighted the need for greater awareness and responsibility on the part of the scientists of the 21st century. A scientist is not born with ethics, nor is science ethically neutral, but there are ethical dimensions to scientific work. There is a need to evolve an appropriate code of conduct for scientists working in every field of science. However, when considering the contents, promulgation, and adoption of codes of conduct for scientists, a balance needs to be maintained between the freedom of scientists and, at the same time, some binding on them in the form of codes of conduct. The use of good and safe laboratory procedures, whether codified by law or by common practice, must also be considered part of the moral duties of scientists. It is internationally agreed that a general code of conduct cannot be formulated universally for all scientists, but there should be a set of 'building blocks' aimed at establishing a code of conduct for scientists, whether as individual researchers or as those responsible for the direction, evaluation, and monitoring of scientific activities at the institutional or organizational level. (author)

  9. Argonne Code Center: compilation of program abstracts

    Energy Technology Data Exchange (ETDEWEB)

    Butler, M.K.; DeBruler, M.; Edwards, H.S.; Harrison, C. Jr.; Hughes, C.E.; Jorgensen, R.; Legan, M.; Menozzi, T.; Ranzini, L.; Strecok, A.J.

    1977-08-01

    This publication is the eleventh supplement to, and revision of, ANL-7411. It contains additional abstracts and revisions to some earlier abstracts and other pages. Sections of the complete document ANL-7411 are as follows: preface, history and acknowledgements, abstract format, recommended program package contents, program classification guide and thesaurus, and the abstract collection. (RWR)

  10. Argonne Code Center: compilation of program abstracts

    International Nuclear Information System (INIS)

    Butler, M.K.; DeBruler, M.; Edwards, H.S.; Harrison, C. Jr.; Hughes, C.E.; Jorgensen, R.; Legan, M.; Menozzi, T.; Ranzini, L.; Strecok, A.J.

    1977-08-01

    This publication is the eleventh supplement to, and revision of, ANL-7411. It contains additional abstracts and revisions to some earlier abstracts and other pages. Sections of the complete document ANL-7411 are as follows: preface, history and acknowledgements, abstract format, recommended program package contents, program classification guide and thesaurus, and the abstract collection

  11. What Spelling Tells Us about the Orthographic Development and Word Study Instruction with Emergent Bilingual Secondary Students

    Science.gov (United States)

    Kiernan, Darl; Bear, Donald R.

    2018-01-01

    Educators need ways to assess orthographic knowledge and differentiate word study instruction for secondary, emergent bilingual learners. In this study, the spelling of 199 students in grades 7-12 across eight features and four spelling stages was examined to understand students' orthographic development; all but two were learning Spanish and…

  12. Does Kaniso activate CASINO?: input coding schemes and phonology in visual-word recognition.

    Science.gov (United States)

    Acha, Joana; Perea, Manuel

    2010-01-01

    Most recent input coding schemes in visual-word recognition assume that letter position coding is orthographic rather than phonological in nature (e.g., SOLAR, open-bigram, SERIOL, and overlap). This assumption has been based, in part, on the fact that the transposed-letter effect (e.g., caniso activates CASINO) seems to be (mostly) insensitive to phonological manipulations (e.g., Perea & Carreiras, 2006, 2008; Perea & Pérez, 2009). However, one could argue that the lack of a phonological effect in prior research was due to the fact that the manipulation always occurred in internal letter positions - note that phonological effects tend to be stronger for the initial syllable (Carreiras, Ferrand, Grainger, & Perea, 2005). To reexamine this issue, we conducted a masked priming lexical decision experiment in which we compared the priming effect for transposed-letter pairs (e.g., caniso-CASINO vs. caviro-CASINO) and for pseudohomophone transposed-letter pairs (kaniso-CASINO vs. kaviro-CASINO). Results showed a transposed-letter priming effect for the correctly spelled pairs, but not for the pseudohomophone pairs. This is consistent with the view that letter position coding is (primarily) orthographic in nature.
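
    Open-bigram coding, one of the schemes named above, is easy to sketch: a word is represented by its set of ordered letter pairs, and prime-target similarity is the proportion of the target's bigrams that the prime preserves. The unconstrained variant below is an illustration only (published models such as SERIOL restrict bigram distance and weight pairs by gap), but it shows why caniso is a much better orthographic match to CASINO than the control caviro:

```python
from itertools import combinations

def open_bigrams(word):
    """All ordered letter pairs of a word, e.g. 'cat' -> {'ca', 'ct', 'at'}."""
    return {word[i] + word[j] for i, j in combinations(range(len(word)), 2)}

def bigram_overlap(prime, target):
    """Proportion of the target's open bigrams preserved by the prime."""
    target_bigrams = open_bigrams(target)
    return len(open_bigrams(prime) & target_bigrams) / len(target_bigrams)

# Transposing two internal letters preserves most ordered pairs;
# replacing those letters does not.
tl_prime = bigram_overlap("caniso", "casino")    # transposed-letter prime
ctrl_prime = bigram_overlap("caviro", "casino")  # replacement-letter control
```

    Here the transposed-letter prime preserves 12 of CASINO's 15 open bigrams (0.8), while the replacement-letter control preserves only 6 (0.4), capturing the transposed-letter similarity effect in purely orthographic terms.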

  13. Dual Coding Theory, Word Abstractness, and Emotion: A Critical Review of Kousta et al. (2011)

    Science.gov (United States)

    Paivio, Allan

    2013-01-01

    Kousta, Vigliocco, Del Campo, Vinson, and Andrews (2011) questioned the adequacy of dual coding theory and the context availability model as explanations of representational and processing differences between concrete and abstract words. They proposed an alternative approach that focuses on the role of emotional content in the processing of…

  14. Orthographic Reading Deficits in Dyslexic Japanese Children: Examining the Transposed-Letter Effect in the Color-Word Stroop Paradigm.

    Science.gov (United States)

    Ogawa, Shino; Shibasaki, Masahiro; Isomura, Tomoko; Masataka, Nobuo

    2016-01-01

    In orthographic reading, the transposed-letter effect (TLE) is the perception of a transposed-letter word such as "cholocate" as the correct word "chocolate." Although previous studies of dyslexic children reading alphabetic languages have reported such orthographic reading deficits, the extent of orthographic reading impairment in dyslexic Japanese children has remained unknown. This study examined the TLE in dyslexic Japanese children using the color-word Stroop paradigm, comprising congruent and incongruent Japanese hiragana words with correct and transposed letter positions. We found that typically developing children exhibited Stroop effects for Japanese hiragana words in both correct and transposed letter positions, indicating the presence of the TLE. In contrast, dyslexic children showed Stroop effects for Japanese words in correct letter positions but not in transposed ones, indicating an absence of the TLE. These results suggest that dyslexic Japanese children, like dyslexic children reading alphabetic languages, may also have a problem with orthographic reading.

  15. Differentiation of perceptual and semantic subsequent memory effects using an orthographic paradigm.

    Science.gov (United States)

    Kuo, Michael C C; Liu, Karen P Y; Ting, Kin Hung; Chan, Chetwyn C H

    2012-11-27

    This study aimed to differentiate perceptual and semantic encoding processes using subsequent memory effects (SMEs) elicited by the recognition of orthographs of single Chinese characters. Participants studied a series of Chinese characters perceptually (by inspecting orthographic components) or semantically (by determining the object making sounds), and then made studied or unstudied judgments during the recognition phase. Recognition performance in terms of the d-prime measure was higher in the semantic condition, though not significantly, than in the perceptual condition. The differences in SMEs between the perceptual and semantic conditions at the P550 and late positive component (700-1000 ms) latencies were not significant in the frontal area. An additional analysis identified a larger SME in the semantic condition during 600-1000 ms in the frontal pole regions. These results indicate that coordination and incorporation of orthographic information into a mental representation is essential to both task conditions. A differentiation was also revealed in earlier SMEs (perceptual > semantic) at the N3 (240-360 ms) latency, which is a novel finding. The left-distributed N3 was interpreted as reflecting more efficient processing of meaning with semantically learned characters. Frontal pole SMEs indicated strategic processing by executive functions, which would further enhance memory.

  16. Effective Instruction for Persisting Dyslexia in Upper Grades: Adding Hope Stories and Computer Coding to Explicit Literacy Instruction.

    Science.gov (United States)

    Thompson, Robert; Tanimoto, Steve; Lyman, Ruby Dawn; Geselowitz, Kira; Begay, Kristin Kawena; Nielsen, Kathleen; Nagy, William; Abbott, Robert; Raskind, Marshall; Berninger, Virginia

    2018-05-01

    Children in grades 4 to 6 (N = 14) who, despite early intervention, had persisting dyslexia (impaired word reading and spelling) were assessed before and after computerized reading and writing instruction aimed at subword, word, and syntax skills shown in four prior studies to be effective for treating dyslexia. During the 12 two-hour sessions, held once a week after school, they first completed HAWK Letters in Motion© for manuscript and cursive handwriting, HAWK Words in Motion© for phonological, orthographic, and morphological coding for word reading and spelling, and HAWK Minds in Motion© for sentence reading comprehension and written sentence composing. A reading comprehension activity in which sentences were presented one word at a time or one added word at a time was introduced. Next, to instill hope that they could overcome their struggles with reading and spelling, they read and discussed stories about the struggles of Buckminster Fuller, who overcame early disabilities to make important contributions to society. Finally, they engaged in the new Kokopelli's World (KW)©, blocks-based online lessons, to learn computer coding in introductory programming by creating stories in sentence blocks (Tanimoto and Thompson 2016). Participants improved significantly in the hallmark word decoding and spelling deficits of dyslexia, three syntax skills (oral construction, listening comprehension, and written composing), reading comprehension (with decoding as covariate), handwriting, orthographic and morphological coding, the orthographic loop, and inhibition (focused attention). They answered more reading comprehension questions correctly when they had read sentences presented one word at a time (eliminating both regressions out and regressions in during saccades) than when presented one added word at a time (eliminating only regressions out during saccades). Indicators of improved self-efficacy that they could learn to read and write were observed. Reminders to pay attention and stay on task

  17. Nonword repetition in adults who stutter: The effects of stimuli stress and auditory-orthographic cues.

    Directory of Open Access Journals (Sweden)

    Geoffrey A Coalson

    Adults who stutter (AWS) are less accurate in their immediate repetition of novel phonological sequences compared to adults who do not stutter (AWNS). The present study examined whether manipulation of the following two aspects of traditional nonword repetition tasks unmasks distinct weaknesses in phonological working memory in AWS: (1) presentation of stimuli with less-frequent stress patterns, and (2) removal of auditory-orthographic cues immediately prior to response. Fifty-two participants (26 AWS, 26 AWNS) produced 12 bisyllabic nonwords in the presence of corresponding auditory-orthographic cues (i.e., an immediate repetition task) and in the absence of auditory-orthographic cues (i.e., a short-term recall task). Half of each cohort (13 AWS, 13 AWNS) were exposed to the stimuli with high-frequency trochaic stress, and half (13 AWS, 13 AWNS) were exposed to identical stimuli with lower-frequency iambic stress. No differences in immediate repetition accuracy for trochaic or iambic nonwords were observed for either group. However, AWS were less accurate when recalling iambic nonwords than trochaic nonwords in the absence of auditory-orthographic cues. Manipulation of two factors which may minimize phonological demand during standard nonword repetition tasks increased the number of errors in AWS compared to AWNS. These findings suggest greater vulnerability in phonological working memory in AWS, even when producing nonwords as short as two syllables.

  18. Error-related negativities during spelling judgments expose orthographic knowledge.

    Science.gov (United States)

    Harris, Lindsay N; Perfetti, Charles A; Rickles, Benjamin

    2014-02-01

    In two experiments, we demonstrate that error-related negativities (ERNs) recorded during spelling decisions can expose individual differences in lexical knowledge. The first experiment found that the ERN was elicited during spelling decisions and that its magnitude was correlated with independent measures of subjects' spelling knowledge. In the second experiment, we manipulated the phonology of misspelled stimuli and observed that ERN magnitudes were larger when misspelled words altered the phonology of their correctly spelled counterparts than when they preserved it. Thus, when an error is made in a decision about spelling, the brain processes indexed by the ERN reflect both phonological and orthographic input to the decision process. In both experiments, ERN effect sizes were correlated with assessments of lexical knowledge and reading, including offline spelling ability and spelling-mediated vocabulary knowledge. These results affirm the interdependent nature of orthographic, semantic, and phonological knowledge components while showing that spelling knowledge uniquely influences the ERN during spelling decisions. Finally, the study demonstrates the value of ERNs in exposing individual differences in lexical knowledge. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. The Effects of Orthographic Pattern Intervention on Spelling Performance of Students with Reading Disabilities: A Best Evidence Synthesis

    Science.gov (United States)

    Squires, Katie E.; Wolter, Julie A.

    2016-01-01

    Although the orthographic processing skill of recognizing and producing letters and letter patterns has been established as an important skill for developing spelling, a majority of the research focus has been on early orthographic intervention that did not progress beyond the unit of the letter. The purpose of this article is to provide a best…

  20. Automatization and Orthographic Development in Second Language Visual Word Recognition

    Science.gov (United States)

    Kida, Shusaku

    2016-01-01

    The present study investigated second language (L2) learners' acquisition of automatic word recognition and the development of L2 orthographic representation in the mental lexicon. Participants in the study were Japanese university students enrolled in a compulsory course involving a weekly 30-minute sustained silent reading (SSR) activity with…

  1. Orthographic errors in written Tshivenḓa on funeral programmes of ...

    African Journals Online (AJOL)

    This article seeks to analyse a sample of twenty-five programmes collected from different undertakers over the period January 2014 to June 2015 with the aim of identifying orthographic mistakes and suggesting ways of correcting them, using the Rules of Orthography and Pronunciation for Tshivenḓa by the Pan South ...

  2. Locus of word frequency effects in spelling to dictation: Still at the orthographic level!

    Science.gov (United States)

    Bonin, Patrick; Laroche, Betty; Perret, Cyril

    2016-11-01

    The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied on objective word frequency and on phonological neighborhood density were orally presented to adults who had to write them down. Following the additive factors logic (Sternberg, 1969, 2001), if word frequency in spelling to dictation influences a processing level, that is, the orthographic output level, different from that influenced by phonological neighborhood density, that is, spoken word recognition, the impact of the 2 factors should be additive. In contrast, their influence should be overadditive if they act at the same processing level in spelling to dictation, namely the spoken word recognition level. We found that both factors had a reliable influence on the spelling latencies but did not interact. This finding is in line with an orthographic output locus hypothesis of word frequency effects in spelling to dictation. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
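The additive-factors reasoning in this abstract can be made concrete with a toy difference-of-differences computation on hypothetical 2 × 2 cell means (all latencies below are invented for illustration and are not the study's data):

```python
# Toy illustration of Sternberg's additive factors logic for a
# 2 (word frequency) x 2 (phonological neighborhood density) design.

def interaction_contrast(cells):
    """Difference-of-differences for a 2x2 design:
    (lowB1 - lowB2) - (highB1 - highB2). A value of zero means the two
    factors are additive, consistent with their acting at different
    processing stages; a nonzero value indicates an interaction."""
    return (cells[("low", "dense")] - cells[("low", "sparse")]) \
         - (cells[("high", "dense")] - cells[("high", "sparse")])

# Additive pattern: frequency adds 80 ms and density adds 30 ms, independently.
additive = {
    ("high", "sparse"): 900, ("high", "dense"): 930,
    ("low", "sparse"): 980,  ("low", "dense"): 1010,
}
# Overadditive pattern: the frequency effect is larger in dense neighborhoods.
overadditive = {
    ("high", "sparse"): 900, ("high", "dense"): 930,
    ("low", "sparse"): 980,  ("low", "dense"): 1060,
}

print(interaction_contrast(additive))      # 0 -> additive effects
print(interaction_contrast(overadditive))  # 50 -> interaction
```

The null interaction found in the study corresponds to the first pattern, supporting distinct loci for the two effects.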

  3. Lateralized effects of orthographical irregularity and auditory memory load on the kinematics of transcription typewriting.

    Science.gov (United States)

    Bloemsaat, Gijs; Van Galen, Gerard P; Meulenbroek, Ruud G J

    2003-05-01

    This study investigated the combined effects of orthographical irregularity and auditory memory load on the kinematics of finger movements in a transcription-typewriting task. Eight right-handed touch-typists were asked to type 80 strings of ten seven-letter words. In half the trials an irregularly spelt target word elicited a specific key press sequence of either the left or right index finger. In the other trials regularly spelt target words elicited the same key press sequence. An auditory memory load was added in half the trials by asking participants to remember the pitch of a tone during task performance. Orthographical irregularity was expected to slow down performance. Auditory memory load, viewed as a low level stressor, was expected to affect performance only when orthographically irregular words needed to be typed. The hypotheses were confirmed. Additional analysis showed differential effects on the left and right hand, possibly related to verbal-manual interference and hand dominance. The results are discussed in relation to relevant findings of recent neuroimaging studies.

  4. Discrimination of English and French Orthographic Patterns by Biliterate Children

    Science.gov (United States)

    Jared, Debra; Cormier, Pierre; Levy, Betty Ann; Wade-Woolley, Lesly

    2013-01-01

    We investigated whether young English-French biliterate children can distinguish between English and French orthographic patterns. Children in French immersion programs were asked to play a dictionary game when they were in Grade 2 and again when they were in Grade 3. They were shown pseudowords that contained either an English spelling pattern or…

  5. Integration of orthographic, conceptual, and episodic information on implicit and explicit tests.

    Science.gov (United States)

    Weldon, M S; Massaro, D W

    1996-03-01

    An experiment was conducted to determine how orthographic and conceptual information are integrated during incidental and intentional retrieval. Subjects studied word lists with either a shallow (counting vowels) or deep (rating pleasantness) processing task, then received either an implicit or explicit word fragment completion (WFC) test. At test, word fragments contained 0, 1, 2, or 4 letters, and were accompanied by 0, 1, 2, or 3 semantically related words. On both the implicit and explicit tests, performance improved with increases in the numbers of letters and words. When semantic cues were presented with the word fragments, the implicit test became more conceptually driven. Still, conceptual processing had a larger effect in intentional than in incidental retrieval. The Fuzzy Logical Model of Perception (FLMP) provided a good description of how orthographic, semantic, and episodic information were combined during retrieval.
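The FLMP evaluation rule referenced here has a compact standard form: independent sources of support are combined multiplicatively, then normalized against the complementary alternative. A minimal sketch (the support values are invented):

```python
from math import prod

def flmp(supports):
    """Fuzzy Logical Model of Perception: each source contributes a truth
    value in [0, 1] supporting one alternative. Sources are combined
    multiplicatively and normalized against the complementary alternative."""
    match = prod(supports)
    mismatch = prod(1.0 - s for s in supports)
    return match / (match + mismatch)

# Two ambivalent cues leave the decision at chance...
print(flmp([0.5, 0.5]))            # 0.5
# ...while two moderately supportive cues combine super-additively.
print(round(flmp([0.8, 0.8]), 3))  # 0.941
```

This multiplicative integration is what allows the model to describe how letter, semantic, and episodic cues jointly raise completion rates.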

  6. The Role of Orthographic Neighborhood Size Effects in Chinese Word Recognition

    Science.gov (United States)

    Li, Meng-Feng; Lin, Wei-Chun; Chou, Tai-Li; Yang, Fu-Ling; Wu, Jei-Tun

    2015-01-01

    Previous studies about the orthographic neighborhood size (NS) in Chinese have overlooked the morphological processing, and the co-variation between the character frequency and the the NS. The present study manipulated the word frequency and the NS simultaneously, with the leading character frequency controlled, to explore their influences on word…

  7. Relative Ease in Creating Detailed Orthographic Representations Contrasted with Severe Difficulties to Maintain Them in Long-term Memory Among Dyslexic Children.

    Science.gov (United States)

    Binamé, Florence; Danzio, Sophie; Poncelet, Martine

    2015-11-01

    Most research into orthographic learning abilities has been conducted in English with typically developing children using reading-based tasks. In the present study, we examined the abilities of French-speaking children with dyslexia to create novel orthographic representations for subsequent use in spelling and to maintain them in long-term memory. Their performance was compared with that of chronological age (CA)-matched and reading age (RA)-matched control children. We used an experimental task designed to provide optimal learning conditions (i.e. 10 spelling practice trials) ensuring the short-term acquisition of the spelling of the target orthographic word forms. After a 1-week delay, the long-term retention of the targets was assessed by a spelling post-test. Analysis of the results revealed that, in the short term, children with dyslexia learned the novel orthographic word forms well, only differing from both CA and RA controls on the initial decoding of the targets and from CA controls on the first two practice trials. In contrast, a dramatic drop was observed in their long-term retention relative to CA and RA controls. These results support the suggestion of the self-teaching hypothesis (Share, 1995) that initial errors in the decoding and spelling of unfamiliar words may hinder the establishment of fully specified novel orthographic representations. Copyright © 2015 John Wiley & Sons, Ltd.

  8. Phonological coding during reading.

    Science.gov (United States)

    Leinenger, Mallorie

    2014-11-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  9. Artificial Grammar Learning in Dyslexic and Nondyslexic Adults: Implications for Orthographic Learning

    Science.gov (United States)

    Samara, Anna; Caravolas, Markéta

    2017-01-01

    Potential implicit orthographic learning deficits were investigated in adults with dyslexia. An artificial grammar learning paradigm served to assess dyslexic and typical readers' ability to exploit information about chunk frequency, letter-position patterns, and specific string similarity, all of which have analogous constructs in real…

  10. Letter position coding across modalities: braille and sighted reading of sentences with jumbled words.

    Science.gov (United States)

    Perea, Manuel; Jiménez, María; Martín-Suesta, Miguel; Gómez, Pablo

    2015-04-01

    This article explores how letter position coding is attained during braille reading and its implications for models of word recognition. When text is presented visually, the reading process easily adjusts to the jumbling of some letters (jugde-judge), with a small cost in reading speed. Two explanations have been proposed: One relies on a general mechanism of perceptual uncertainty at the visual level, and the other focuses on the activation of an abstract level of representation (i.e., bigrams) that is shared by all orthographic codes. Thus, these explanations make differential predictions about reading in a tactile modality. In the present study, congenitally blind readers read sentences presented on a braille display that tracked the finger position. The sentences either were intact or involved letter transpositions. A parallel experiment was conducted in the visual modality. Results revealed a substantially greater reading cost for the sentences with transposed-letter words in braille readers. In contrast with the findings with sighted readers, in which there is a cost of transpositions in the external (initial and final) letters, the reading cost in braille readers occurs serially, with a large cost for initial letter transpositions. Thus, these data suggest that the letter-position-related effects in visual word recognition are due to the characteristics of the visual stream.

  11. Evidence for cross-script abstract identities in learners of Japanese kana.

    Science.gov (United States)

    Schubert, Teresa; Gawthrop, Roderick; Kinoshita, Sachiko

    2018-05-07

    The presence of abstract letter identity representations in the Roman alphabet has been well documented. These representations are invariant to letter case (upper vs. lower) and visual appearance. For example, "a" and "A" are represented by the same abstract identity. Recent research has begun to consider whether the processing of non-Roman orthographies also involves abstract orthographic representations. In the present study, we sought evidence for abstract identities in Japanese kana, which consist of two scripts, hiragana and katakana. Abstract identities would be invariant to the script used as well as to the degree of visual similarity. We adapted the cross-case masked-priming letter match task used in previous research on Roman letters, by presenting cross-script kana pairs and testing adult beginning-to-intermediate Japanese second-language (L2) learners (first-language English readers). We found robust cross-script priming effects, which were equal in magnitude for visually similar (e.g., り/リ) and dissimilar (e.g., あ/ア) kana pairs. This pattern was found despite participants' imperfect explicit knowledge of the kana names, particularly for katakana. We also replicated prior findings from Roman abstract letter identities in the same participants. Ours is the first study reporting abstract kana identity priming (in adult L2 learners). Furthermore, these representations were acquired relatively early in our adult L2 learners.

  12. THE ORTHOGRAPHIC NORM IN SECONDARY SCHOOL STUDENTS’ WRITTEN ASSIGNMENTS

    Directory of Open Access Journals (Sweden)

    Ivana Đorđev

    2016-06-01

    Full Text Available This paper presents the results of research conducted with the primary objective of determining in which areas secondary school students most often make orthographic mistakes in their official written assignments. Starting from the hypothesis that punctuation and the writing of whole and split words are the areas in which secondary school students (regardless of age and school orientation) show the weakest achievement, an exploratory study was conducted on a corpus of 3,135 written assignments from the school year 2010/11. The research sample was intentional; descriptive and analytical methods were used for the description and analysis of the results. The results showed the following: (1) secondary school students most often make punctuation mistakes in written assignments; we recorded 4,487 errors in the use of signs denoting the intonation and meaning of a text (errors of this type make up 53.93% of the total number of spelling errors recorded in the research corpus); the second most frequent are errors related to the writing of whole and split words (11.02%), and the third is errors in the use of capital letters (9.34%); (2) second grade students have the most problems in orthography, and the quantum of mistakes is almost the same for first graders and seniors, but in all grades the most frequent errors concern punctuation, the writing of whole and split words, and the use of capital letters; (3) although school orientation affects the spelling skills of pupils, the weakest orthographic achievements are likewise recorded in punctuation, the writing of whole and split words, and capitalization, so these are areas that need to be thoroughly addressed in teaching and in the methodology literature. The results are, on the one hand, a picture of the current status of teaching orthography and of the grammar knowledge of secondary school students.
    On the other hand, the research results can be applied in all phases of methodical practical work in teaching orthography, the upgrading the

  13. German and English Bodies: No Evidence for Cross-Linguistic Differences in Preferred Orthographic Grain Size

    Directory of Open Access Journals (Sweden)

    Xenia Schmalz

    2017-03-01

    Full Text Available Previous studies have found that words and nonwords with many body neighbours (i.e., items sharing the same orthographic body, e.g., 'cat, brat, at') are read faster than items with fewer body neighbours. This body-N effect has been explored in the context of cross-linguistic differences in reading, where it has been reported that the size of the effect differs as a function of orthographic depth: readers of English, a deep orthography, show stronger facilitation than readers of German, a shallow orthography. Such findings support the psycholinguistic grain size theory, which proposes that readers of English rely on large orthographic units to reduce the ambiguity of print-to-speech correspondences in their orthography. Here we re-examine the evidence for this pattern and find that there is no reliable evidence for such a cross-linguistic difference. Re-analysis of a key study (Ziegler et al., 2001), analysis of data from the English Lexicon Project (Balota et al., 2007), and a large-scale analysis of nine new experiments all support this conclusion. Using Bayesian analysis techniques, we find little evidence of the body-N effect in most tasks and conditions. Where we do find evidence for a body-N effect (lexical decision for nonwords), we find evidence against an interaction with language.

  14. Training Letter and Orthographic Pattern Recognition in Children with Slow Naming Speed

    Science.gov (United States)

    Conrad, Nicole J.; Levy, Betty Ann

    2011-01-01

    Although research has established that performance on a rapid automatized naming (RAN) task is related to reading, the nature of this relationship is unclear. Bowers (2001) proposed that processes underlying performance on the RAN task and orthographic knowledge make independent and additive contributions to reading performance. We examined the…

  15. S’instruire des paroles d’élèves au sujet de l’étude de l’orthographe lexicale

    Directory of Open Access Journals (Sweden)

    Jean-Yves Levesque, Gaté, Saint-Pierre et Mansour (Revue canadienne de linguistique appliquée / CJAL, 18(1), 2015, 39-62)

    2015-06-01

    Full Text Available Résumé La présente recherche s’est intéressée à la situation d’étude de l’orthographe lexicale à la maison au moyen d’entretiens auprès de 272 élèves du premier cycle du primaire. Les résultats ont révélé différentes manières d’opérer chez les élèves pour s’acquitter de cette tâche. Si une bonne part des élèves consacrait du temps pour étudier l’orthographe à la maison, 4 élèves sur 10 se faisaient dicter les mots par un parent sans consacrer préalablement du temps d’étude. Diverses stratégies sont évoquées par les élèves pour encoder l’orthographe, mais ce sont les filles et les élèves sans difficulté qui ont déclaré utiliser le plus de stratégies et celles-ci étaient davantage diversifiées que chez les garçons et les élèves en difficulté. Les élèves ont rapporté que les stratégies étaient le plus souvent utilisées en copiant les mots avec modèle. C’étaient majoritairement les mères qui soutenaient les enfants au regard de cette tâche scolaire à domicile. Abstract This study focuses on the learning process of vocabulary spelling at home. Data were collected using interviews with 272 students of Grades 1 and 2. Results reveal that students accomplish this task in different ways. A large proportion of students spend time studying spelling at home. However, 4 out of 10 students do not study vocabulary words prior to their parents reading them out loud. Students use various strategies to learn spelling, but girls and students without difficulties reported using more strategies compared to boys and students with difficulties. Students indicated that strategies are most often used when they copy words using a model. It is mostly mothers who support children during spelling study at home.

  16. Measuring Spanish Orthographic Development in Private, Public and Subsidised Schools in Chile

    Science.gov (United States)

    Helman, Lori; Delbridge, Anne; Parker, David; Arnal, Martina; Jara Mödinger, Luz

    2016-01-01

    The current study has a twofold purpose: first, to determine the reliability of a tool for assessing orthographic development in Spanish; second, to assess differences in students' performance on the measure across multiple types of primary schools in a large city in Chile. A Spanish developmental spelling inventory that contained words of…

  17. Computer code abstract: NESTLE

    International Nuclear Information System (INIS)

    Turinsky, P.J.; Al-Chalabi, R.M.K.; Engrand, P.; Sarsour, H.N.; Faure, F.X.; Guo, W.

    1995-01-01

    NESTLE is a few-group neutron diffusion equation solver utilizing the nodal expansion method (NEM) for eigenvalue, adjoint, and fixed-source steady-state and transient problems. The NESTLE code solves the eigenvalue (criticality), eigenvalue adjoint, external fixed-source steady-state, and external fixed-source or eigenvalue-initiated transient problems. The eigenvalue problem allows criticality searches to be completed, and the external fixed-source steady-state problem can search to achieve a specified power level. Transient problems model delayed neutrons via precursor groups. Several core properties can be input as time dependent. Two or four energy groups can be utilized, with all energy groups treated as thermal groups (i.e., upscatter exists) if desired. Core geometries modeled include Cartesian and hexagonal. Three-, two-, and one-dimensional models can be utilized with various symmetries. The thermal conditions predicted by the thermal-hydraulic model of the core are used to correct cross sections for temperature and density effects. Cross sections are parameterized by color, control rod state (i.e., in or out), and burnup, allowing fuel depletion to be modeled. Either a macroscopic or microscopic model may be employed.
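As a miniature illustration of the kind of eigenvalue (criticality) problem NESTLE solves, rather than of its nodal expansion method, here is a one-group, one-dimensional slab diffusion solver using power iteration; all cross sections and dimensions below are illustrative, not reactor data:

```python
import numpy as np

def slab_k_eigenvalue(D=1.0, sigma_a=0.05, nu_sigma_f=0.06,
                      L=100.0, n=200, tol=1e-8, max_iter=5000):
    """Power iteration for the one-group slab diffusion eigenvalue problem
        -D phi'' + sigma_a phi = (1/k) nu_sigma_f phi,  phi(0) = phi(L) = 0,
    discretized by central finite differences on n interior points."""
    h = L / (n + 1)
    # Loss operator A: tridiagonal diffusion-plus-absorption matrix.
    A = (np.diag(np.full(n, 2.0 * D / h**2 + sigma_a))
         + np.diag(np.full(n - 1, -D / h**2), 1)
         + np.diag(np.full(n - 1, -D / h**2), -1))
    phi = np.ones(n)
    k = 1.0
    for _ in range(max_iter):
        # One power-iteration step on A^{-1} F, whose dominant eigenvalue is k.
        phi_new = np.linalg.solve(A, nu_sigma_f * phi)
        k_new = np.sum(nu_sigma_f * phi_new) / np.sum(nu_sigma_f * phi)
        phi = phi_new / np.linalg.norm(phi_new)
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k

k_eff = slab_k_eigenvalue()
# For these data the analytic fundamental mode gives
# k = nu_sigma_f / (sigma_a + D * (pi / L)**2) ≈ 1.1768.
print(f"k-effective = {k_eff:.5f}")
```

Production codes such as NESTLE replace the dense solve with nodal methods, multiple energy groups, and cross-section feedback, but the outer criticality iteration has this same structure.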

  18. A New Pose Estimation Algorithm Using a Perspective-Ray-Based Scaled Orthographic Projection with Iteration.

    Directory of Open Access Journals (Sweden)

    Pengfei Sun

    Full Text Available Pose estimation aims at measuring the position and orientation of a calibrated camera using known image features. The pinhole model is the dominant camera model in this field. However, the imaging precision of this model is not accurate enough for advanced pose estimation algorithms. In this paper, a new camera model, called the incident ray tracking model, is introduced. More importantly, an advanced pose estimation algorithm based on the perspective ray in the new camera model is proposed. The perspective ray, determined by two positioning points, is an abstract mathematical equivalent of the incident ray. In the proposed pose estimation algorithm, called perspective-ray-based scaled orthographic projection with iteration (PRSOI), an approximate ray-based projection is calculated by a linear system and refined by iteration. Experiments on the PRSOI have been conducted, and the results demonstrate that it achieves high accuracy in six-degrees-of-freedom (DOF) motion and outperforms three other state-of-the-art algorithms in terms of accuracy in a contrast experiment.
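The scaled orthographic (weak-perspective) projection underlying POSIT-style algorithms such as this one can be sketched independently of the paper's incident-ray model: every object point is scaled by the average depth rather than its own depth, and the approximation error relative to full pinhole projection shrinks as depth variation becomes small compared with viewing distance. The focal length and geometry below are invented for illustration:

```python
import numpy as np

def perspective_project(X, f):
    """Pinhole projection of Nx3 camera-frame points: x = f*X/Z, y = f*Y/Z."""
    return f * X[:, :2] / X[:, 2:3]

def scaled_orthographic_project(X, f):
    """Weak-perspective approximation: divide by the *average* depth Z0
    instead of each point's own depth, i.e. an orthographic projection
    followed by a uniform scale s = f / Z0."""
    s = f / X[:, 2].mean()
    return s * X[:, :2]

f = 800.0                             # hypothetical focal length in pixels
obj = np.array([[-1.0, -1.0,  0.0],   # object points, ~1 unit in extent
                [ 1.0, -1.0,  0.5],
                [ 1.0,  1.0, -0.5],
                [-1.0,  1.0,  0.2]])

for dist in (5.0, 50.0):
    X = obj + np.array([0.0, 0.0, dist])   # translate in front of the camera
    err = np.abs(perspective_project(X, f)
                 - scaled_orthographic_project(X, f)).max()
    print(f"distance {dist:5.1f}: max approximation error {err:.3f} px")
```

Iterative schemes in the POSIT family start from this approximation and refine the per-point depths until the weak-perspective and full-perspective images agree.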

  19. Does phonological recoding occur during silent reading and is it necessary for orthographic learning?

    NARCIS (Netherlands)

    de Jong, P.F.; Bitter, D.J.L.; van Setten, M.; Marinus, E.

    2009-01-01

    Two studies were conducted to test the central claim of the self-teaching hypothesis (i.e., phonological recoding is necessary for orthographic learning) in silent reading. The first study aimed to demonstrate the use of phonological recoding during silent reading. Texts containing pseudowords were

  20. Chicken or Egg? Untangling the Relationship between Orthographic Processing Skill and Reading Accuracy

    Science.gov (United States)

    Deacon, S. Helene; Benere, Jenna; Castles, Anne

    2012-01-01

    There is increasing evidence of a relationship between orthographic processing skill, or the ability to form, store and access word representations, and reading ability. Empirical research to date has not, however, clarified the direction of this relationship. We examined this question in a three-year longitudinal study of children from Grades 1…

  1. CAS (CHEMICAL ABSTRACTS SOCIETY) PARAMETER CODES and Other Data from FIXED STATIONS and Other Platforms from 19890801 to 19891130 (NODC Accession 9700156)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Tissue, sediment, and Chemical Abstracts Society (CAS) parameter codes were collected from Columbia River Basin and other locations from 01 August 1989 to 30...

  2. Lexical decision performance in developmental surface dysgraphia: Evidence for a unitary orthographic system that is used in both reading and spelling.

    Science.gov (United States)

    Sotiropoulos, Andreas; Hanley, J Richard

    The relationship between spelling, written word recognition, and picture naming is investigated in a study of seven bilingual adults who have developmental surface dysgraphia in both Greek (their first language) and English (their second language). Four of the cases also performed poorly at orthographic lexical decision in both languages. This finding is consistent with similar results in Italian that have been taken as evidence of a developmental impairment to a single orthographic system that is used for both reading and spelling. The remaining three participants performed well at orthographic lexical decision. At first sight, preserved lexical decision in surface dysgraphia is less easy to explain in terms of a shared orthographic system. However, the results of subsequent experiments showed clear parallels between the nature of the reading and spelling difficulties that these three individuals experienced, consistent with the existence of a single orthographic system. The different patterns that were observed were consistent with the claims of Friedmann and Lukov (2008, Developmental surface dyslexias, Cortex, 44, 1146-1160) that several distinct sub-types of developmental surface dyslexia exist. We show that individual differences in spelling in surface dysgraphia are also consistent with these sub-types; there are different developmental deficits that can give rise, in an individual, to a combination of surface dyslexia and dysgraphia. Finally, we compare the theoretical framework used by Friedmann and her colleagues, which is based upon the architecture of the DRC model, with an account that relies instead upon the Triangle model of reading.

  3. Coding of obesity in administrative hospital discharge abstract data: accuracy and impact for future research studies.

    Science.gov (United States)

    Martin, Billie-Jean; Chen, Guanmin; Graham, Michelle; Quan, Hude

    2014-02-13

    Obesity is a pervasive problem and a popular subject of academic assessment. The ability to take advantage of existing data, such as administrative databases, to study obesity is appealing. The objective of our study was to assess the validity of obesity coding in an administrative database and compare the association between obesity and outcomes in an administrative database versus a registry. This study was conducted using a coronary catheterization registry and an administrative database (the Discharge Abstract Database (DAD)). A Body Mass Index (BMI) ≥30 kg/m2 within the registry defined obesity. In the DAD, obesity was defined by diagnosis codes E65-E68 (ICD-10). The sensitivity, specificity, negative predictive value (NPV) and positive predictive value (PPV) of an obesity diagnosis in the DAD were determined using the obesity diagnosis in the registry as the referent. The association between obesity and outcomes was assessed. The study population of 17,380 subjects was largely male (68.8%) with a mean BMI of 27.0 kg/m2. Obesity prevalence was lower in the DAD than in the registry (2.4% vs. 20.3%). A diagnosis of obesity in the DAD had a sensitivity of 7.75%, specificity of 98.98%, NPV of 80.84% and PPV of 65.94%. Obesity was associated with a decreased risk of death or re-hospitalization, though non-significantly within the DAD. Obesity was significantly associated with an increased risk of a cardiac procedure in both databases. Overall, obesity was poorly coded in the DAD. However, when coded, it was coded accurately. Administrative databases are not an optimal data source for obesity prevalence and incidence surveillance but could be used to define obese cohorts for follow-up.
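The validity statistics reported here all follow from a standard 2 × 2 confusion matrix against the registry reference. A minimal sketch with invented counts that echo the abstract's pattern of a rarely used but accurate code (the abstract does not report the actual cell counts):

```python
def validity_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV of a diagnosis code,
    treating the registry diagnosis as the reference standard."""
    return {
        "sensitivity": tp / (tp + fn),  # coded obese among truly obese
        "specificity": tn / (tn + fp),  # coded non-obese among truly non-obese
        "ppv": tp / (tp + fp),          # truly obese among coded obese
        "npv": tn / (tn + fn),          # truly non-obese among coded non-obese
    }

# Invented example: low sensitivity (most true cases are never coded)
# combined with high specificity (a code, when present, is usually right).
stats = validity_stats(tp=80, fp=40, fn=920, tn=8960)
for name, value in stats.items():
    print(f"{name}: {value:.2%}")
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on prevalence, which is why the low coded prevalence in the DAD drives the headline result.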

  4. Orthographic facilitation in oral vocabulary acquisition.

    Science.gov (United States)

    Ricketts, Jessie; Bishop, Dorothy V M; Nation, Kate

    2009-10-01

    An experiment investigated whether exposure to orthography facilitates oral vocabulary learning. A total of 58 typically developing children aged 8-9 years were taught 12 nonwords. Children were trained to associate novel phonological forms with pictures of novel objects. Pictures were used as referents to represent novel word meanings. For half of the nonwords children were additionally exposed to orthography, although they were not alerted to its presence, nor were they instructed to use it. After this training phase a nonword-picture matching posttest was used to assess learning of nonword meaning, and a spelling posttest was used to assess learning of nonword orthography. Children showed robust learning for novel spelling patterns after incidental exposure to orthography. Further, we observed stronger learning for nonword-referent pairings trained with orthography. The degree of orthographic facilitation observed in posttests was related to children's reading levels, with more advanced readers showing more benefit from the presence of orthography.

  5. The Role of Attention Shifting in Orthographic Competencies: Cross-Sectional Findings from 1st, 3rd, and 8th Grade Students

    Directory of Open Access Journals (Sweden)

    Antje von Suchodoletz

    2017-09-01

    Attention shifting refers to one core component of executive functions, a set of higher-order cognitive processes that predict different aspects of academic achievement. To date, few studies have investigated the role of attention shifting in orthographic competencies during middle childhood and early adolescence. In the present study, 69 first-grade, 121 third-grade, and 85 eighth-grade students' attention shifting was tested with a computer version of the Dimensional Change Card Sort (DCCS; Zelazo, 2006). General spelling skills and specific writing and spelling strategies were assessed with the Hamburger Writing Test (May, 2002). Results suggested associations between attention shifting and various orthographic competencies that differ across age groups and by sex. Across all age groups, better attention shifting was associated with fewer errors in applying alphabetical strategies. In third graders, better attention shifting was furthermore related to better general spelling skills and fewer errors in using orthographical strategies. In this age group, associations did not differ by sex. Among first graders, attention shifting was negatively related to general spelling skills, but only for boys. In contrast, attention shifting was positively related to general spelling skills in eighth graders, but only for girls. Finally, better attention shifting was associated with fewer case-related errors in eighth graders, independent of students' sex. In sum, the data provide insight into both variability and consistency in the pattern of relations between attention shifting and various orthographic competencies among elementary and middle school students.

  6. Data Abstraction in GLISP.

    Science.gov (United States)

    Novak, Gordon S., Jr.

    GLISP is a high-level computer language (based on Lisp and including Lisp as a sublanguage) which is compiled into Lisp. GLISP programs are compiled relative to a knowledge base of object descriptions, a form of abstract datatypes. A primary goal of the use of abstract datatypes in GLISP is to allow program code to be written in terms of objects,…
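    GLISP itself is Lisp-based and its compiler is not shown here. As a loose analogue only, the following Python sketch illustrates the idea of code written "in terms of objects": abstract property references are resolved against a separate object description, much as a GLISP compiler resolves them against its knowledge base (the `CIRCLE_DESCRIPTION` structure and `fetch` helper are invented for illustration, not GLISP constructs):

```python
# Loose Python analogue (not GLISP): program code refers to abstract object
# properties; a separate "object description" maps each property onto
# concrete storage or a computed expression.

CIRCLE_DESCRIPTION = {            # hypothetical abstract datatype
    "stored": ("radius",),        # concrete slots
    "computed": {                 # properties derived on demand
        "diameter": lambda o: 2 * o["radius"],
        "area": lambda o: 3.141592653589793 * o["radius"] ** 2,
    },
}

def fetch(obj, prop, description):
    """Resolve an abstract property reference against the description;
    callers never need to know whether the property is stored or computed."""
    if prop in description["stored"]:
        return obj[prop]
    return description["computed"][prop](obj)

c = {"radius": 2.0}
print(fetch(c, "diameter", CIRCLE_DESCRIPTION))  # 4.0
```

    The point, as in GLISP, is that the calling code stays unchanged if the description later switches a property between stored and computed.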

  7. Lateralized effects of orthographical irregularity and auditory memory load on the kinematics of transcription typewriting

    NARCIS (Netherlands)

    Bloemsaat, J.G.; Galen, G.P. van; Meulenbroek, R.G.J.

    2003-01-01

    This study investigated the combined effects of orthographical irregularity and auditory memory load on the kinematics of finger movements in a transcription-typewriting task. Eight right-handed touch-typists were asked to type 80 strings of ten seven-letter words. In half the trials an irregularly

  8. L’orthographe : des systèmes aux usages

    OpenAIRE

    Fayol, Michel; Jaffré, Jean-Pierre

    2016-01-01

    This text offers both a linguistic and a psycholinguistic perspective on orthography. Beyond the specificity inherent, by definition, in each of the two fields, it illustrates the epistemological complementarity that has developed between them over recent decades. Psycholinguistics has very often drawn on linguistic descriptions to develop its working hypotheses and, for its part, linguistics has repeatedly drawn inspiration from psycholinguistic observations...

  9. Locus of Word Frequency Effects in Spelling to Dictation: Still at the Orthographic Level!

    Science.gov (United States)

    Bonin, Patrick; Laroche, Betty; Perret, Cyril

    2016-01-01

    The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied on objective word frequency and on phonological…

  10. Orthographic Mapping in the Acquisition of Sight Word Reading, Spelling Memory, and Vocabulary Learning

    Science.gov (United States)

    Ehri, Linnea C.

    2014-01-01

    Orthographic mapping (OM) involves the formation of letter-sound connections to bond the spellings, pronunciations, and meanings of specific words in memory. It explains how children learn to read words by sight, to spell words from memory, and to acquire vocabulary words from print. This development is portrayed by Ehri (2005a) as a sequence of…

  11. Argument structure and the representation of abstract semantics.

    Directory of Open Access Journals (Sweden)

    Javier Rodríguez-Ferreiro

    According to the dual coding theory, differences in the ease of retrieval between concrete and abstract words are related to the exclusive dependence of abstract semantics on linguistic information. Argument structure can be considered a measure of the complexity of the linguistic contexts that accompany a verb. If the retrieval of abstract verbs relies more on the linguistic codes they are associated with, we could expect a larger effect of argument structure for the processing of abstract verbs. In this study, sets of length- and frequency-matched verbs including 40 intransitive verbs, 40 transitive verbs taking simple complements, and 40 transitive verbs taking sentential complements were presented in separate lexical and grammatical decision tasks. Half of the verbs were concrete and half were abstract. Similar results were obtained in the two tasks, with significant effects of imageability and transitivity. However, the interaction between these two variables was not significant. These results conflict with hypotheses assuming a stronger reliance of abstract semantics on linguistic codes. In contrast, our data are in line with theories that link the ease of retrieval with availability and robustness of semantic information.

  12. Argument structure and the representation of abstract semantics.

    Science.gov (United States)

    Rodríguez-Ferreiro, Javier; Andreu, Llorenç; Sanz-Torrent, Mònica

    2014-01-01

    According to the dual coding theory, differences in the ease of retrieval between concrete and abstract words are related to the exclusive dependence of abstract semantics on linguistic information. Argument structure can be considered a measure of the complexity of the linguistic contexts that accompany a verb. If the retrieval of abstract verbs relies more on the linguistic codes they are associated with, we could expect a larger effect of argument structure for the processing of abstract verbs. In this study, sets of length- and frequency-matched verbs including 40 intransitive verbs, 40 transitive verbs taking simple complements, and 40 transitive verbs taking sentential complements were presented in separate lexical and grammatical decision tasks. Half of the verbs were concrete and half were abstract. Similar results were obtained in the two tasks, with significant effects of imageability and transitivity. However, the interaction between these two variables was not significant. These results conflict with hypotheses assuming a stronger reliance of abstract semantics on linguistic codes. In contrast, our data are in line with theories that link the ease of retrieval with availability and robustness of semantic information.

  13. CAS (CHEMICAL ABSTRACTS SERVICE) PARAMETER CODES and Other Data from MULTIPLE SHIPS From Caribbean Sea and Others from 19790205 to 19890503 (NCEI Accession 9100017)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The accession contains Chemical Abstracts Society (CAS) parameter codes and Other Data from MULTIPLE SHIPS From Caribbean Sea and Gulf of Mexico from February 5,...

  14. Shared orthographic neuronal representations for spelling and reading.

    Science.gov (United States)

    Purcell, Jeremy J; Jiang, Xiong; Eden, Guinevere F

    2017-02-15

    A central question in the study of the neural basis of written language is whether reading and spelling utilize shared orthographic representations. While recent studies employing fMRI to test this question report that the left inferior frontal gyrus (IFG) and ventral occipitotemporal cortex (vOTC) are active during both spelling and reading in the same subjects (Purcell et al., 2011a; Rapp and Lipka, 2011), the spatial resolution of fMRI limits the interpretation of these findings. Specifically, it is unknown if the neurons which encode orthography for reading are also involved in spelling of the same words. Here we address this question by employing an event-related functional magnetic resonance imaging-adaptation (fMRI-A) paradigm designed to examine shared orthographic representations across spelling and reading. First, we identified areas that independently showed adaptation to reading, and adaptation to spelling. Then we identified spatial convergence for these two separate maps via a conjunction analysis. Consistent with previous studies (Purcell et al., 2011a; Rapp and Lipka, 2011), this analysis revealed the left dorsal IFG, vOTC and supplementary motor area. To further validate these observations, we then interrogated these regions using an across-task adaptation technique, and found adaptation across reading and spelling in the left dorsal IFG (BA 44/9). Our final analysis focused specifically on the Visual Word Form Area (VWFA) in the vOTC, whose variability in location among subjects requires the use of subject-specific identification mechanisms (Glezer and Riesenhuber, 2013). Using a functional localizer for reading, we defined the VWFA in each subject, and found adaptation effects for both within the spelling and reading conditions, respectively, as well as across spelling and reading. Because none of these effects were observed during a phonological/semantic control condition, we conclude that the left dorsal IFG and VWFA are involved in accessing

  15. Impaired Orthographic Processing in Chinese Dyslexic Children: Evidence from the Lexicality Effect on N400

    Science.gov (United States)

    Tzeng, Yu-Lin; Hsu, Chun-Hsien; Lin, Wan-Hsuan; Lee, Chia-Ying

    2017-01-01

    This study used the lexicality effects on N400 to investigate orthographic processing in children with developmental dyslexia. Participants performed a Go/No-Go semantic judgment task; three types of stimuli--real characters (RC), pseudocharacters (PC), and noncharacters (NC)--were embedded in No-Go trials. Two types of lexicality effects (RC vs.…

  16. Evidence from neglect dyslexia for morphological decomposition at the early stages of orthographic-visual analysis

    Science.gov (United States)

    Reznick, Julia; Friedmann, Naama

    2015-01-01

    This study examined whether and how the morphological structure of written words affects reading in word-based neglect dyslexia (neglexia), and what can be learned about morphological decomposition in reading from the effect of morphology on neglexia. The oral reading of 7 Hebrew-speaking participants with acquired neglexia at the word level—6 with left neglexia and 1 with right neglexia—was evaluated. The main finding was that the morphological role of the letters on the neglected side of the word affected neglect errors: When an affix appeared on the neglected side, it was neglected significantly more often than when the neglected side was part of the root; root letters on the neglected side were never omitted, whereas affixes were. Perceptual effects of length and final letter form were found for words with an affix on the neglected side, but not for words in which a root letter appeared in the neglected side. Semantic and lexical factors did not affect the participants' reading and error pattern, and neglect errors did not preserve the morpho-lexical characteristics of the target words. These findings indicate that an early morphological decomposition of words to their root and affixes occurs before access to the lexicon and to semantics, at the orthographic-visual analysis stage, and that the effects did not result from lexical feedback. The same effects of morphological structure on reading were manifested by the participants with left- and right-sided neglexia. Since neglexia is a deficit at the orthographic-visual analysis level, the effect of morphology on reading patterns in neglexia further supports that morphological decomposition occurs in the orthographic-visual analysis stage, prelexically, and that the search for the three letters of the root in Hebrew is a trigger for attention shift in neglexia. PMID:26528159

  17. Is phonology bypassed in normal or dyslexic development?

    Science.gov (United States)

    Pennington, B F; Lefly, D L; Van Orden, G C; Bookman, M O; Smith, S D

    1987-01-01

    A pervasive assumption in most accounts of normal reading and spelling development is that phonological coding is important early in development but is subsequently superseded by faster, orthographic coding which bypasses phonology. We call this assumption, which derives from dual process theory, the developmental bypass hypothesis. The present study tests four specific predictions of the developmental bypass hypothesis by comparing dyslexics and nondyslexics from the same families in a cross-sectional design. The four predictions are: 1) that phonological coding skill develops early in normal readers and soon reaches asymptote, whereas orthographic coding skill has a protracted course of development; 2) that the correlation of adult reading or spelling performance with phonological coding skill is considerably less than the correlation with orthographic coding skill; 3) that dyslexics who are mainly deficient in phonological coding skill should be able to bypass this deficit and eventually close the gap in reading and spelling performance; and 4) that the greatest differences between dyslexics and developmental controls on measures of phonological coding skill should be observed early rather than late in development. None of the four predictions of the developmental bypass hypothesis were upheld. Phonological coding skill continued to develop in nondyslexics until adulthood. It accounted for a substantial (32-53 percent) portion of the variance in reading and spelling performance in adult nondyslexics, whereas orthographic coding skill did not account for a statistically reliable portion of this variance. The dyslexics differed little across age in phonological coding skill, but made linear progress in orthographic coding skill, surpassing spelling-age (SA) controls by adulthood. Nonetheless, they did not close the gap in reading and spelling performance. Finally, dyslexics were significantly worse than SA (and Reading Age [RA]) controls in phonological coding skill

  18. Development of neural basis for chinese orthographic neighborhood size effect.

    Science.gov (United States)

    Zhao, Jing; Li, Qing-Lin; Ding, Guo-Sheng; Bi, Hong-Yan

    2016-02-01

    The brain activity of the orthographic neighborhood size (N-size) effect in Chinese character naming has been studied in adults, while behavioral studies have revealed a developmental trend of the Chinese N-size effect in developing readers. However, it is unclear whether and how the neural mechanism of the N-size effect changes in Chinese children along with development. Here we address this issue using functional magnetic resonance imaging. Forty-four students from the 3rd, 5th, and 7th grades were scanned during silent naming of Chinese characters. After scanning, all participants took part in an overt naming test outside the scanner, and results of the naming task showed that the 3rd graders named characters from large neighborhoods faster than those from small neighborhoods, revealing a facilitatory N-size effect; the 5th graders showed a null N-size effect while the 7th graders showed an inhibitory N-size effect. Neuroimaging results revealed that only the 3rd graders exhibited a significant N-size effect in the left middle occipital activity, with greater activation for large N-size characters. Results of the 5th and 7th graders showed significant N-size effects in the left middle frontal gyrus, in which the 5th graders induced greater activation in the large N-size condition than in the small N-size condition, while the 7th graders exhibited an opposite effect which was similar to the adult pattern reported in a previous study. The current findings suggest a transition from broadly tuned to finely tuned orthographic representation with reading development, and inhibition from neighbors' phonology for higher graders. Hum Brain Mapp 37:632-647, 2016. © 2015 Wiley Periodicals, Inc.

  19. Effective connectivity of visual word recognition and homophone orthographic errors

    Science.gov (United States)

    Guàrdia-Olmos, Joan; Peró-Cebollero, Maribel; Zarabozo-Hurtado, Daniel; González-Garrido, Andrés A.; Gudayol-Ferré, Esteve

    2015-01-01

    The study of orthographic errors in a transparent language like Spanish is an important topic in relation to writing acquisition. The development of neuroimaging techniques, particularly functional magnetic resonance imaging (fMRI), has enabled the study of such relationships between brain areas. The main objective of the present study was to explore the patterns of effective connectivity by processing pseudohomophone orthographic errors among subjects with high and low spelling skills. Two groups of 12 Mexican subjects each, matched by age, were formed based on their results in a series of ad hoc spelling-related out-scanner tests: a high spelling skills (HSSs) group and a low spelling skills (LSSs) group. During the fMRI session, two experimental tasks were applied (spelling recognition task and visuoperceptual recognition task). Regions of Interest and their signal values were obtained for both tasks. Based on these values, structural equation models (SEMs) were obtained for each group of spelling competence (HSS and LSS) and task through maximum likelihood estimation, and the model with the best fit was chosen in each case. Likewise, dynamic causal models (DCMs) were estimated for all the conditions across tasks and groups. The HSS group’s SEM results suggest that, in the spelling recognition task, the right middle temporal gyrus, and, to a lesser extent, the left parahippocampal gyrus receive most of the significant effects, whereas the DCM results in the visuoperceptual recognition task show less complex effects, but still congruent with the previous results, with an important role in several areas. In general, these results are consistent with the major findings in partial studies about linguistic activities but they are the first analyses of statistical effective brain connectivity in transparent languages. PMID:26042070

  20. Effects of Phonological and Orthographic Shifts on Children's Processing of Written Morphology: A Time-Course Study

    Science.gov (United States)

    Quémart, Pauline; Casalis, Séverine

    2014-01-01

    We report two experiments that investigated whether phonological and/or orthographic shifts in a base word interfere with morphological processing by French 3rd, 4th, and 5th graders and adults (as a control group) along the time course of visual word recognition. In both experiments, prime-target pairs shared four possible relationships:…

  1. Efficient visual object and word recognition relies on high spatial frequency coding in the left posterior fusiform gyrus: evidence from a case-series of patients with ventral occipito-temporal cortex damage.

    Science.gov (United States)

    Roberts, Daniel J; Woollams, Anna M; Kim, Esther; Beeson, Pelagie M; Rapcsak, Steven Z; Lambon Ralph, Matthew A

    2013-11-01

    Recent visual neuroscience investigations suggest that ventral occipito-temporal cortex is retinotopically organized, with high acuity foveal input projecting primarily to the posterior fusiform gyrus (pFG), making this region crucial for coding high spatial frequency information. Because high spatial frequencies are critical for fine-grained visual discrimination, we hypothesized that damage to the left pFG should have an adverse effect not only on efficient reading, as observed in pure alexia, but also on the processing of complex non-orthographic visual stimuli. Consistent with this hypothesis, we obtained evidence that a large case series (n = 20) of patients with lesions centered on left pFG: 1) exhibited reduced sensitivity to high spatial frequencies; 2) demonstrated prolonged response latencies both in reading (pure alexia) and object naming; and 3) were especially sensitive to visual complexity and similarity when discriminating between novel visual patterns. These results suggest that the patients' dual reading and non-orthographic recognition impairments have a common underlying mechanism and reflect the loss of high spatial frequency visual information normally coded in the left pFG.

  2. Impredicative concurrent abstract predicates

    DEFF Research Database (Denmark)

    Svendsen, Kasper; Birkedal, Lars

    2014-01-01

    We present impredicative concurrent abstract predicates (iCAP), a program logic for modular reasoning about concurrent, higher-order, reentrant, imperative code. Building on earlier work, iCAP uses protocols to reason about shared mutable state. A key novel feature of iCAP is the ability to define...

  3. Writing Strengthens Orthography and Alphabetic-Coding Strengthens Phonology in Learning to Read Chinese

    NARCIS (Netherlands)

    Guan, C.Q.; Liu, Y.; Chan, D.H.L.; Ye, F.F.; Perfetti, C.A.

    2011-01-01

    Learning to write words may strengthen orthographic representations and thus support word-specific recognition processes. This hypothesis applies especially to Chinese because its writing system encourages character-specific recognition that depends on accurate representation of orthographic form.

  4. Phonologically-Based Priming in the Same-Different Task With L1 Readers.

    Science.gov (United States)

    Lupker, Stephen J; Nakayama, Mariko; Yoshihara, Masahiro

    2018-02-01

    The present experiment provides an investigation of a promising new tool, the masked priming same-different task, for investigating the orthographic coding process. Orthographic coding is the process of establishing a mental representation of the letters and letter order in the word being read which is then used by readers to access higher-level (e.g., semantic) information about that word. Prior research (e.g., Norris & Kinoshita, 2008) had suggested that performance in this task may be based entirely on orthographic codes. As reported by Lupker, Nakayama, and Perea (2015a), however, in at least some circumstances, phonological codes also play a role. Specifically, even though their 2 languages are completely different orthographically, Lupker et al.'s Japanese-English bilinguals showed priming in this task when masked L1 primes were phonologically similar to L2 targets. An obvious follow-up question is whether Lupker et al.'s effect might have resulted from a strategy that was adopted by their bilinguals to aid in processing of, and memory for, the somewhat unfamiliar L2 targets. In the present experiment, Japanese readers responded to (Japanese) Kanji targets with phonologically identical primes (on "related" trials) being presented in a completely different but highly familiar Japanese script, Hiragana. Once again, significant priming effects were observed, indicating that, although performance in the masked priming same-different task may be mainly based on orthographic codes, phonological codes can play a role even when the stimuli being matched are familiar words from a reader's L1. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  5. The Implications of Orthographic Intraference for the Teaching and Description of ESL: The Educated Nigerian English Examples (Implicaciones de la Intraferencia Ortográfica para la Enseñanza y Descripción del Inglés como Segunda Lengua: Ejemplos Inglés Nigeriano Formal)

    Science.gov (United States)

    Ekundayo, Omowumi Steve Bode

    2015-01-01

    This paper examines orthographic intraference and its implications for teaching and describing English as a second language (ESL). Orthographic intraference is used here to denote instances of single word spelling, acronyms, mix up of homophones, homonyms and compound word spelling arising not from interference but from orthographic rules and…

  6. SHAPE GENERATION BY MEANS OF A NEW METHOD OF ORTHOGRAPHIC REPRESENTATION ("PROEKTIVOGRAFIYA"): DRAWINGS OF MULTI-COMPONENT POLYHEDRA

    Directory of Open Access Journals (Sweden)

    Andrey Ivashchenko Viktorovich

    2012-10-01

    The authors analyze the capabilities of a traditional set of shape generation techniques that employ orthographic representation in the generation of polyhedra, taking into account the advanced approach to the study of new multi-nuclear structures. In the past, designs based on one nucleus were used in practice. The use of two or more nuclei is considered in the article. In the most common case, the resulting system of planes will constitute multiple orthographic representations. The characteristics of a binuclear system depend on the mutual positions and relative dimensions of the nuclei. In addition to regular parameters, a complete description of the system needs particular supplementary parameters that determine the mutual positions of the nuclei. An increase in the number of nuclei causes an increase in the number of descriptive parameters. The authors provide examples of binuclear systems composed of tetrahedrons, cubes, and dodecahedrons, implemented in the Delphi medium. The results can be exported into any three-dimensional modeling system with a view to their further study and use.

  7. Under-coding of secondary conditions in coded hospital health data: Impact of co-existing conditions, death status and number of codes in a record.

    Science.gov (United States)

    Peng, Mingkai; Southern, Danielle A; Williamson, Tyler; Quan, Hude

    2017-12-01

    This study examined the coding validity of hypertension, diabetes, obesity and depression in relation to the presence of their co-existing conditions, death status and the number of diagnosis codes in a hospital discharge abstract database. We randomly selected 4007 discharge abstract database records from four teaching hospitals in Alberta, Canada and reviewed their charts to extract 31 conditions listed in the Charlson and Elixhauser comorbidity indices. Conditions associated with the four study conditions were identified through multivariable logistic regression. Coding validity (i.e. sensitivity, positive predictive value) of the four conditions was related to the presence of their associated conditions. Sensitivity increased with an increasing number of diagnosis codes. The impact of death status on coding validity was minimal. Coding validity of conditions is closely related to their clinical importance and the complexity of patients' case mix. We recommend mandatory coding of certain secondary diagnoses to meet the needs of health research based on administrative health data.
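    The reported relationship between sensitivity and the number of diagnosis codes in a record can be made concrete with a stratified sensitivity calculation; the record tuples below are invented for illustration and are not the study's data:

```python
# Sketch with invented data: sensitivity of a coded secondary condition
# (e.g. obesity), stratified by how many diagnosis codes the record holds.
# Each tuple is (chart_has_condition, condition_coded, n_codes_in_record);
# chart review is treated as the reference standard.

records = [
    (True, False, 3), (True, True, 4), (True, False, 5), (True, False, 2),
    (True, True, 9), (True, True, 12), (True, False, 10), (True, True, 11),
]

def sensitivity(rows):
    """Fraction of chart-confirmed cases that were actually coded."""
    confirmed = [r for r in rows if r[0]]
    return sum(r[1] for r in confirmed) / len(confirmed)

few = sensitivity([r for r in records if r[2] <= 5])
many = sensitivity([r for r in records if r[2] > 5])
print(f"sensitivity with <=5 codes: {few:.0%}; with >5 codes: {many:.0%}")
```

    With these toy numbers, records carrying more codes capture the condition more often, the pattern the abstract describes.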

  8. Order short-term memory is not specifically impaired in dyslexia and does not affect orthographic learning

    Directory of Open Access Journals (Sweden)

    Eva eStaels

    2014-09-01

    This article reports two studies that investigate short-term memory (STM) deficits in dyslexic children and explore the relationship between short-term memory and reading acquisition. In the first experiment, thirty-six dyslexic children and sixty-one control children performed an item STM task and a serial order STM task. The results of this experiment show that dyslexic children do not suffer from a specific serial order STM deficit. In addition, the results demonstrate that phonological processing skills are equally closely related to item STM and serial order STM. However, nonverbal intelligence was more strongly involved in serial order STM than in item STM. In the second experiment, the same two STM tasks were administered and reading acquisition was assessed by measuring orthographic learning in a group of one hundred and eighty-eight children. The results of this study show that orthographic learning is exclusively related to item STM and not to order STM. It is concluded that serial order STM is not the right place to look for a causal explanation of reading disability, nor for differences in word reading acquisition.

  9. Effectiveness of Applying 2D Static Depictions and 3D Animations to Orthographic Views Learning in Graphical Course

    Science.gov (United States)

    Wu, Chih-Fu; Chiang, Ming-Chin

    2013-01-01

    This study provides experiment results as an educational reference for instructors to help student obtain a better way to learn orthographic views in graphical course. A visual experiment was held to explore the comprehensive differences between 2D static and 3D animation object features; the goal was to reduce the possible misunderstanding…

  10. Compilation of the abstracts of nuclear computer codes available at CPD/IPEN

    International Nuclear Information System (INIS)

    Granzotto, A.; Gouveia, A.S. de; Lourencao, E.M.

    1981-06-01

    A compilation of all computer codes available at IPEN in São Paulo is presented. These computer codes are classified according to the Argonne National Laboratory and Nuclear Energy Agency schedule. (E.G.) [pt]

  11. Coordinate transformations, orthographic projections, and robot kinematics

    International Nuclear Information System (INIS)

    Crochetiere, W.J.

    1984-01-01

    Humans do not consciously think of moving each of their joints while they move their hands from one place to another. Likewise, robot arms can be commanded to move about in cartesian space without the need to address the individual joints. To do this, the direct and inverse kinematic equations of any robot arm must be derived. The direct kinematic equations uniquely transform the joint positions into the position (and orientation) of the hand, whereas the inverse kinematic equations transform the position (and orientation) of the hand into joint positions. The derivation of the inverse kinematic equations for any particular robot is a difficult problem which may have more than one solution. In this paper, these equations are derived for a six degree of freedom robot arm. A combination of matrix operations to perform coordinate rotations and trigonometry within the appropriate orthographic projections to perform coordinate translations is employed. This complementary approach yields a solution which is more easily obtained, and also more easily visualized. The resulting solution was programmed into a real-time computer as a part of a higher-level software system to control the motion of the arm
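    The direct/inverse distinction described above can be illustrated on a far simpler arm than the paper's six-degree-of-freedom manipulator: a hypothetical two-joint planar arm, for which both mappings fit in a few lines (this is the generic textbook formulation, not the paper's derivation):

```python
import math

# Illustrative sketch only: direct (forward) and inverse kinematics for a
# two-joint planar arm with link lengths l1, l2. Joint angles map uniquely
# to the hand position; the inverse problem has two solutions (elbow
# "up" or "down"), echoing the abstract's point that the inverse equations
# may have more than one solution.

def forward(theta1, theta2, l1=1.0, l2=1.0):
    """Hand (x, y) from joint angles, by composing planar rotations."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x, y, l1=1.0, l2=1.0):
    """One of the two joint-angle solutions reaching (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    theta2 = math.acos(c2)  # the other solution uses -theta2
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

x, y = forward(math.pi / 4, math.pi / 4)
t1, t2 = inverse(x, y)
print(round(t1, 6), round(t2, 6))  # recovers the original joint angles
```

    Composing `forward` after `inverse` returning the starting angles is the basic consistency check any such derivation must pass.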

  12. The impact of orthographic knowledge on speech processing

    Directory of Open Access Journals (Sweden)

    Régine Kolinsky

    2012-12-01

    http://dx.doi.org/10.5007/2175-8026.2012n63p161 The levels-of-processing approach to speech processing (cf. Kolinsky, 1998) distinguishes three levels, from bottom to top: perception, recognition (which involves activation of stored knowledge), and formal explicit analysis or comparison (which belongs to metalinguistic ability), and assumes that only the first is immune to literacy-dependent knowledge. In this contribution, we first briefly review the main ideas and evidence supporting the role of learning to read in the alphabetic system in the development of conscious representations of phonemes, and we contrast conscious and unconscious representations of phonemes. Then, we examine in detail recent compelling behavioral and neuroscientific evidence for the involvement of orthographic representation in the recognition of spoken words. We conclude by arguing that there is a strong need for theoretical re-elaboration of the models of speech recognition, which have typically ignored the influence of reading acquisition.

  13. FUNCTIONAL AND EFFECTIVE CONNECTIVITY OF VISUAL WORD RECOGNITION AND HOMOPHONE ORTHOGRAPHIC ERRORS.

    Directory of Open Access Journals (Sweden)

    Joan Guàrdia-Olmos

    2015-05-01

    Full Text Available The study of orthographic errors in a transparent language like Spanish is an important topic in relation to writing acquisition. The development of neuroimaging techniques, particularly functional Magnetic Resonance Imaging (fMRI), has enabled the study of such relationships between brain areas. The main objective of the present study was to explore the patterns of effective connectivity involved in processing pseudohomophone orthographic errors among subjects with high and low spelling skills. Two groups of 12 Mexican subjects each, matched by age, were formed based on their results in a series of ad-hoc spelling-related out-of-scanner tests: a High Spelling Skills group (HSS) and a Low Spelling Skills group (LSS). During the fMRI session, two experimental tasks were applied (a spelling recognition task and a visuoperceptual recognition task). Regions of Interest (ROIs) and their signal values were obtained for both tasks. Based on these values, Structural Equation Models (SEMs) were obtained for each group of spelling competence (HSS and LSS) and task through Maximum Likelihood (ML) estimation, and the model with the best fit was chosen in each case. Likewise, Dynamic Causal Models (DCMs) were estimated for all the conditions across tasks and groups. The HSS group’s SEM results suggest that, in the spelling recognition task, the right middle temporal gyrus and, to a lesser extent, the left parahippocampal gyrus receive most of the significant effects, whereas the DCM results in the visuoperceptual recognition task show less complex effects, but still congruent with the previous results, with an important role for several areas. In general, these results are consistent with the major findings of partial studies of linguistic activities, but they are the first analyses of statistical effective brain connectivity in transparent languages.

  14. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Burr Alister

    2009-01-01

    Full Text Available Abstract This paper discusses the application of adaptive modulation and adaptive-rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time- and frequency-selective channel. The adaptive turbo code scheme is based on a subband adaptive method and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed, and two turbo code rates are considered. The performances of both systems with high and low BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.
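    The subband adaptive method summarized above can be illustrated with a minimal mode-selection sketch: for each subband, pick the highest-throughput modulation whose SNR requirement is met for the target BER. The switching thresholds below are invented placeholders for illustration; the paper's actual thresholds, code rates, and BER targets are not preserved in this record:

```python
# Illustrative subband adaptive modulation. The SNR thresholds are
# made-up placeholders, not the switching levels used in the paper.
MODES = [          # (name, bits/symbol, minimum SNR in dB to meet target BER)
    ("no-tx", 0, float("-inf")),
    ("BPSK",  1,  3.0),
    ("QPSK",  2,  6.0),
    ("8AMPM", 3,  9.0),
    ("16QAM", 4, 12.0),
    ("64QAM", 6, 18.0),
]

def select_modes(subband_snrs_db):
    """Per subband, choose the highest-throughput mode whose threshold is met."""
    chosen = []
    for snr in subband_snrs_db:
        best = MODES[0]                      # default: no transmission
        for mode in MODES:
            if snr >= mode[2] and mode[1] > best[1]:
                best = mode
        chosen.append(best[0])
    return chosen
```

    A deeply faded subband falls back to no transmission, while strong subbands carry dense constellations; this is the throughput gain that subband adaptation buys on a frequency-selective channel.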

  15. The Effect of Orthographic Depth on Letter String Processing: The Case of Visual Attention Span and Rapid Automatized Naming

    Science.gov (United States)

    Antzaka, Alexia; Martin, Clara; Caffarra, Sendy; Schlöffel, Sophie; Carreiras, Manuel; Lallier, Marie

    2018-01-01

    The present study investigated whether orthographic depth can increase the bias towards multi-letter processing in two reading-related skills: visual attention span (VAS) and rapid automatized naming (RAN). VAS (i.e., the number of visual elements that can be processed at once in a multi-element array) was tested with a visual 1-back task and RAN…

  16. Orthographic Transparency Enhances Morphological Segmentation in Children Reading Hebrew Words

    Science.gov (United States)

    Haddad, Laurice; Weiss, Yael; Katzir, Tami; Bitan, Tali

    2018-01-01

    Morphological processing of derived words develops simultaneously with reading acquisition. However, the reader’s engagement in morphological segmentation may depend on the language’s morphological richness and orthographic transparency, and on the reader’s reading skills. The current study tested the common idea that morphological segmentation is enhanced in non-transparent orthographies to compensate for the absence of phonological information. Hebrew’s rich morphology and the dual version of the Hebrew script (with and without diacritic marks) provide an opportunity to study the interaction of orthographic transparency and morphological segmentation in the development of reading skills in a within-language design. Hebrew-speaking 2nd (N = 27) and 5th (N = 29) grade children read aloud 96 noun words. Half of the words were simple mono-morphemic words and half were bi-morphemic derivations composed of a productive root and a morphemic pattern. In each list, half of the words were presented in the transparent version of the script (with diacritic marks) and half in the non-transparent version (without diacritic marks). Our results show that in both groups, derived bi-morphemic words were identified more accurately than mono-morphemic words, but only in the transparent, pointed script. For the un-pointed script the reverse was found, namely, that bi-morphemic words were read less accurately than mono-morphemic words, especially in second grade. Second grade children also read mono-morphemic words faster than bi-morphemic words. Finally, correlations with a standardized measure of morphological awareness were found only for second grade children, and only for bi-morphemic words. These results, showing greater morphological effects in second grade than in fifth grade children, suggest that for children raised in a language with a rich morphology, common and easily segmented morphemic units may be more beneficial for younger than for older readers.

  18. Action and perception in literacy: A common-code for spelling and reading.

    Science.gov (United States)

    Houghton, George

    2018-01-01

    There is strong evidence that reading and spelling in alphabetical scripts depend on a shared representation (common-coding). However, computational models usually treat the two skills separately, producing a wide variety of proposals as to how the identity and position of letters is represented. This article treats reading and spelling in terms of the common-coding hypothesis for perception-action coupling. Empirical evidence for common representations in spelling-reading is reviewed. A novel version of the Start-End Competitive Queuing (SE-CQ) spelling model is introduced, and tested against the distribution of positional errors in Letter Position Dysgraphia, data from intralist intrusion errors in spelling to dictation, and dysgraphia because of nonperipheral neglect. It is argued that no other current model is equally capable of explaining this range of data. To pursue the common-coding hypothesis, the representation used in SE-CQ is applied, without modification, to the coding of letter identity and position for reading and lexical access, and a lexical matching rule for the representation is proposed (Start End Position Code model, SE-PC). Simulations show the model's compatibility with benchmark findings from form priming, its ability to account for positional effects in letter identification priming and the positional distribution of perseverative intrusion errors. The model supports the view that spelling and reading use a common orthographic description, providing a well-defined account of the major features of this representation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
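    The start-end coding idea behind SE-CQ can be caricatured in a few lines. This is a deliberately simplified sketch of competitive queuing with start and end activation gradients, not Houghton's actual SE-CQ model; all parameter values are illustrative, and the weights are chosen so that noise-free recall reproduces the correct letter order:

```python
def se_gradients(n, decay=0.7):
    """Start-end position coding: the start gradient peaks at the first
    letter slot, the end gradient at the last (illustrative values)."""
    start = [decay ** i for i in range(n)]
    end = [decay ** (n - 1 - i) for i in range(n)]
    return start, end

def recall(word, w_start=1.0, w_end=0.1, decay=0.7):
    """Competitive queuing: repeatedly pick the most active letter slot,
    emit its letter, then suppress the slot (inhibition of the just-
    produced item) so the next-most-active slot wins the next round."""
    start, end = se_gradients(len(word), decay)
    act = [w_start * s + w_end * e for s, e in zip(start, end)]
    out = []
    for _ in word:
        i = act.index(max(act))
        out.append(word[i])
        act[i] = float("-inf")  # suppress the produced slot
    return "".join(out)
```

    In models of this family, serial-order errors arise when noise perturbs the activations; letters near the start and end are protected by the steepest parts of the gradients, which is one way such models capture positional error distributions.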

  19. Does Top-Down Feedback Modulate the Encoding of Orthographic Representations During Visual-Word Recognition?

    Science.gov (United States)

    Perea, Manuel; Marcet, Ana; Vergara-Martínez, Marta

    2016-09-01

    In masked priming lexical decision experiments, there is a matched-case identity advantage for nonwords (e.g., ERTAR-ERTAR faster than ertar-ERTAR), but not for words. We examined whether this advantage also emerges for words when top-down feedback is minimized. We employed a task that taps prelexical orthographic processes: the masked prime same-different task. For "same" trials, results showed faster response times for targets when preceded by a briefly presented matched-case identity prime than when preceded by a mismatched-case identity prime. Importantly, this advantage was similar in magnitude for nonwords and words. This finding constrains the interplay of bottom-up versus top-down mechanisms in models of visual-word identification.

  20. Phonological and orthographic coding in developmental dyslexia: evidence from a case study

    Directory of Open Access Journals (Sweden)

    Cláudia Cardoso Martins

    2014-09-01

    Full Text Available The present study investigates the phonological and orthographic skills of a Brazilian Portuguese-speaking child with a history of persistent reading difficulties (Age = 12 years and 8 months old). Fifteen typical readers with similar reading ability participated as controls (Mean Age = 8 years and 7 months old). Phonological and orthographic coding skills were evaluated through the ability to spell words that varied with regard to the more or less regular nature of their letter-sound correspondences. Results question the hypothesis that orthographic coding skills are superior to phonological coding skills in developmental dyslexia. Although the reading disabled child performed similarly to controls on words containing contextual rules, her performance was substantially inferior on words containing sounds whose spelling is ambiguous. Results also suggest that, relative to typical readers, dyslexic readers may have difficulty in making use of morphosyntactic regularities to spell words.

  1. “Simplify spelling, yes, but...”: Expectations, reservations, and ambivalence in teachers’ discourse on what a “good” reform of French orthography would be

    Directory of Open Access Journals (Sweden)

    2012-07-01

    Full Text Available French orthography is a source of difficulty, insecurity, pride, and ambivalent attitudes. Because of this very ambivalence, it is also the object of recurrent debates on the desirability of a reform. An international research group undertook to assess whether there is a social demand for a reform of French spelling. A questionnaire survey was conducted in six French-speaking countries among more than 1,700 teachers and future teachers of French. To varying degrees depending on the country and the professional status of the respondents, a majority of them favour a rationalization of the French writing system. Schematically, the responses obtained in Algeria and Morocco are more favourable to a reform than those collected in the northern French-speaking countries (Belgium, France, Quebec, Switzerland). Practising teachers are more willing to accept a reform than future teachers still in training, and primary-school teachers (directly and daily confronted with teaching the rules) more so than secondary-school teachers. More specifically, the present contribution examines what respondents answered when asked what a “good” spelling reform would be for them. To this end, we carried out a thematic analysis of the discourse collected in the free responses to an open question. This allowed us to identify proposals that we grouped into thematic blocks, within which two broad tendencies can be distinguished. Indeed, the respondents’ discourse shows a marked polarization around two attitudes. One, pragmatic, is expressed in statements favourable to a simplification of certain points of the written norm.

  2. KWIC Index of nuclear codes (1975 edition)

    International Nuclear Information System (INIS)

    Akanuma, Makoto; Hirakawa, Takashi

    1976-01-01

    This is a KWIC index for the 254 nuclear codes in the Nuclear Code Abstracts (1975 edition). The classification of the nuclear codes and the form of the index are the same as those used in the Computer Programme Library at Ispra, Italy. (auth.)
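    A KWIC (Key Word In Context) index of the kind described here is straightforward to generate: each title is rotated around every significant word, and the rotations are sorted by keyword. A minimal sketch (the stopword list and the `/` rotation marker are illustrative assumptions, not the Ispra library's conventions):

```python
def kwic_index(titles,
               stopwords=frozenset({"a", "an", "and", "for", "in", "of", "the"})):
    """Build a KWIC index: one entry per significant word of each title,
    keyed on that word, showing the title rotated around it."""
    entries = []
    for title in titles:
        words = title.split()
        for i, word in enumerate(words):
            if word.lower() in stopwords:
                continue          # skip non-significant words
            # rotate the title so the keyword comes first; '/' marks the wrap
            context = " ".join(words[i:] + ["/"] + words[:i])
            entries.append((word.lower(), context))
    return sorted(entries)
```

    For a title like "KWIC Index of Nuclear Codes" this yields entries under codes, index, kwic, and nuclear, each with the full title visible as context.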

  3. Alemayehu Yismaw Demamu Abstract Ethiopia overhauled its ...

    African Journals Online (AJOL)

    Abstract. Ethiopia overhauled its arbitration laws with the enactment of the Civil Code and … United Nations Commission on International Trade Law, UNCITRAL Model Law on International Commercial … investment agreement between Ethiopia and Great Britain and Northern Ireland under Article 8, Ethiopia and.

  4. Development of orthographic knowledge and the foundations of literacy a memorial festschrift for Edmund H. Henderson

    CERN Document Server

    Templeton, Shane

    2013-01-01

    This volume unites spelling and word recognition -- two areas that have largely remained theoretically and empirically distinct. Despite considerable advances in the investigation of processes underlying word perception and the acknowledgement of the seminal importance of lexical access in the reading and writing processes, to date the development and functioning of orthographic knowledge across both encoding and decoding contexts has rarely been explored. The book begins to fill this void by offering a coherent and unified articulation of the perceptual, linguistic, and cognitive features.

  5. A Sociolinguistic Study of Deviant Orthographic Representation of Graduating Students' Names in a Nigerian University

    Directory of Open Access Journals (Sweden)

    Oladunjoye J. Faleye

    2012-01-01

    Full Text Available It is habitual for graduating students of the Obafemi Awolowo University, Ile-Ife, Nigeria, to roll out the drums the very day they finish writing their final examination. Characteristic of such a ritualistic exercise, among other things, are the brand names the students coin for themselves from their original names. This study focuses on the creative rewriting of the names on such an occasion and examines the linguistic habits exhibited therein. It analyses the phonological/graphematic features that mark the rewriting of the names and discusses the sociolinguistic implications for the phenomena of social identity construction and the language contact situation. Data for the study was sourced mainly through the participant-observation technique, with a supplement of an oral interview conducted with some of the subjects between 2007 and 2009. The data was selected through a purposive random sampling technique which yielded fifty names that were considered representative of the respelling conventions. The paper employs mainly Hempenstall's (2003) Phonological Sensitivity Skills to analyse the linguistic practices in the reconfigured names and then applies Tajfel and Turner's (1979) Social Identity Theory to explain how it is that people develop a sense of membership and belonging in particular groups. The article reveals that the deviant orthographic conventions are a major fallout of youth culture with great influence from computer-mediated communication. It also shows that this linguistic experimentation greatly undermines the orthographic system of the indigenous language (Yoruba) and the cultural values embedded in the original names.

  6. FCG: a code generator for lazy functional languages

    NARCIS (Netherlands)

    Kastens, U.; Langendoen, K.G.; Hartel, Pieter H.; Pfahler, P.

    1992-01-01

    The FCG code generator produces portable code that supports efficient two-space copying garbage collection. The code generator transforms the output of the FAST compiler front end into an abstract machine code. This code explicitly uses a call stack, which is accessible to the garbage collector.
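    The design point mentioned in this abstract, a call stack that the garbage collector can inspect, is what makes two-space copying collection work: the stack supplies the root set, and everything reachable from it is copied to the new space. A toy sketch of that idea (not FCG's actual scheme, and in Python rather than abstract machine code; all class and method names are our own):

```python
class Cell:
    """A heap cell: a value plus references to other cells."""
    def __init__(self, value, refs=()):
        self.value, self.refs = value, tuple(refs)

class Machine:
    """Toy runtime whose call stack is an explicit, scannable list of
    heap references, so a two-space copying collector can find roots."""
    def __init__(self):
        self.from_space = []   # all allocated cells
        self.stack = []        # explicit call stack: the GC's root set

    def alloc(self, value, refs=()):
        cell = Cell(value, refs)
        self.from_space.append(cell)
        return cell

    def collect(self):
        """Two-space copy: move every cell reachable from the stack into
        to-space; whatever remains in from-space is garbage."""
        to_space, forwarded = [], {}
        def copy(cell):
            if id(cell) in forwarded:          # already moved (handles sharing
                return forwarded[id(cell)]     # and cycles via forwarding)
            new = Cell(cell.value)
            forwarded[id(cell)] = new
            to_space.append(new)
            new.refs = tuple(copy(r) for r in cell.refs)
            return new
        self.stack = [copy(c) for c in self.stack]
        self.from_space = to_space
```

    If the call stack were hidden inside the host runtime instead of explicit, the collector could not enumerate the roots, which is precisely why the generated abstract machine code keeps it visible.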

  7. Abstract feature codes: The building blocks of the implicit learning system.

    Science.gov (United States)

    Eberhardt, Katharina; Esser, Sarah; Haider, Hilde

    2017-07-01

    According to the Theory of Event Coding (TEC; Hommel, Müsseler, Aschersleben, & Prinz, 2001), action and perception are represented in a shared format in the cognitive system by means of feature codes. In implicit sequence learning research, it is still common to make a conceptual difference between independent motor and perceptual sequences. This supposedly independent learning takes place in encapsulated modules (Keele, Ivry, Mayr, Hazeltine, & Heuer 2003) that process information along single dimensions. These dimensions have remained underspecified so far. It is especially not clear whether stimulus and response characteristics are processed in separate modules. Here, we suggest that feature dimensions as they are described in the TEC should be viewed as the basic content of modules of implicit learning. This means that the modules process all stimulus and response information related to certain feature dimensions of the perceptual environment. In 3 experiments, we investigated by means of a serial reaction time task the nature of the basic units of implicit learning. As a test case, we used stimulus location sequence learning. The results show that a stimulus location sequence and a response location sequence cannot be learned without interference (Experiment 2) unless one of the sequences can be coded via an alternative, nonspatial dimension (Experiment 3). These results support the notion that spatial location is one module of the implicit learning system and, consequently, that there are no separate processing units for stimulus versus response locations. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. The Coding Process and Its Challenges

    Directory of Open Access Journals (Sweden)

    Judith A. Holton, Ph.D.

    2010-02-01

    Full Text Available Coding is the core process in classic grounded theory methodology. It is through coding that the conceptual abstraction of data and its reintegration as theory takes place. There are two types of coding in a classic grounded theory study: substantive coding, which includes both open and selective coding procedures, and theoretical coding. In substantive coding, the researcher works with the data directly, fracturing and analysing it, initially through open coding for the emergence of a core category and related concepts and then subsequently through theoretical sampling and selective coding of data to theoretically saturate the core and related concepts. Theoretical saturation is achieved through constant comparison of incidents (indicators in the data to elicit the properties and dimensions of each category (code. This constant comparing of incidents continues until the process yields the interchangeability of indicators, meaning that no new properties or dimensions are emerging from continued coding and comparison. At this point, the concepts have achieved theoretical saturation and the theorist shifts attention to exploring the emergent fit of potential theoretical codes that enable the conceptual integration of the core and related concepts to produce hypotheses that account for relationships between the concepts thereby explaining the latent pattern of social behaviour that forms the basis of the emergent theory. The coding of data in grounded theory occurs in conjunction with analysis through a process of conceptual memoing, capturing the theorist’s ideation of the emerging theory. Memoing occurs initially at the substantive coding level and proceeds to higher levels of conceptual abstraction as coding proceeds to theoretical saturation and the theorist begins to explore conceptual reintegration through theoretical coding.

  9. An orthographic effect in phoneme processing, and its limitations

    Directory of Open Access Journals (Sweden)

    Anne Cutler

    2012-02-01

    Full Text Available To examine whether lexically stored knowledge about spelling influences phoneme evaluation, we conducted three experiments with a low-level phonetic judgement task: phoneme goodness rating. In each experiment, listeners heard phonetic tokens varying along a continuum centred on /s/, occurring finally in isolated word or nonword tokens. An effect of spelling appeared in Experiment 1: Native English speakers’ goodness ratings for the best /s/ tokens were significantly higher in words spelled with S (e.g., bless) than in words spelled with C (e.g., voice). No such difference appeared when nonnative speakers rated the same materials in Experiment 2, indicating that the difference could not be due to acoustic characteristics of the S- versus C-words. In Experiment 3, nonwords with lexical neighbours consistently spelled with S (e.g., pless) versus with C (e.g., floice) failed to elicit orthographic neighbourhood effects; no significant difference appeared in native English speakers’ ratings for the S-consistent versus the C-consistent sets. Obligatory influence of lexical knowledge on phonemic processing would have predicted such neighbourhood effects; the findings are thus better accommodated by models in which phonemic decisions draw strategically upon lexical information.

  10. An abstract model of rogue code insertion into radio frequency wireless networks. The effects of computer viruses on the Program Management Office

    Science.gov (United States)

    Feudo, Christopher V.

    1994-04-01

    This dissertation demonstrates that inadequately protected wireless LANs are more vulnerable to rogue program attack than traditional LANs. Wireless LANs not only run the same risks as traditional LANs, but they also run additional risks associated with an open transmission medium. Intruders can scan radio waves and, given enough time and resources, intercept, analyze, decipher, and reinsert data into the transmission medium. This dissertation describes the development and instantiation of an abstract model of the rogue code insertion process into a DOS-based wireless communications system using radio frequency (RF) atmospheric signal transmission. The model is general enough to be applied to widely used target environments such as UNIX, Macintosh, and DOS operating systems. The methodology and three modules, the prober, activator, and trigger modules, to generate rogue code and insert it into a wireless LAN were developed to illustrate the efficacy of the model. Also incorporated into the model are defense measures against remotely introduced rogue programs and a cost-benefit analysis that determined that such defenses for a specific environment were cost-justified.

  11. On the accessibility of phonological, orthographic, and semantic aspects of second language vocabulary learning and their relationship with spatial and linguistic intelligences

    Directory of Open Access Journals (Sweden)

    Abbas Ali Zarei

    2015-01-01

    Full Text Available The present study was an attempt to investigate the differences in the accessibility of the phonological, semantic, and orthographic aspects of words in L2 vocabulary learning. For this purpose, a sample of 119 Iranian intermediate-level EFL students in a private language institute in Karaj was selected. All of the participants received the same instructional treatment. At the end of the experimental period, three tests were administered based on the previously taught words. A subset of Gardner’s (1983) Multiple Intelligences questionnaire was also used for data collection. A repeated measures one-way ANOVA procedure was used to analyze the obtained data. The results showed significant differences in the accessibility of the phonological, semantic, and orthographic aspects of words in second language vocabulary learning. Moreover, to investigate the relationships between spatial and linguistic intelligences and the afore-mentioned aspects of lexical knowledge, a correlational analysis was used. No significant relationships were found between spatial and linguistic intelligences and the three aspects of lexical knowledge. These findings may have theoretical and pedagogical implications for researchers, teachers, and learners.

  12. Generation of Efficient High-Level Hardware Code from Dataflow Programs

    OpenAIRE

    Siret , Nicolas; Wipliez , Matthieu; Nezan , Jean François; Palumbo , Francesca

    2012-01-01

    High-level synthesis (HLS) aims at reducing the time-to-market by providing an automated design process that interprets and compiles high-level abstraction programs into hardware. However, HLS tools still face limitations regarding the performance of the generated code, due to the difficulties of compiling input imperative languages into efficient hardware code. Moreover, the hardware code generated by the HLS tools is usually target-dependent and at a low level of abstraction (i.e. gate-level...

  13. Learning Orthographic Structure With Sequential Generative Neural Networks.

    Science.gov (United States)

    Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco

    2016-04-01

    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain. Copyright © 2015 Cognitive Science Society, Inc.
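    Among the comparison models in this abstract, the n-gram baseline is the simplest to sketch. The following illustrative bigram model (a hypothetical miniature, not the authors' implementation) learns letter-transition counts from a word corpus and samples pseudowords from them, which is the same predict-the-next-letter / generate-new-strings pairing the sequential RBM is evaluated on:

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count letter-bigram transitions, with '^' and '$' as word boundaries."""
    counts = defaultdict(lambda: defaultdict(int))
    for word in corpus:
        padded = "^" + word + "$"
        for a, b in zip(padded, padded[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, rng=random.Random(0), max_len=10):
    """Sample a pseudoword letter by letter from the transition counts."""
    out, cur = [], "^"
    while len(out) < max_len:
        nxt = rng.choices(list(counts[cur]),
                          weights=list(counts[cur].values()))[0]
        if nxt == "$":            # sampled the end-of-word boundary
            break
        out.append(nxt)
        cur = nxt
    return "".join(out)
```

    Because a bigram model conditions only on the previous letter, it cannot capture the longer-range graphotactic regularities that the abstract argues sequential RBMs learn; its appeal is that it makes the notion of "probabilistic model of letter sequences" concrete in a dozen lines.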

  14. Allograph priming is based on abstract letter identities: Evidence from Japanese kana.

    Science.gov (United States)

    Kinoshita, Sachiko; Schubert, Teresa; Verdonschot, Rinus G

    2018-04-23

    It is well-established that allographs like the uppercase and lowercase forms of the Roman alphabet (e.g., a and A) map onto the same "abstract letter identity," orthographic representations that are independent of the visual form. Consistent with this, in the allograph match task ("Are 'a' and 'A' the same letter?"), priming by a masked letter prime is equally robust for visually dissimilar prime-target pairs (e.g., d and D) and similar pairs (e.g., c and C). However, in principle this pattern of priming is also consistent with the possibility that allograph priming is purely phonological, based on the letter name. Because different allographic forms of the same letter, by definition, share a letter name, it is impossible to rule out this possibility a priori. In the present study, we investigated the influence of shared letter names by taking advantage of the fact that Japanese is written in two distinct writing systems, syllabic kana-that has two parallel forms, hiragana and katakana-and logographic kanji. Using the allograph match task, we tested whether a kanji prime with the same pronunciation as the target kana (e.g., - い, both pronounced /i/) produces the same amount of priming as a kana prime in the opposite kana form (e.g., イ- い). We found that the kana primes produced substantially greater priming than the phonologically identical kanji prime, which we take as evidence that allograph priming is based on abstract kana identity, not purely phonology. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  15. 49. Annual meeting of the Deutsche Gesellschaft fuer Neuroradiologie. Abstracts

    International Nuclear Information System (INIS)

    2014-01-01

    The conference proceedings of the 49th Annual Meeting of the Deutsche Gesellschaft fuer Neuroradiologie contain abstracts on the following issues: neuro-oncological imaging, multimodal imaging concepts, subcranial imaging, the spinal cord, and interventional neuroradiology.

  16. HIFSuite: Tools for HDL Code Conversion and Manipulation

    Directory of Open Access Journals (Sweden)

    Bombieri Nicola

    2010-01-01

    Full Text Available Abstract HIFSuite is a set of tools and application programming interfaces (APIs) that provide support for modeling and verification of HW/SW systems. The core of HIFSuite is the HDL Intermediate Format (HIF) language, upon which a set of front-end and back-end tools have been developed to allow the conversion of HDL code into HIF code and vice versa. HIFSuite allows designers to manipulate and integrate heterogeneous components implemented using different hardware description languages (HDLs). Moreover, HIFSuite includes tools, which rely on the HIF APIs, for manipulating HIF descriptions in order to support code abstraction/refinement and post-refinement verification.

  17. The effect of decreased interletter spacing on orthographic processing.

    Science.gov (United States)

    Montani, Veronica; Facoetti, Andrea; Zorzi, Marco

    2015-06-01

    There is growing interest in how perceptual factors such as the spacing between letters within words modulate performance in visual word recognition and reading aloud. Extra-large letter spacing can strongly improve the reading performance of dyslexic children, and a small increase with respect to the standard spacing seems beneficial even for skilled word recognition in adult readers. In the present study we examined the effect of decreased letter spacing on perceptual identification and lexical decision tasks. Identification in the decreased spacing condition was slower than identification of normally spaced strings, thereby confirming that the reciprocal interference among letters located in close proximity (crowding) poses critical constraints on visual word processing. Importantly, the effect of spacing was not modulated by string length, suggesting that the locus of the spacing effect is at the level of letter detectors. Moreover, the processing of crowded letters was facilitated by top-down support from orthographic lexical representation as indicated by the fact that decreased spacing affected pseudowords significantly more than words. Conversely, in the lexical decision task only word responses were affected by the spacing manipulation. Overall, our findings support the hypothesis that increased crowding is particularly harmful for phonological decoding, thereby adversely affecting reading development in dyslexic children.

  18. Diagnósticos sobre problemas ortográficos. Una experiencia educativa / Diagnosis of orthographic problems. An educational experience

    Directory of Open Access Journals (Sweden)

    Charo Ríos

    2010-06-01

    Full Text Available Resumen: El estudio de la ortografía resulta a veces tan árido para los alumnos como desalentador para los profesores. El desánimo cunde por la sensación de enfrentarse a un enmarañado reglamento. Sin embargo, la ortografía se puede estructurar a fin de discernir sus problemas concretos y así poder superarlos más fácilmente. Nuestro proyecto presenta un método claro para detectar esos problemas ortográficos. Para ello hemos diseñado un “Informe sobre problemas ortográficos detectados”, una plantilla personalizada y diacrónica en la que aparecen, estructurados, los problemas ortográficos más habituales que el profesor debe detectar y señalar. Ese documento remite decimalmente a un sencillo y práctico “Resumen de las reglas ortográficas”. Summary: The study of spelling is sometimes as arid for the pupils as it is discouraging for the teachers. The sensation of facing an entangled set of rules causes discouragement. However, spelling can be structured in order to discern its specific problems and overcome them more easily. Our project presents a clear method to detect orthographic problems. With this purpose, we have designed a “Report on spelling problems detected”, a personalized report containing the most habitual orthographic problems that the teacher must detect. This document refers to a practical and simple “Summary of spelling rules”.

  19. Source Code Stylometry Improvements in Python

    Science.gov (United States)

    2017-12-14

    [List-of-figures residue: “… grant (Caliskan-Islam et al. 2015) … 1”; “Fig. 2 Corresponding abstract syntax tree from de-anonymizing programmers’ paper (Caliskan-Islam et …)”.] …person can be identified via their handwriting or an author identified by their style of prose, programmers can be identified by their code… Provided a labelled training set of code samples (example in Fig. 1), the techniques used in stylometry can identify the author of a piece of code or even…

  20. The Representation of Abstract Words: Why Emotion Matters

    Science.gov (United States)

    Kousta, Stavroula-Thaleia; Vigliocco, Gabriella; Vinson, David P.; Andrews, Mark; Del Campo, Elena

    2011-01-01

    Although much is known about the representation and processing of concrete concepts, knowledge of what abstract semantics might be is severely limited. In this article we first address the adequacy of the 2 dominant accounts (dual coding theory and the context availability model) put forward in order to explain representation and processing…

  1. Detecting non-coding selective pressure in coding regions

    Directory of Open Access Journals (Sweden)

    Blanchette Mathieu

    2007-02-01

    Full Text Available Abstract Background Comparative genomics approaches, where orthologous DNA regions are compared and inter-species conserved regions are identified, have proven extremely powerful for identifying non-coding regulatory regions located in intergenic or intronic regions. However, non-coding functional elements can also be located within coding regions, as is common for exonic splicing enhancers, some transcription factor binding sites, and RNA secondary structure elements affecting mRNA stability, localization, or translation. Since these functional elements are located in regions that are themselves highly conserved because they code for a protein, they generally escape detection by comparative genomics approaches. Results We introduce a comparative genomics approach for detecting non-coding functional elements located within coding regions. Codon evolution is modeled as a mixture of codon substitution models, where each component of the mixture describes the evolution of codons under a specific type of coding selective pressure. We show how to compute the posterior distribution of the entropy and parsimony scores under this null model of codon evolution. The method is applied to a set of growth hormone 1 orthologous mRNA sequences and a known exonic splicing element is detected. The analysis of a set of CORTBP2 orthologous genes reveals a region of several hundred base pairs under strong non-coding selective pressure whose function remains unknown. Conclusion Non-coding functional elements, in particular those involved in post-transcriptional regulation, are likely to be much more prevalent than is currently known. With the numerous genome sequencing projects underway, comparative genomics approaches like the one proposed here are likely to become increasingly powerful at detecting such elements.
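    The entropy score at the heart of this approach can be previewed with a toy example: at an aligned codon position, unusually low entropy across species (one synonymous codon used exclusively where several are available) hints at constraint beyond the protein-coding one. The sketch below is only a minimal illustration of that intuition, not the authors' mixture-of-substitution-models method; the codon data are invented.

```python
from collections import Counter
from math import log2

def column_entropy(codons):
    """Shannon entropy (bits) of the codon distribution at one aligned position."""
    counts = Counter(codons)
    n = len(codons)
    return sum(-(c / n) * log2(c / n) for c in counts.values())

# Toy alignment: one codon per species at a single orthologous position.
# A column under strong extra (non-coding) constraint stays identical across
# species even among synonymous options; a neutral column varies freely.
constrained = ["CTG", "CTG", "CTG", "CTG"]  # fully conserved
neutral     = ["CTG", "CTA", "CTC", "CTT"]  # synonymous Leu codons, all used

print(column_entropy(constrained))  # 0.0
print(column_entropy(neutral))      # 2.0
```

    Zero entropy at a position where synonymous variation would be silent is exactly the kind of signal the paper's null model is designed to flag.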

  2. Compilation of the nuclear codes available in CTA

    International Nuclear Information System (INIS)

    D'Oliveira, A.B.; Moura Neto, C. de; Amorim, E.S. do; Ferreira, W.J.

    1979-07-01

    The present work is a compilation of some nuclear codes available in the Divisao de Estudos Avancados of the Instituto de Atividades Espaciais (EAV/IAE/CTA). The codes are organized according to the classification given by the Argonne National Laboratory. For each code the author, institution of origin, abstract, programming language and existing bibliography are given. (Author) [pt

  3. 6. Sınıf Öğrencilerinin Yazım Yanlışları Sıklığı ve Yazım Yanlışlarının Nedenlerine İlişkin Öğretmen Görüşleri 6th Grade Students’ Frequency Of Orthographic Mistakes And Teachers’ View About The Reasons For Orthographic Mistakes

    Directory of Open Access Journals (Sweden)

    Ahmet AKKAYA

    2013-07-01

    Full Text Available Orthographic mistakes are an important source of communication problems in written expression, and correct spelling is generally one of the written-expression skills that educational institutions attempt to teach students. This paper aims to reveal the frequency of orthographic mistakes in 6th grade students’ exam papers and to find out Turkish language teachers’ views about the reasons for these mistakes. To do so, 336 exam papers of 6th grade students, collected from two schools in Adıyaman city center and a village school, have been examined, and 4142 orthographic mistakes have been identified. It is seen that students do not misuse symbols, acronyms/abbreviations, the circumflex or proper nouns, do not miswrite complementary verb conjugations, and accurately write the words formed by such sound events as vowel shortening and epenthesis. The 6th grade students made mistakes mostly in capitalization; the writing of consonants and vowels, syncopation, and the conjunction ‘de/da’ (meaning ‘also’ in English) are among the other most repeated mistakes. In addition, the open-ended question “What are the 6th grade students’ reasons for orthographic mistakes?” was asked to 16 Turkish language teachers, and content analysis was employed to analyze the acquired data. The teachers responded that the 6th grade students’ orthographic mistakes are caused by lack of interest and attention, deficiency in basic language acquisition, lack of proper and efficient spoken Turkish outside school, exam types or teachers’ exam evaluations, and deficiency of either teachers or students in the efficient use of sources about orthographic rules. Besides, this paper also suggests to identify the orthographic mistakes of students other than 6th grade, present the orthographic mistakes

  4. Planning for Evolution in a Production Environment: Migration from a Legacy Geometry Code to an Abstract Geometry Modeling Language in STAR

    Science.gov (United States)

    Webb, Jason C.; Lauret, Jerome; Perevoztchikov, Victor

    2012-12-01

    Increasingly detailed descriptions of complex detector geometries are required for the simulation and analysis of today's high-energy and nuclear physics experiments. As new tools for the representation of geometry models become available during the course of an experiment, a fundamental challenge arises: how best to migrate from legacy geometry codes developed over many runs to the new technologies, such as the ROOT/TGeo [1] framework, without losing touch with years of development, tuning and validation. One approach, which has been discussed within the community for a number of years, is to represent the geometry model in a higher-level language independent of the concrete implementation of the geometry. The STAR experiment has used this approach to successfully migrate its legacy GEANT 3-era geometry to an Abstract geometry Modelling Language (AgML), which allows us to create both native GEANT 3 and ROOT/TGeo implementations. The language is supported by parsers and a C++ class library which enable the automated conversion of the original source code to AgML, support export back to the original AgSTAR[5] representation, and create the concrete ROOT/TGeo geometry implementation used by our track reconstruction software. In this paper we present our approach, design and experience, and demonstrate physical consistency between the original AgSTAR and new AgML geometry representations.

  5. Structured LDPC Codes over Integer Residue Rings

    Directory of Open Access Journals (Sweden)

    Mo Elisa

    2008-01-01

    Full Text Available Abstract This paper presents a new class of low-density parity-check (LDPC) codes over integer residue rings, represented by regular, structured Tanner graphs. These graphs are constructed using Latin squares defined over a multiplicative group of a Galois ring, rather than a finite field. Our approach yields codes for a wide range of code rates and, more importantly, codes whose minimum pseudocodeword weights equal their minimum Hamming distances. Simulation studies show that these structured codes, when transmitted using matched signal sets over an additive white Gaussian noise channel, can outperform their random counterparts of similar length and rate.
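    The Latin-square ingredient can be previewed with a standard fact: the Cayley (operation) table of any finite group is a Latin square. The sketch below uses the additive group Z_n rather than the multiplicative group of a Galois ring used in the paper, so it illustrates only the general idea, not the actual code construction.

```python
def cayley_table(n):
    """Cayley table of the additive group Z_n: entry (i, j) = (i + j) mod n."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def is_latin_square(square):
    """Check that every symbol appears exactly once in each row and column."""
    n = len(square)
    symbols = set(range(n))
    rows_ok = all(set(row) == symbols for row in square)
    cols_ok = all({square[i][j] for i in range(n)} == symbols for j in range(n))
    return rows_ok and cols_ok

L = cayley_table(5)
print(is_latin_square(L))  # True
```

    In constructions of this family, the rows/columns of such squares index check and variable nodes so that the resulting Tanner graph is regular and structured rather than random.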

  6. Utilization of chemical abstracts service (CAS) data bases

    International Nuclear Information System (INIS)

    Ehrhardt, F.; Gesellschaft Deutscher Chemiker, Berlin

    1979-04-01

    A method is developed describing the economic utilization of the Chemical Abstracts Service data bases CA Condensates and Chemical Abstracts Subject Index Alert in order to supplement the data base of the IDC-Inorganica documentation system built up to meet the peculiarities of inorganic chemistry. The method consists of EDP programs and processes in which special authority files for coded compound and subject entries play an important role. One of the advantages of the method is that the intellectual effort necessary to create such a data base is reduced to a minimum. The authority files may also be used for other purposes. (orig.) 891 WB 892 MB [de

  7. Health physics research abstracts no. 11

    International Nuclear Information System (INIS)

    1984-07-01

    The present issue No. 11 of Health Physics Research Abstracts is the continuation of a series of Bulletins published by the Agency since 1967. They collect reports from Member States on Health Physics research in progress or just completed. The main aim in issuing such reports is to draw attention to work that is about to be published and to enable interested scientists to obtain further information through direct correspondence with the investigators. The attention of users of this publication is drawn to the fact that abstracts of published documents on Health Physics are published eventually in INIS Atomindex, which is one of the output products of the Agency's International Nuclear Information System. The present issue contains 235 reports received up to December 1983 from the following Member States. In parentheses the country's ISO code and number of reports are given

  8. Tristan code and its application

    Science.gov (United States)

    Nishikawa, K.-I.

    Since TRISTAN: The 3-D Electromagnetic Particle Code was introduced in 1990, it has been used for many applications including the simulations of global solar wind-magnetosphere interaction. The most essential ingredients of this code have been published in the ISSS-4 book. In this abstract we describe some of the issues and an application of this code for the study of global solar wind-magnetosphere interaction including a substorm study. The basic code (tristan.f) for the global simulation and a local simulation of reconnection with a Harris model (issrec2.f) are available at http:/www.physics.rutger.edu/˜kenichi. For beginners the code (isssrc2.f) with simpler boundary conditions is a suitable starting point for running simulations. The future of global particle simulations for a global geospace general circulation (GGCM) model with predictive capability (for the Space Weather Program) is discussed.

  9. ORTHOGRAPHIC INTERFERENCE and THE TEACHING OF BRITISH PRONUNCIATION TO TURKISH LEARNERS

    Directory of Open Access Journals (Sweden)

    Prof. Dr.Sinan Bayraktaroğlu

    2008-10-01

    Full Text Available This article is the report of an investigation of pronunciation difficulties of Turkish speakers/learners of English which are due to differences in the sound-letter representations in the orthographies of the two languages, namely “orthographic interference”. These difficulties are different in nature from those arising from differences in the sound systems of Turkish and English. While Turkish orthography is to a large extent phonemic, i.e. employing a one-to-one letter-sound correspondence (with few exceptions such as k - kâr- ɡ - yegane- gavur, etc.), English orthography, on the other hand, represents 46 sounds of the spoken language with 102 single letters or groups of letters in the written language. Such actual difficulties arising from the differences in the orthographic sound-letter representations of Turkish and English are classified, evaluated, and their sources explained through a detailed phonetic analysis applying the research methods of “contrastive analysis” and “error analysis”, which are effective approaches in the field of Applied Linguistics and Foreign Language Learning. For the different categories of difficulties, corrective exercises are recommended for the teaching and learning of English pronunciation by Turkish students.

  10. Abstracts of digital computer code packages assembled by the Radiation Shielding Information Center

    International Nuclear Information System (INIS)

    Carter, B.J.; Maskewitz, B.F.

    1985-04-01

    This publication, ORNL/RSIC-13, Volumes I to III Revised, has resulted from an internal audit of the first 168 packages of computing technology in the Computer Codes Collection (CCC) of the Radiation Shielding Information Center (RSIC). It replaces the earlier three documents published as single volumes between 1966 to 1972. A significant number of the early code packages were considered to be obsolete and were removed from the collection in the audit process and the CCC numbers were not reassigned. Others not currently being used by the nuclear R and D community were retained in the collection to preserve technology not replaced by newer methods, or were considered of potential value for reference purposes. Much of the early technology, however, has improved through developer/RSIC/user interaction and continues at the forefront of the advancing state-of-the-art

  11. Abstracts of digital computer code packages assembled by the Radiation Shielding Information Center

    Energy Technology Data Exchange (ETDEWEB)

    Carter, B.J.; Maskewitz, B.F.

    1985-04-01

    This publication, ORNL/RSIC-13, Volumes I to III Revised, has resulted from an internal audit of the first 168 packages of computing technology in the Computer Codes Collection (CCC) of the Radiation Shielding Information Center (RSIC). It replaces the earlier three documents published as single volumes between 1966 to 1972. A significant number of the early code packages were considered to be obsolete and were removed from the collection in the audit process and the CCC numbers were not reassigned. Others not currently being used by the nuclear R and D community were retained in the collection to preserve technology not replaced by newer methods, or were considered of potential value for reference purposes. Much of the early technology, however, has improved through developer/RSIC/user interaction and continues at the forefront of the advancing state-of-the-art.

  12. The Influence of Orthographic Neighborhood Density and Word Frequency on Visual Word Recognition: Insights from RT Distributional Analyses

    Directory of Open Access Journals (Sweden)

    Stephen Wee Hun eLim

    2016-03-01

    Full Text Available The effects of orthographic neighborhood density and word frequency in visual word recognition were investigated using distributional analyses of response latencies in visual lexical decision. Main effects of density and frequency were observed in mean latencies. Distributional analyses, in addition, revealed a density x frequency interaction: for low-frequency words, density effects were mediated predominantly by distributional shifting whereas for high-frequency words, density effects were absent except at the slower RTs, implicating distributional skewing. The present findings suggest that density effects in low-frequency words reflect processes involved in early lexical access, while the effects observed in high-frequency words reflect late postlexical checking processes.
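    Distributional shifting versus skewing is commonly described with the ex-Gaussian decomposition of RT distributions (a Gaussian with mean mu and SD sigma, convolved with an exponential with mean tau): a pure shift moves fast and slow responses alike (a change in mu), whereas skewing stretches only the slow tail (a change in tau). The following sketch simulates that contrast with invented parameters; it illustrates the terminology, not the analysis reported in the paper.

```python
import random

def ex_gaussian(mu, sigma, tau, n, seed=0):
    """Sample n ex-Gaussian RTs: Gaussian(mu, sigma) + Exponential(mean tau)."""
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) + rng.expovariate(1 / tau) for _ in range(n)]

def quantiles(xs, qs=(0.1, 0.5, 0.9)):
    """Crude empirical quantiles of a sample."""
    xs = sorted(xs)
    return [xs[int(q * (len(xs) - 1))] for q in qs]

base    = ex_gaussian(mu=500, sigma=40, tau=60, n=20000)
shifted = ex_gaussian(mu=540, sigma=40, tau=60, n=20000, seed=1)   # mu + 40
skewed  = ex_gaussian(mu=500, sigma=40, tau=100, n=20000, seed=2)  # tau + 40

# Shifting moves fast and slow quantiles alike; skewing mostly moves the slow tail.
print(quantiles(base), quantiles(shifted), quantiles(skewed))
```

    Plotting or tabulating the three quantile sets makes the signature patterns visible: the shifted set is displaced across all quantiles, while the skewed set departs from baseline mainly at the slow end.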

  13. Mesh-based parallel code coupling interface

    Energy Technology Data Exchange (ETDEWEB)

    Wolf, K.; Steckel, B. (eds.) [GMD - Forschungszentrum Informationstechnik GmbH, St. Augustin (DE). Inst. fuer Algorithmen und Wissenschaftliches Rechnen (SCAI)

    2001-04-01

    MpCCI (mesh-based parallel code coupling interface) is an interface for multidisciplinary simulations. It provides industrial end-users as well as commercial code-owners with the facility to combine different simulation tools in one environment. Thereby new solutions for multidisciplinary problems will be created. This opens new application dimensions for existent simulation tools. This Book of Abstracts gives a short overview about ongoing activities in industry and research - all presented at the 2{sup nd} MpCCI User Forum in February 2001 at GMD Sankt Augustin. (orig.) [German] MpCCI (mesh-based parallel code coupling interface) definiert eine Schnittstelle fuer multidisziplinaere Simulationsanwendungen. Sowohl industriellen Anwender als auch kommerziellen Softwarehersteller wird mit MpCCI die Moeglichkeit gegeben, Simulationswerkzeuge unterschiedlicher Disziplinen miteinander zu koppeln. Dadurch entstehen neue Loesungen fuer multidisziplinaere Problemstellungen und fuer etablierte Simulationswerkzeuge ergeben sich neue Anwendungsfelder. Dieses Book of Abstracts bietet einen Ueberblick ueber zur Zeit laufende Arbeiten in der Industrie und in der Forschung, praesentiert auf dem 2{sup nd} MpCCI User Forum im Februar 2001 an der GMD Sankt Augustin. (orig.)

  14. Concreteness effects in semantic processing: ERP evidence supporting dual-coding theory.

    Science.gov (United States)

    Kounios, J; Holcomb, P J

    1994-07-01

    Dual-coding theory argues that processing advantages for concrete over abstract (verbal) stimuli result from the operation of 2 systems (i.e., imaginal and verbal) for concrete stimuli, rather than just 1 (for abstract stimuli). These verbal and imaginal systems have been linked with the left and right hemispheres of the brain, respectively. Context-availability theory argues that concreteness effects result from processing differences in a single system. The merits of these theories were investigated by examining the topographic distribution of event-related brain potentials in 2 experiments (lexical decision and concrete-abstract classification). The results were most consistent with dual-coding theory. In particular, different scalp distributions of an N400-like negativity were elicited by concrete and abstract words.

  15. An introduction to abstract algebra

    CERN Document Server

    Robinson, Derek JS

    2003-01-01

    This is a high-level introduction to abstract algebra which is aimed at readers whose interests lie in mathematics and in the information and physical sciences. In addition to introducing the main concepts of modern algebra, the book contains numerous applications, which are intended to illustrate the concepts and to convince the reader of the utility and relevance of algebra today. In particular, applications to Polya coloring theory, Latin squares, Steiner systems and error-correcting codes are described. Another feature of the book is that group theory and ring theory are carried further than is often done at this level. There is ample material here for a two-semester course in abstract algebra. The importance of proof is stressed and rigorous proofs of almost all results are given. But care has been taken to lead the reader through the proofs by gentle stages. There are nearly 400 problems, of varying degrees of difficulty, to test the reader's skill and progress. The book should be suitable for students ...

  16. Single neurons in prefrontal cortex encode abstract rules.

    Science.gov (United States)

    Wallis, J D; Anderson, K C; Miller, E K

    2001-06-21

    The ability to abstract principles or rules from direct experience allows behaviour to extend beyond specific circumstances to general situations. For example, we learn the 'rules' for restaurant dining from specific experiences and can then apply them in new restaurants. The use of such rules is thought to depend on the prefrontal cortex (PFC) because its damage often results in difficulty in following rules. Here we explore its neural basis by recording from single neurons in the PFC of monkeys trained to use two abstract rules. They were required to indicate whether two successively presented pictures were the same or different depending on which rule was currently in effect. The monkeys performed this task with new pictures, thus showing that they had learned two general principles that could be applied to stimuli that they had not yet experienced. The most prevalent neuronal activity observed in the PFC reflected the coding of these abstract rules.

  17. Decoding LDPC Convolutional Codes on Markov Channels

    Directory of Open Access Journals (Sweden)

    Kashyap Manohar

    2008-01-01

    Full Text Available Abstract This paper describes a pipelined iterative technique for joint decoding and channel state estimation of LDPC convolutional codes over Markov channels. Example designs are presented for the Gilbert-Elliott discrete channel model. We also compare the performance and complexity of our algorithm against joint decoding and state estimation of conventional LDPC block codes. Complexity analysis reveals that our pipelined algorithm reduces the number of operations per time step compared to LDPC block codes, at the expense of increased memory and latency. This tradeoff is favorable for low-power applications.
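    The Gilbert-Elliott model referred to above is a two-state Markov chain (a "good" and a "bad" state, each with its own bit-error probability) that produces bursty errors. A minimal simulation sketch follows, with invented transition and error probabilities; it shows the channel only, not the paper's joint decoding and state-estimation algorithm.

```python
import random

def gilbert_elliott(bits, p_gb, p_bg, e_good, e_bad, seed=0):
    """Pass bits through a two-state (good/bad) Markov channel.

    p_gb = P(good -> bad), p_bg = P(bad -> good);
    e_good / e_bad = bit-error probability while in each state.
    """
    rng = random.Random(seed)
    bad = False
    out = []
    for b in bits:
        if bad:
            if rng.random() < p_bg:
                bad = False
        elif rng.random() < p_gb:
            bad = True
        err = rng.random() < (e_bad if bad else e_good)
        out.append(b ^ int(err))
    return out

# Stationary P(bad) = p_gb / (p_gb + p_bg) = 1/11, so the long-run bit-error
# rate is roughly (1/11)*0.1 + (10/11)*0.001, i.e. about 1%, arriving in bursts.
sent = [0] * 200000
received = gilbert_elliott(sent, p_gb=0.01, p_bg=0.1, e_good=0.001, e_bad=0.1)
print(sum(received) / len(received))
```

    Because errors cluster while the chain sits in the bad state, a decoder that also estimates the channel state (as in the paper) can exploit the burst structure rather than treating errors as independent.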

  18. The characteristics of Chinese orthographic neighborhood size effect for developing readers.

    Directory of Open Access Journals (Sweden)

    Jing Zhao

    Full Text Available Orthographic neighborhood size (N size) effects in Chinese character naming have been studied in adults. In the present study, we aimed to explore the developmental characteristics of the Chinese N size effect. One hundred and seventeen students (40 from the 3rd grade, mean age 9 years; 40 from the 5th grade, mean age 11 years; 37 from the 7th grade, mean age 13 years) were recruited in the study. A Chinese character naming task was adopted to elucidate the development of the N size effect. Reaction times and error rates were recorded. Results showed that children in the 3rd grade named characters from large neighborhoods faster than those from small neighborhoods, revealing a facilitatory N size effect; the 5th graders showed a null N size effect; while the 7th graders showed an inhibitory N size effect, with longer reaction times for characters from large neighborhoods than for those from small neighborhoods. The change from facilitation to inhibition of the neighborhood size effect across grades suggests a transition from broadly tuned to finely tuned lexical representation in reading development, and possible inhibition from higher-frequency neighbors for higher graders.

  19. Fire Technology Abstracts, volume 4, issue 1, August, 1981

    Science.gov (United States)

    Holtschlag, L. J.; Kuvshinoff, B. W.; Jernigan, J. B.

    This bibliography contains over 400 citations with abstracts addressing various aspects of fire technology. Subjects cover the dynamics of fire, behavior and properties of materials, fire modeling and test burns, fire protection, fire safety, fire service organization, apparatus and equipment, fire prevention, suppression, planning, human behavior, medical problems, codes and standards, hazard identification, safe handling of materials, insurance, economics of loss and prevention, and more.

  20. An ERP study of recognition memory for concrete and abstract pictures in school-aged children.

    Science.gov (United States)

    Boucher, Olivier; Chouinard-Leclaire, Christine; Muckle, Gina; Westerlund, Alissa; Burden, Matthew J; Jacobson, Sandra W; Jacobson, Joseph L

    2016-08-01

    Recognition memory for concrete, nameable pictures is typically faster and more accurate than for abstract pictures. A dual-coding account for these findings suggests that concrete pictures are processed into verbal and image codes, whereas abstract pictures are encoded in image codes only. Recognition memory relies on two successive and distinct processes, namely familiarity and recollection. Whether these two processes are similarly or differently affected by stimulus concreteness remains unknown. This study examined the effect of picture concreteness on visual recognition memory processes using event-related potentials (ERPs). In a sample of children involved in a longitudinal study, participants (N=96; mean age=11.3 years) were assessed on a continuous visual recognition memory task in which half the pictures were easily nameable, everyday concrete objects, and the other half were three-dimensional abstract, sculpture-like objects. Behavioral performance and ERP correlates of familiarity and recollection (respectively, the FN400 and P600 repetition effects) were measured. Behavioral results indicated faster and more accurate identification of concrete pictures as "new" or "old" (i.e., previously displayed) compared to abstract pictures. ERPs were characterized by a larger repetition effect, on the P600 amplitude, for concrete than for abstract images, suggesting a graded recollection process dependent on the type of material to be recollected. Topographic differences were observed within the FN400 latency interval, especially over anterior-inferior electrodes, with the repetition effect more pronounced and localized over the left hemisphere for concrete stimuli, potentially reflecting different neural processes underlying early processing of verbal/semantic and visual material in memory. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. From Abstract Art to Abstracted Artists

    Directory of Open Access Journals (Sweden)

    Romi Mikulinsky

    2016-11-01

    Full Text Available What lineage connects early abstract films and machine-generated YouTube videos? Hans Richter’s famous piece Rhythmus 21 is considered to be the first abstract film in the experimental tradition. The Webdriver Torso YouTube channel is composed of hundreds of thousands of machine-generated test patterns designed to check frequency signals on YouTube. This article discusses geometric abstraction vis-à-vis new vision, conceptual art and algorithmic art. It argues that the Webdriver Torso is an artistic marvel indicative of a form we call mathematical abstraction, which is art performed by computers and, quite possibly, for computers.

  2. The Emotions of Abstract Words: A Distributional Semantic Analysis.

    Science.gov (United States)

    Lenci, Alessandro; Lebani, Gianluca E; Passaro, Lucia C

    2018-04-06

    Recent psycholinguistic and neuroscientific research has emphasized the crucial role of emotions for abstract words, which would be grounded by affective experience, instead of a sensorimotor one. The hypothesis of affective embodiment has been proposed as an alternative to the idea that abstract words are linguistically coded and that linguistic processing plays a key role in their acquisition and processing. In this paper, we use distributional semantic models to explore the complex interplay between linguistic and affective information in the representation of abstract words. Distributional analyses on Italian norming data show that abstract words have more affective content and tend to co-occur with contexts with higher emotive values, according to affective statistical indices estimated in terms of distributional similarity with a restricted number of seed words strongly associated with a set of basic emotions. Therefore, the strong affective content of abstract words might just be an indirect byproduct of co-occurrence statistics. This is consistent with a version of representational pluralism in which concepts that are fully embodied either at the sensorimotor or at the affective level live side-by-side with concepts only indirectly embodied via their linguistic associations with other embodied words. Copyright © 2018 Cognitive Science Society, Inc.

  3. Validity of vascular trauma codes at major trauma centres.

    Science.gov (United States)

    Altoijry, Abdulmajeed; Al-Omran, Mohammed; Lindsay, Thomas F; Johnston, K Wayne; Melo, Magda; Mamdani, Muhammad

    2013-12-01

    The use of administrative databases in vascular injury research has been increasing, but the validity of the diagnosis codes used in this research is uncertain. We assessed the positive predictive value (PPV) of International Classification of Diseases, tenth revision (ICD-10), vascular injury codes in administrative claims data in Ontario. We conducted a retrospective validation study using the Canadian Institute for Health Information Discharge Abstract Database, an administrative database that records all hospital admissions in Canada. We evaluated 380 randomly selected hospital discharge abstracts from the 2 main trauma centres in Toronto, Ont., St. Michael's Hospital and Sunnybrook Health Sciences Centre, between Apr. 1, 2002, and Mar. 31, 2010. We then compared these records with the corresponding patients' hospital charts to assess the level of agreement for procedure coding. We calculated the PPV and sensitivity to estimate the validity of vascular injury diagnosis coding. The overall PPV for vascular injury coding was estimated to be 95% (95% confidence interval [CI] 92.3-96.8). The PPV among code groups for neck, thorax, abdomen, upper extremity and lower extremity injuries ranged from 90.8 (95% CI 82.2-95.5) to 97.4 (95% CI 91.0-99.3), whereas sensitivity ranged from 90% (95% CI 81.5-94.8) to 98.7% (95% CI 92.9-99.8). Administrative claims hospital discharge data based on ICD-10 diagnosis codes have a high level of validity when identifying cases of vascular injury. Observational study, level III.
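    The two validity measures used here are simple ratios over the chart-review 2x2 table: PPV = TP/(TP+FP) (of records coded as vascular injury, how many truly are) and sensitivity = TP/(TP+FN) (of true injuries in the charts, how many the codes captured). The sketch below computes them from hypothetical cell counts chosen only to match the reported overall PPV of 95% on 380 abstracts; the actual counts are not given in this abstract.

```python
def ppv(tp, fp):
    """Positive predictive value: TP / (TP + FP)."""
    return tp / (tp + fp)

def sensitivity(tp, fn):
    """Sensitivity (recall): TP / (TP + FN)."""
    return tp / (tp + fn)

# Hypothetical counts for illustration only: 361 true positives among
# 380 coded cases reproduces the reported overall PPV of 95%.
tp, fp, fn = 361, 19, 20
print(round(ppv(tp, fp), 3), round(sensitivity(tp, fn), 3))  # 0.95 0.948
```

    The reported confidence intervals would additionally require an interval method (e.g., Wilson score) over these proportions.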

  4. Waste management research abstracts no. 22. Information on radioactive waste programmes in progress

    International Nuclear Information System (INIS)

    1995-07-01

    The research abstracts contained in this issue have been collected during recent months and cover the period January 1992 - February 1994 (through July 1994 for abstracts from the United States). The abstracts reflect research currently in progress in the field of radioactive waste management: environmental impacts, site selection, decontamination and decommissioning, environmental restoration and legal aspects of radioactive waste management. Though the information contained in this publication covers a wide range of programmes in many countries, the WMRA should not be interpreted as providing a complete survey of ongoing research in IAEA Member States. For the first time, the abstracts published in this document are in English only. In addition, the abstracts received for this issue have been assigned INIS subject category codes and thesaurus terms to facilitate searches and to make full use of established sets of technical categories and terms.

  5. Particle Tracking Model and Abstraction of Transport Processes

    Energy Technology Data Exchange (ETDEWEB)

    B. Robinson

    2004-10-21

    The purpose of this report is to document the abstraction model being used in total system performance assessment (TSPA) model calculations for radionuclide transport in the unsaturated zone (UZ). The UZ transport abstraction model uses the particle-tracking method that is incorporated into the finite element heat and mass model (FEHM) computer code (Zyvoloski et al. 1997 [DIRS 100615]) to simulate radionuclide transport in the UZ. This report outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the UZ at Yucca Mountain. In addition, methods for determining and inputting transport parameters are outlined for use in the TSPA for license application (LA) analyses. Process-level transport model calculations are documented in another report for the UZ (BSC 2004 [DIRS 164500]). Three-dimensional, dual-permeability flow fields generated to characterize UZ flow (documented by BSC 2004 [DIRS 169861]; DTN: LB03023DSSCP9I.001 [DIRS 163044]) are converted to make them compatible with the FEHM code for use in this abstraction model. This report establishes the numerical method and demonstrates the use of the model that is intended to represent UZ transport in the TSPA-LA. Capability of the UZ barrier for retarding the transport is demonstrated in this report, and by the underlying process model (BSC 2004 [DIRS 164500]). The technical scope, content, and management of this report are described in the planning document ''Technical Work Plan for: Unsaturated Zone Transport Model Report Integration'' (BSC 2004 [DIRS 171282]). Deviations from the technical work plan (TWP) are noted within the text of this report, as appropriate. The latest version of this document is being prepared principally to correct parameter values found to be in error due to transcription errors, changes in source data that were not captured in the report, calculation errors, and errors in interpretation of source data.

  6. Particle Tracking Model and Abstraction of Transport Processes

    International Nuclear Information System (INIS)

    Robinson, B.

    2004-01-01

    The purpose of this report is to document the abstraction model being used in total system performance assessment (TSPA) model calculations for radionuclide transport in the unsaturated zone (UZ). The UZ transport abstraction model uses the particle-tracking method that is incorporated into the finite element heat and mass model (FEHM) computer code (Zyvoloski et al. 1997 [DIRS 100615]) to simulate radionuclide transport in the UZ. This report outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the UZ at Yucca Mountain. In addition, methods for determining and inputting transport parameters are outlined for use in the TSPA for license application (LA) analyses. Process-level transport model calculations are documented in another report for the UZ (BSC 2004 [DIRS 164500]). Three-dimensional, dual-permeability flow fields generated to characterize UZ flow (documented by BSC 2004 [DIRS 169861]; DTN: LB03023DSSCP9I.001 [DIRS 163044]) are converted to make them compatible with the FEHM code for use in this abstraction model. This report establishes the numerical method and demonstrates the use of the model that is intended to represent UZ transport in the TSPA-LA. Capability of the UZ barrier for retarding the transport is demonstrated in this report, and by the underlying process model (BSC 2004 [DIRS 164500]). The technical scope, content, and management of this report are described in the planning document ''Technical Work Plan for: Unsaturated Zone Transport Model Report Integration'' (BSC 2004 [DIRS 171282]). Deviations from the technical work plan (TWP) are noted within the text of this report, as appropriate. The latest version of this document is being prepared principally to correct parameter values found to be in error due to transcription errors, changes in source data that were not captured in the report, calculation errors, and errors in interpretation of source data

  7. Costs and Benefits of Orthographic Inconsistency in Reading: Evidence from a Cross-Linguistic Comparison.

    Directory of Open Access Journals (Sweden)

    Chiara Valeria Marinelli

    We compared reading acquisition in English and Italian children up to late primary school, analyzing RTs and errors as a function of various psycholinguistic variables and of changes due to experience. Our results show that reading becomes progressively more reliant on larger processing units with age, but that this is modulated by the consistency of the language. In English, an inconsistent orthography, reliance on larger units occurs earlier and is demonstrated by faster RTs, a stronger effect of lexical variables, and a lack of length effect (by fifth grade). However, not all English children are able to master this mode of processing, yielding larger inter-individual variability. In Italian, a consistent orthography, reliance on larger units occurs later and is less pronounced. This is demonstrated by larger length effects, which remain significant even in older children, and by larger effects of a global factor (related to speed of orthographic decoding) explaining changes in performance across ages. Our results show the importance of considering not only overall performance, but also inter-individual variability and variability between conditions when interpreting cross-linguistic differences.

  8. Orthographically sensitive treatment for dysprosody in children with childhood apraxia of speech using ReST intervention.

    Science.gov (United States)

    McCabe, Patricia; Macdonald-D'Silva, Anita G; van Rees, Lauren J; Ballard, Kirrie J; Arciuli, Joanne

    2014-04-01

    Impaired prosody is a core diagnostic feature of Childhood Apraxia of Speech (CAS) but there is limited evidence of effective prosodic intervention. This study reports the efficacy of the ReST intervention used in conjunction with bisyllabic pseudo word stimuli containing orthographic cues that are strongly associated with either strong-weak or weak-strong patterns of lexical stress. Using a single case AB design with one follow-up and replication, four children with CAS received treatment of four one-hour sessions per week for three weeks. Sessions contained 100 randomized trials of pseudo word treatment stimuli. Baseline measures were taken of treated and untreated behaviors; retention was measured at one day and four weeks post-treatment. Children's production of lexical stress improved from pre to post-treatment. Treatment effects and maintenance varied among participants. This study provides support for the treatment of prosodic deficits in CAS.

  9. Data exchange between zero dimensional code and physics platform in the CFETR integrated system code

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Guoliang [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China); Shi, Nan [Institute of Plasma Physics, Chinese Academy of Sciences, No. 350 Shushanhu Road, Hefei (China); Zhou, Yifu; Mao, Shifeng [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China); Jian, Xiang [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, School of Electrical and Electronics Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China); Chen, Jiale [Institute of Plasma Physics, Chinese Academy of Sciences, No. 350 Shushanhu Road, Hefei (China); Liu, Li; Chan, Vincent [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China); Ye, Minyou, E-mail: yemy@ustc.edu.cn [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China)

    2016-11-01

    Highlights: • The workflow of the zero dimensional code and the multi-dimension physics platform of the CFETR integrated system code is introduced. • The iteration process among the codes in the physics platform is described. • The data transfer between the zero dimensional code and the physics platform, including data iteration and validation, and justification for performance parameters. - Abstract: The China Fusion Engineering Test Reactor (CFETR) integrated system code contains three parts: a zero dimensional code, a physics platform and an engineering platform. We use the zero dimensional code to identify a set of preliminary physics and engineering parameters for CFETR, which is used as input to initiate multi-dimension studies using the physics and engineering platforms for design, verification and validation. Effective data exchange between the zero dimensional code and the physics platform is critical for the optimization of the CFETR design. For example, in evaluating the impact of impurity radiation on core performance, an open field line code is used to calculate the impurity transport from the first-wall boundary to the pedestal. The impurity particles at the pedestal are used as boundary conditions in a transport code for calculating impurity transport in the core plasma and the impact of core radiation on core performance. Comparison of the results from the multi-dimensional study with those from the zero dimensional code is used to further refine the controlled radiation model. The data transfer between the zero dimensional code and the physics platform, including data iteration and validation, and justification for performance parameters, is presented in this paper.

  10. Code generation of RHIC accelerator device objects

    International Nuclear Information System (INIS)

    Olsen, R.H.; Hoff, L.; Clifford, T.

    1995-01-01

    A RHIC Accelerator Device Object is an abstraction which provides a software view of a collection of collider control points known as parameters. A grammar has been defined which allows these parameters, along with code describing methods for acquiring and modifying them, to be specified efficiently in compact definition files. These definition files are processed to produce C++ source code. This source code is compiled to produce an object file which can be loaded into a front end computer. Each loaded object serves as an Accelerator Device Object class definition. The collider will be controlled by applications which set and get the parameters in instances of these classes using a suite of interface routines. Significant features of the grammar are described with details about the generated C++ code
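The definition-file processing described above can be miniaturized as a generator that emits C++-style accessor source from a compact parameter list. The device name, parameters, and emitted API below are invented for illustration; they are not the actual RHIC grammar:

```python
def generate_class(name, params):
    """Emit a C++-style class with get/set accessors for each (name, type) parameter."""
    lines = [f"class {name} {{", "public:"]
    for pname, ptype in params:
        lines.append(f"    {ptype} get_{pname}() const {{ return {pname}_; }}")
        lines.append(f"    void set_{pname}({ptype} v) {{ {pname}_ = v; }}")
    lines.append("private:")
    for pname, ptype in params:
        lines.append(f"    {ptype} {pname}_;")
    lines.append("};")
    return "\n".join(lines)

# Hypothetical device definition: two control-point parameters.
src = generate_class("MagnetDevice", [("current", "double"), ("status", "int")])
```

The emitted string plays the role of the generated C++ source that would then be compiled and loaded into a front end computer.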

  11. The Interplay Between the Unfair Commercial Practices Directive and Codes of Conduct

    NARCIS (Netherlands)

    Charlotte Pavillon (C.M.D.S.)

    2013-01-01

    Abstract: At the heart of this paper lies the reciprocal influence between codes of conduct and the Unfair Commercial Practices Directive (UCPD). It assesses to what extent self-regulatory practice both affects and is affected by the directive. The codes' contribution to

  12. Effective coding with VHDL principles and best practice

    CERN Document Server

    Jasinski, Ricardo

    2016-01-01

    A guide to applying software design principles and coding practices to VHDL to improve the readability, maintainability, and quality of VHDL code. This book addresses an often-neglected aspect of the creation of VHDL designs. A VHDL description is also source code, and VHDL designers can use the best practices of software development to write high-quality code and to organize it in a design. This book presents this unique set of skills, teaching VHDL designers of all experience levels how to apply the best design principles and coding practices from the software world to the world of hardware. The concepts introduced here will help readers write code that is easier to understand and more likely to be correct, with improved readability, maintainability, and overall quality. After a brief review of VHDL, the book presents fundamental design principles for writing code, discussing such topics as design, quality, architecture, modularity, abstraction, and hierarchy. Building on these concepts, the book then int...

  13. Portable LQCD Monte Carlo code using OpenACC

    Science.gov (United States)

    Bonati, Claudio; Calore, Enrico; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Fabio Schifano, Sebastiano; Silvi, Giorgio; Tripiccione, Raffaele

    2018-03-01

    Varying from multi-core CPU processors to many-core GPUs, the present scenario of HPC architectures is extremely heterogeneous. In this context, code portability is increasingly important for easy maintainability of applications; this is relevant in scientific computing where code changes are numerous and frequent. In this talk we present the design and optimization of a state-of-the-art production level LQCD Monte Carlo application, using the OpenACC directives model. OpenACC aims to abstract parallel programming to a descriptive level, where programmers do not need to specify the mapping of the code on the target machine. We describe the OpenACC implementation and show that the same code is able to target different architectures, including state-of-the-art CPUs and GPUs.

  14. Human Rights in Natural Science and Technology Professions’ Codes of Ethics?

    OpenAIRE

    Haugen, Hans Morten

    2013-01-01

    Abstract: No global professional codes for the natural science and technology professions exist. In light of how the application of new technology can affect individuals and communities, this discrepancy warrants greater scrutiny. This article analyzes the most relevant processes and seeks to explain why these processes have not resulted in global codes. Moreover, based on a human rights approach, the article gives recommendations on the future process and content of codes for ...

  15. Optimized reversible binary-coded decimal adders

    DEFF Research Database (Denmark)

    Thomsen, Michael Kirkedal; Glück, Robert

    2008-01-01

    Abstract: Babu and Chowdhury [H.M.H. Babu, A.R. Chowdhury, Design of a compact reversible binary coded decimal adder circuit, Journal of Systems Architecture 52 (5) (2006) 272-282] recently proposed, in this journal, a reversible adder for binary-coded decimals. This paper corrects and optimizes their design. The optimized 1-decimal BCD full-adder, a 13 × 13 reversible logic circuit, is faster, and has lower circuit cost and fewer garbage bits. It can be used to build a fast reversible m-decimal BCD full-adder that has a delay of only m + 17 low-power reversible CMOS gates. For a 32-decimal (128-bit ... Keywords: Reversible logic circuit; Full-adder; Half-adder; Parallel adder; Binary-coded decimal; Application of reversible logic synthesis
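The reversible-logic building blocks behind such adders can be sketched with a toy reversible half-adder built from a Toffoli and a CNOT gate (this illustrates the general technique, not the paper's optimized 13 × 13 BCD circuit). Each gate is its own inverse, so the whole circuit is a bijection on the state space and loses no information:

```python
from itertools import product

def toffoli(a, b, c):
    """Toffoli gate: c flips iff a AND b; a, b pass through unchanged."""
    return a, b, c ^ (a & b)

def cnot(a, b):
    """CNOT gate: b flips iff a; a passes through unchanged."""
    return a, b ^ a

def half_adder(a, b, c=0):
    """Reversible half-adder; c is an ancilla bit, initially 0."""
    a, b, c = toffoli(a, b, c)  # c = carry = a AND b
    a, b = cnot(a, b)           # b = sum = a XOR b
    return a, b, c

# Reversibility check: distinct inputs must map to distinct outputs.
outputs = {half_adder(a, b, c) for a, b, c in product([0, 1], repeat=3)}
```

All 8 input states map to 8 distinct output states, confirming the circuit is reversible; applying the gates in the opposite order would undo it exactly.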

  16. Comparação dos erros ortográficos de alunos com desempenho inferior em escrita e alunos com desempenho médio nesta habilidade A comparison study of the orthographics mistakes of students with inferior and students with average writing performance

    Directory of Open Access Journals (Sweden)

    Patrícia Aparecida Zuanetti

    2008-01-01

    PURPOSE: to compare whether children with poor writing performance make more orthographic mistakes than children of the same school grade with satisfactory performance in this task, and which types of orthographic mistakes are most frequent. METHODS: 24 second-grade children from a public elementary school took part in this study and were assessed individually. The test used was the writing subtest of the School Performance Test (Teste de Desempenho Escolar), composed of 34 words that are dictated to the students. RESULTS: students with poor writing performance made significantly more orthographic mistakes than the group with satisfactory performance. The error types that differed significantly between the two groups were hypercorrection, difficulty with nasalization markers, irregular phoneme-grapheme correspondence, syllable omission, and letter substitution. There was also a strongly negative correlation between orthographic mistakes and writing performance. CONCLUSIONS: the better the writing performance, the fewer orthographic mistakes appear in the student's written output. The most frequent errors in the low-performance group, which distinguish it from the other group, involve irregular phoneme-grapheme correspondence, syllable omission, difficulty with nasalization markers, hypercorrection, and letter substitution. As the child's learning capacity advances, orthographic performance tends to improve.

  17. Computer codes for problems of isotope and radiation research

    International Nuclear Information System (INIS)

    Remer, M.

    1986-12-01

    A survey is given of computer codes for problems in isotope and radiation research. Altogether 44 codes are described as titles with abstracts. 17 of them are in the INIS scope and are processed individually. The subjects are indicated in the chapter headings: 1) analysis of tracer experiments, 2) spectrum calculations, 3) calculations of ion and electron trajectories, 4) evaluation of gamma irradiation plants, and 5) general software

  18. The 419 codes as business unusual: the advance fee fraud online ...

    African Journals Online (AJOL)

    The 419 codes as business unusual: the advance fee fraud online discourse. A Adogame. Abstract. No Abstract. International Journal of Humanistic Studies Vol. 5 2006: pp. 54-72.

  19. Learning by Doing: Teaching Decision Making through Building a Code of Ethics.

    Science.gov (United States)

    Hawthorne, Mark D.

    2001-01-01

    Notes that applying abstract ethical principles to the practical business of building a code of applied ethics for a technical communication department teaches students that they share certain unarticulated or unconscious values that they can translate into ethical principles. Suggests that combining abstract theory with practical policy writing…

  20. TrueGrid: Code the table, tabulate the data

    NARCIS (Netherlands)

    F. Hermans (Felienne); T. van der Storm (Tijs)

    2016-01-01

    Spreadsheet systems are live programming environments. Both the data and the code are right in front of you, and if you edit either of them, the effects are immediately visible. Unfortunately, spreadsheets lack mechanisms for abstraction, such as classes, function definitions etc.

  1. Writing to dictation and handwriting performance among Chinese children with dyslexia: relationships with orthographic knowledge and perceptual-motor skills.

    Science.gov (United States)

    Cheng-Lai, Alice; Li-Tsang, Cecilia W P; Chan, Alan H L; Lo, Amy G W

    2013-10-01

    The purpose of this study was to investigate the relationships between writing to dictation, handwriting, orthographic, and perceptual-motor skills among Chinese children with dyslexia. A cross-sectional design was used. A total of 45 third graders with dyslexia were assessed. Results of stepwise multiple regression models showed that Chinese character naming was the only predictor associated with word dictation (β=.32); handwriting speed was related to deficits in rapid automatic naming (β=-.36) and saccadic efficiency (β=-.29); and visual-motor integration predicted both the number of characters exceeding the grid (β=-.41) and the variability of character size (β=-.38). The findings provide support for a multi-stage working memory model of writing, explaining a possible underlying mechanism of writing-to-dictation and handwriting difficulties. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Análise de erros ortográficos em diferentes problemas de aprendizagem Analyzing typical orthographic mistakes related to different learning problems

    Directory of Open Access Journals (Sweden)

    Jaime Luiz Zorzi

    2009-09-01

    PURPOSE: to describe orthographic findings in learning problems, to verify whether the errors produced are those found in normal learning, and to analyze whether problems of an orthographic or a phonological nature predominate. METHODS: we examined the writing of 64 subjects assessed by the Learning Disorders Laboratory of the Department of Neurology at UNICAMP, each diagnosed with a learning problem: Attention Deficit/Hyperactivity Disorder (28); Learning Difficulties (13); Learning Disorder (7); Dyslexia (3); Associated Disorders (5); and Inconclusive Diagnosis (9). Ages ranged from 8;2 to 13;4 years (mean 10;6). Literate subjects without intellectual impairment were included. Errors were classified into 11 categories and quantified for statistical analysis. RESULTS: the errors correspond to those observed in children without learning complaints. Multiple Representations, Letter Omission, and Orality errors are, respectively, the three most frequent types in the cases of Attention Deficit/Hyperactivity Disorder, School Difficulties, Associated Disorders, and Unknown Diagnosis. In Learning Disorder the sequence is Multiple Representations, Omission, Other Alterations, and Voiced-Voiceless substitutions. In Dyslexia the sequence is Multiple Representations, Orality, Omission, and Other Alterations. There is a general tendency for orthographic alterations to predominate, although without a significant difference relative to errors of a phonological nature. CONCLUSION: errors of an orthographic nature are the most frequent, compared with those of a phonological nature. By contrast, visuo-spatial errors have a low overall occurrence, showing that the difficulty in all groups is fundamentally linguistic rather than perceptual in origin.

  3. Waste management research abstracts. Information on radioactive waste management research in progress or planned. Vol. 30

    International Nuclear Information System (INIS)

    2005-11-01

    This issue contains 90 abstracts that describe research in progress in the field of radioactive waste management. The abstracts present ongoing work in various countries and international organizations. Although the abstracts are indexed by country, some programmes are actually the result of co-operation among several countries. Indeed, a primary reason for providing this compilation of programmes, institutions and scientists engaged in research into radioactive waste management is to increase international co-operation and facilitate communications. Data provided by researchers for publication in WMRA 30 were entered into a research in progress database named IRAIS (International Research Abstracts Information System). The IRAIS database is available via the Internet at the following URL: http://www.iaea.org/programmes/irais/ This database will continue to be updated as new abstracts are submitted by researchers world-wide. The abstracts are listed by country (full name) in alphabetical order. All abstracts are in English. The volume includes six indexes: principal investigator, title, performing organization, descriptors (key words), topic codes and country

  4. Allele coding in genomic evaluation

    Directory of Open Access Journals (Sweden)

    Christensen Ole F

    2011-06-01

    Abstract. Background: Genomic data are used in animal breeding to assist genetic evaluation. Several models to estimate genomic breeding values have been studied. In general, two approaches have been used. One approach estimates the marker effects first, and genomic breeding values are then obtained by summing marker effects. In the second approach, genomic breeding values are estimated directly using an equivalent model with a genomic relationship matrix. Allele coding is the method chosen to assign values to the regression coefficients in the statistical model. A common allele coding is zero for the homozygous genotype of the first allele, one for the heterozygote, and two for the homozygous genotype of the other allele. Another common allele coding changes these regression coefficients by subtracting a value from each marker such that the mean of the regression coefficients is zero within each marker. We call this centered allele coding. This study considered the effects of different allele coding methods on inference. Both marker-based and equivalent models were considered, and restricted maximum likelihood and Bayesian methods were used in inference. Results: Theoretical derivations showed that parameter estimates and estimated marker effects in marker-based models are the same irrespective of the allele coding, provided that the model has a fixed general mean. For the equivalent models, the same results hold, even though different allele coding methods lead to different genomic relationship matrices. Calculated genomic breeding values are independent of allele coding when the estimate of the general mean is included in the values. Reliabilities of estimated genomic breeding values calculated using elements of the inverse of the coefficient matrix depend on the allele coding because different allele coding methods imply different models. Finally, allele coding affects the mixing of Markov chain Monte Carlo algorithms, with the centered coding being
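The two codings discussed above can be sketched on a toy genotype matrix (values invented): the standard 0/1/2 allele-count coding, and the centered coding in which each marker's mean is subtracted so that the coded values average to zero within every marker:

```python
# Rows = animals, columns = markers, values = counts of the second allele
# (0 = homozygous first allele, 1 = heterozygote, 2 = homozygous second allele).
genotypes = [
    [0, 1, 2],
    [1, 1, 0],
    [2, 1, 1],
]

def center(matrix):
    """Centered allele coding: subtract each marker's (column's) mean."""
    n = len(matrix)
    means = [sum(row[j] for row in matrix) / n for j in range(len(matrix[0]))]
    return [[row[j] - means[j] for j in range(len(row))] for row in matrix]

centered = center(genotypes)
col_means = [sum(row[j] for row in centered) / len(centered) for j in range(3)]
```

After centering, every column mean is zero; the study's point is that marker-effect estimates are invariant to this shift (given a fixed general mean), even though the induced genomic relationship matrices differ.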

  5. Orthographe & grammaire à l’université. Quels besoins ? Quelles démarches pédagogiques ?

    Directory of Open Access Journals (Sweden)

    Françoise Boch

    2012-07-01

    This study is based on a statistical analysis of 82 texts produced by students entering the Language Sciences programme at university in September 2010. This typology of errors (inspired by MANESSE; COGIS, 2007) identifies the main needs of this population in the field of written language: verbal morphology and the use of verb tenses, agreement, punctuation, and lexical spelling (double consonants, accents, adverb formation). The study laid the groundwork for a dedicated course for first- and second-year Licence (undergraduate) students wishing to improve their language performance. The pedagogical approach recommended for this course takes central account of the specific characteristics of the young adult audience, marked in particular by a lack of self-confidence linked to long-standing difficulties with the language. In order to shift the often inhibiting beliefs students hold about their own capacity to improve in writing, the approach adopted seeks to place them in a reflective posture (LAURENT, 2004, 2009), breaking with a passive attitude of receiving and (more or less successfully) applying a norm that is too rarely questioned (cf. MILLET et al., 1990; groupe RO, 2011). The stakes of such a course seem to us to lie essentially in this change of posture, since the content covered (at primary and lower-secondary school level) presents no intrinsic difficulty. From this perspective, we adopt a resolutely inductive approach, whose main characteristic is to place learners in the position of researchers (here, linguists; cf. BARTH, 2001), a sine qua non for them to take real, active charge of their learning. Our contribution first presents the results of the survey and then develops the

  6. The neural representation of abstract words: the role of emotion.

    Science.gov (United States)

    Vigliocco, Gabriella; Kousta, Stavroula-Thaleia; Della Rosa, Pasquale Anthony; Vinson, David P; Tettamanti, Marco; Devlin, Joseph T; Cappa, Stefano F

    2014-07-01

    It is generally assumed that abstract concepts are linguistically coded, in line with imaging evidence of greater engagement of the left perisylvian language network for abstract than concrete words (Binder JR, Desai RH, Graves WW, Conant LL. 2009. Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex. 19:2767-2796; Wang J, Conder JA, Blitzer DN, Shinkareva SV. 2010. Neural representation of abstract and concrete concepts: A meta-analysis of neuroimaging studies. Hum Brain Map. 31:1459-1468). Recent behavioral work, which used tighter matching of items than previous studies, however, suggests that abstract concepts also entail affective processing to a greater extent than concrete concepts (Kousta S-T, Vigliocco G, Vinson DP, Andrews M, Del Campo E. The representation of abstract words: Why emotion matters. J Exp Psychol Gen. 140:14-34). Here we report a functional magnetic resonance imaging experiment that shows greater engagement of the rostral anterior cingulate cortex, an area associated with emotion processing (e.g., Etkin A, Egner T, Peraza DM, Kandel ER, Hirsch J. 2006. Resolving emotional conflict: A role for the rostral anterior cingulate cortex in modulating activity in the amygdala. Neuron. 52:871), in abstract processing. For abstract words, activation in this area was modulated by the hedonic valence (degree of positive or negative affective association) of our items. A correlation analysis of more than 1,400 English words further showed that abstract words, in general, receive higher ratings for affective associations (both valence and arousal) than concrete words, supporting the view that engagement of emotional processing is generally required for processing abstract words. 
We argue that these results support embodiment views of semantic representation, according to which, whereas concrete concepts are grounded in our sensory-motor experience, affective experience is crucial in the

  7. The OpenMC Monte Carlo particle transport code

    International Nuclear Information System (INIS)

    Romano, Paul K.; Forget, Benoit

    2013-01-01

    Highlights: ► An open source Monte Carlo particle transport code, OpenMC, has been developed. ► Solid geometry and continuous-energy physics allow high-fidelity simulations. ► Development has focused on high performance and modern I/O techniques. ► OpenMC is capable of scaling up to hundreds of thousands of processors. ► Results on a variety of benchmark problems agree with MCNP5. -- Abstract: A new Monte Carlo code called OpenMC is currently under development at the Massachusetts Institute of Technology as a tool for simulation on high-performance computing platforms. Given that many legacy codes do not scale well on existing and future parallel computer architectures, OpenMC has been developed from scratch with a focus on high performance scalable algorithms as well as modern software design practices. The present work describes the methods used in the OpenMC code and demonstrates the performance and accuracy of the code on a variety of problems.
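The core sampling idea behind a Monte Carlo particle transport code can be illustrated with a toy estimator (this is not OpenMC code; the cross-section, slab thickness, and particle count are invented): sample exponentially distributed flight distances and count the particles that cross a purely absorbing slab, then compare against the analytic attenuation exp(-sigma * thickness):

```python
import math
import random

def transmission(sigma, thickness, n_particles, seed=1):
    """Fraction of particles crossing a purely absorbing 1D slab."""
    rng = random.Random(seed)
    crossed = 0
    for _ in range(n_particles):
        # Distance to first collision is exponentially distributed
        # with mean free path 1/sigma (inverse-CDF sampling).
        distance = -math.log(1.0 - rng.random()) / sigma
        if distance > thickness:
            crossed += 1
    return crossed / n_particles

estimate = transmission(sigma=1.0, thickness=2.0, n_particles=200_000)
analytic = math.exp(-2.0)
```

With 200,000 histories the statistical error is well below 1%, so the estimate lands close to the analytic value; production codes like OpenMC add geometry, continuous-energy physics, and parallel scaling on top of this same sampling loop.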

  8. A learning perspective on individual differences in skilled reading: Exploring and exploiting orthographic and semantic discrimination cues.

    Science.gov (United States)

    Milin, Petar; Divjak, Dagmar; Baayen, R Harald

    2017-11-01

    The goal of the present study is to understand the role orthographic and semantic information play in the behavior of skilled readers. Reading latencies from a self-paced sentence reading experiment in which Russian near-synonymous verbs were manipulated appear well-predicted by a combination of bottom-up sublexical letter triplets (trigraphs) and top-down semantic generalizations, modeled using the Naive Discrimination Learner. The results reveal a complex interplay of bottom-up and top-down support from orthography and semantics to the target verbs, whereby activations from orthography only are modulated by individual differences. Using performance on a serial reaction time (SRT) task for a novel operationalization of the mental speed hypothesis, we explain the observed individual differences in reading behavior in terms of the exploration/exploitation hypothesis from reinforcement learning, where initially slower and more variable behavior leads to better performance overall. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Generic programming for deterministic neutron transport codes

    International Nuclear Information System (INIS)

    Plagne, L.; Poncot, A.

    2005-01-01

    This paper discusses the implementation of neutron transport codes via generic programming techniques. Two different Boltzmann equation approximations have been implemented, namely the Sn and SPn methods. This implementation experiment shows that generic programming allows us to improve maintainability and readability of source codes with no performance penalties compared to classical approaches. In the present implementation, matrices and vectors as well as linear algebra algorithms are treated separately from the rest of the source code and gathered in a tool library called 'Generic Linear Algebra Solver System' (GLASS). Such a code architecture, based on a linear algebra library, allows us to separate the three different scientific fields involved in transport code design: numerical analysis, reactor physics and computer science. Our library handles matrices with optional storage policies and thus applies both to the Sn code, where the matrix elements are computed on the fly, and to the SPn code, where stored matrices are used. Thus, using GLASS allows us to share a large fraction of source code between the Sn and SPn implementations. Moreover, the GLASS high level of abstraction allows the writing of numerical algorithms in a form which is very close to their textbook descriptions. Hence the GLASS algorithm collection, disconnected from computer science considerations (e.g. storage policy), is very easy to read, to maintain and to extend. (authors)
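    GLASS itself is built on C++ generic programming; purely as an illustrative sketch (the names and the function-pointer mechanism below are invented for this example, not GLASS's API), the idea of writing one algorithm against interchangeable storage policies can be caricatured in C with an element-accessor callback:

    ```c
    /* One matrix-vector product written against an element accessor,
     * so the same algorithm serves a stored matrix (SPn-like) and a
     * matrix whose entries are computed on the fly (Sn-like). */
    typedef double (*elem_fn)(int i, int j, void *ctx);

    void matvec(int n, elem_fn a, void *ctx, const double *x, double *y)
    {
        for (int i = 0; i < n; i++) {
            y[i] = 0.0;
            for (int j = 0; j < n; j++)
                y[i] += a(i, j, ctx) * x[j];
        }
    }

    /* "stored" policy: entries live in a row-major array */
    double stored_elem(int i, int j, void *ctx)
    {
        const double *m = ctx;
        return m[i * 3 + j];   /* this sketch fixes n = 3 */
    }

    /* "on the fly" policy: entries computed when requested (identity) */
    double identity_elem(int i, int j, void *ctx)
    {
        (void)ctx;
        return i == j ? 1.0 : 0.0;
    }
    ```

    In C++ the same separation is achieved at compile time with templates, which is what lets GLASS avoid the run-time cost of the indirect call used here.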

  10. An approach to improving the structure of error-handling code in the linux kernel

    DEFF Research Database (Denmark)

    Saha, Suman; Lawall, Julia; Muller, Gilles

    2011-01-01

    The C language does not provide any abstractions for exception handling or other forms of error handling, leaving programmers to devise their own conventions for detecting and handling errors. The Linux coding style guidelines suggest placing error handling code at the end of each function, where...... an automatic program transformation that transforms error-handling code into this style. We have applied our transformation to the Linux 2.6.34 kernel source code, on which it reorganizes the error handling code of over 1800 functions, in about 25 minutes....
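    The goto-to-a-single-exit style that the Linux coding guidelines recommend looks roughly like this (a schematic example with hypothetical names, not code from the kernel or from the paper's transformation tool):

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical device structure, used only for illustration. */
    struct device { int *buf; FILE *log; };

    /* Error handling in the style the Linux coding guidelines suggest:
     * a single exit path at the end of the function, reached via goto,
     * releasing resources in reverse order of acquisition. */
    int device_init(struct device *dev, const char *logpath)
    {
        int err = -1;

        dev->buf = malloc(64 * sizeof(int));
        if (!dev->buf)
            goto out;          /* nothing acquired yet */

        dev->log = fopen(logpath, "w");
        if (!dev->log)
            goto out_free;     /* undo the allocation */

        return 0;              /* success: caller owns the resources */

    out_free:
        free(dev->buf);
        dev->buf = NULL;
    out:
        return err;
    }
    ```

    Keeping all cleanup in one labeled block at the end is what makes the paper's transformation possible: scattered in-line cleanup code can be mechanically rewritten into these labeled exit sequences.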

  11. Grounding abstractness: Abstract concepts and the activation of the mouth

    Directory of Open Access Journals (Sweden)

    Anna M Borghi

    2016-10-01

    Full Text Available One key issue for theories of cognition is how abstract concepts, such as freedom, are represented. According to the WAT (Words As social Tools) proposal, abstract concepts activate both sensorimotor and linguistic/social information, and their acquisition modality involves linguistic experience more than the acquisition of concrete concepts does. We report an experiment in which participants were presented with abstract and concrete definitions followed by concrete and abstract target-words. When the definition and the word matched, participants were required to press a key, either with the hand or with the mouth. Response times and accuracy were recorded. As predicted, we found that abstract definitions and abstract words yielded slower responses and more errors compared to concrete definitions and concrete words. More crucially, there was an interaction between the target-words and the effector used to respond (hand, mouth). While responses with the mouth were overall slower, the advantage of the hand over the mouth responses was more marked with concrete than with abstract concepts. The results are in keeping with grounded and embodied theories of cognition and support the WAT proposal, according to which abstract concepts evoke linguistic-social information, hence activate the mouth. The mechanisms underlying the mouth activation with abstract concepts (re-enactment of the acquisition experience, or re-explanation of the word meaning, possibly through inner talk) are discussed. To our knowledge this is the first behavioral study demonstrating with real words that the advantage of the hand over the mouth is more marked with concrete than with abstract concepts, likely because of the activation of linguistic information with abstract concepts.

  12. Multirate Filter Bank Representations of RS and BCH Codes

    Directory of Open Access Journals (Sweden)

    Van Meerbergen Geert

    2008-01-01

    Full Text Available This paper addresses the use of multirate filter banks in the context of error-correction coding. An in-depth study of these filter banks is presented, motivated by earlier results and applications based on the filter bank representation of Reed-Solomon (RS) codes, such as Soft-In Soft-Out RS-decoding or RS-OFDM. The specific structure of the filter banks (critical subsampling) is an important aspect in these applications. The goal of the paper is twofold. First, the filter bank representation of RS codes is explained based on polynomial descriptions. This approach allows us to gain new insight into the correspondence between RS codes and filter banks. More specifically, it allows us to show that the inherent periodically time-varying character of a critically subsampled filter bank matches remarkably well with the cyclic properties of RS codes. Second, an extension of these techniques toward the more general class of BCH codes is presented. It is demonstrated that a BCH code can be decomposed into a sum of critically subsampled filter banks.
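    The polynomial description underlying RS and BCH codes can be illustrated on the smallest nontrivial cyclic code. The following sketch (a toy example of my own, unrelated to the filter bank construction itself) systematically encodes the binary (7,4) cyclic code with generator g(x) = x^3 + x + 1 and checks the cyclic-shift property the paper exploits:

    ```c
    #include <stdint.h>

    /* Bit i of a word holds the coefficient of x^i.
     * Generator g(x) = x^3 + x + 1 of the (7,4) binary cyclic code. */
    #define G 0x0B   /* binary 1011 */

    /* remainder of a(x) mod g(x) over GF(2), deg a(x) <= 6 */
    static uint8_t gf2_mod(uint8_t a)
    {
        for (int i = 6; i >= 3; i--)
            if (a & (1u << i))
                a ^= (uint8_t)(G << (i - 3));
        return a;                  /* degree < 3 */
    }

    /* systematic codeword c(x) = m(x)*x^3 + (m(x)*x^3 mod g(x)) */
    uint8_t encode74(uint8_t m)    /* m holds 4 data bits */
    {
        uint8_t shifted = (uint8_t)((m & 0x0F) << 3);
        return shifted | gf2_mod(shifted);
    }

    /* a word is a codeword iff g(x) divides it */
    int is_codeword(uint8_t c) { return gf2_mod(c & 0x7F) == 0; }

    /* cyclic left shift within 7 bits */
    uint8_t rot7(uint8_t c) { return (uint8_t)(((c << 1) | (c >> 6)) & 0x7F); }
    ```

    Because g(x) divides x^7 + 1, every cyclic shift of a codeword is again a codeword; it is this periodic, shift-related structure that the paper maps onto the periodically time-varying behavior of a critically subsampled filter bank.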

  13. Compendium of computer codes for the safety analysis of LMFBR's

    International Nuclear Information System (INIS)

    1975-06-01

    A high level of mathematical sophistication is required in the safety analysis of LMFBR's to adequately meet the demands for realism and confidence in all areas of accident consequence evaluation. The numerical solution procedures associated with these analyses are generally so complex and time-consuming as to necessitate their programming into computer codes. These computer codes have become extremely powerful tools for safety analysis, combining unique advantages in accuracy, speed and cost. The number, diversity and complexity of LMFBR safety codes in the U.S. have grown rapidly in recent years. It is estimated that over 100 such codes exist in various stages of development throughout the country. It is inevitable that such a large assortment of codes will require rigorous cataloguing and abstracting to aid individuals in identifying what is available. It is the purpose of this compendium to provide such a service through the compilation of code summaries which describe and clarify the status of domestic LMFBR safety codes. (U.S.)

  14. Study of nuclear computer code maintenance and management system

    International Nuclear Information System (INIS)

    Ryu, Chang Mo; Kim, Yeon Seung; Eom, Heung Seop; Lee, Jong Bok; Kim, Ho Joon; Choi, Young Gil; Kim, Ko Ryeo

    1989-01-01

    Software maintenance has been one of the most important problems since the late 1970's. We wish to develop a nuclear computer code system to maintain and manage KAERI's nuclear software. As part of this system, we have developed three code management programs for use on CYBER and PC systems. They are used in the systematic management of computer codes at KAERI. The first program is implemented on the CYBER system to rapidly provide information on nuclear codes to users. The second and third programs were implemented on the PC system for the code manager and for the management of data in the Korean language, respectively. In the requirement analysis, we defined the code, magnetic tape, manual and abstract information data. In the conceptual design, we designed the retrieval, update, and output functions. In the implementation design, we described the technical considerations for the database programs, utilities, and directions for the use of the databases. As a result of this research, we compiled the status of the nuclear computer codes belonging to KAERI as of September 1988. Thus, by using these three database programs, we can provide nuclear computer code information to users more rapidly. (Author)

  15. Quando alunos surdos escolhem palavras escritas para nomear figuras: paralexias ortográficas, semânticas e quirêmicas Picture naming by the deaf: cheremic, semantic and orthographic processes involved

    Directory of Open Access Journals (Sweden)

    Fernando César Capovilla

    2006-08-01

    Full Text Available The Picture Naming by Choice Test (TNF2.1-Escolha) assesses the ability to choose written words to name pictures, and analyzes the cheremic, orthographic and semantic processes involved. It was administered to 313 deaf students aged 6-34 years, from the 1st grade of elementary school to the 1st grade of high school in four bilingual schools in the state of São Paulo (77% with congenital hearing loss, and 49% with congenital-profound loss), together with TNF1.1-Escolha and tests of receptive sign vocabulary (TVRSL), word reading competence (TCLPP), sentence reading comprehension (TCLS), written picture naming (TNF-Escrita), and sign naming by choice and by writing (TNS-Escolha and TNS-Escrita). A normative naming table by school grade was generated. TNF2.1-Escolha showed the following significant positive interrelations: a very high correlation (r = 0.89) with TNF1.1-Escolha; high (r = 0.77-0.80) with written picture naming (TNF-Escrita) and sentence reading (TCLS); medium (r = 0.62-0.68) with sign naming by choice and by writing (TNS-Escolha, TNS-Escrita) and reading competence (TCLPP); and low (r = 0.36) with sign vocabulary (TVRSL). Of 1,507 paralexias, 583 were orthographic, 546 semantic and 378 cheremic. These reveal that, when choosing words to name pictures, deaf students first evoke the sign for the picture and then the word for the sign, corroborating the hypothesis that the cheremic lexicon indexes the orthographic lexicon to the pictorial one. Corroborating the validity of TNF2.1-Escolha in inducing paralexias, the greater the reading competence on TCLPP, the fewer the orthographic paralexias on TNF-Escolha, and the larger the sign vocabulary on TVRSL, the fewer the cheremic paralexias on TNF2.1-Escolha.

  16. Orthographic units in the absence of visual processing: Evidence from sublexical structure in braille.

    Science.gov (United States)

    Fischer-Baum, Simon; Englebretson, Robert

    2016-08-01

    Reading relies on the recognition of units larger than single letters and smaller than whole words. Previous research has linked sublexical structures in reading to properties of the visual system, specifically the parallel processing of letters that the visual system enables. But whether the visual system is essential for this to happen, or whether the recognition of sublexical structures may emerge by other means, is an open question. To address this question, we investigate braille, a writing system that relies exclusively on the tactile rather than the visual modality. We provide experimental evidence demonstrating that adult readers of (English) braille are sensitive to sublexical units. Contrary to prior assumptions in the braille research literature, we find strong evidence that braille readers do indeed access sublexical structure, namely the processing of multi-cell contractions as single orthographic units and the recognition of morphemes within morphologically-complex words. Therefore, we conclude that the recognition of sublexical structure is not exclusively tied to the visual system. However, our findings also suggest that there are aspects of morphological processing on which braille and print readers differ, and that these differences may, crucially, be related to reading using the tactile rather than the visual sensory modality. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. 48. Annual meeting of the Deutsche Gesellschaft fuer Neuroradiologie. Joint annual meeting of the DGNR and OeGNR. Abstracts; 48. Jahrestagung der Deutschen Gesellschaft fuer Neuroradiologie. Gemeinsame Jahrestagung der DGNR und OeGNR. Abstracts

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-09-15

    The conference proceedings of the 48. Annual meeting of the Deutsche Gesellschaft fuer Neuroradiologie contain abstracts on the following issues: neuro-oncological imaging, multimodal imaging concepts, subcranial imaging, the spinal cord, interventional neuroradiology, innovative techniques like high-field MRT and hybrid imaging methods, inflammatory and metabolic central nervous system diseases, and epilepsy.

  18. Assessment of systems codes and their coupling with CFD codes in thermal–hydraulic applications to innovative reactors

    Energy Technology Data Exchange (ETDEWEB)

    Bandini, G., E-mail: giacomino.bandini@enea.it [Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA) (Italy); Polidori, M. [Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA) (Italy); Gerschenfeld, A.; Pialla, D.; Li, S. [Commissariat à l’Energie Atomique (CEA) (France); Ma, W.M.; Kudinov, P.; Jeltsov, M.; Kööp, K. [Royal Institute of Technology (KTH) (Sweden); Huber, K.; Cheng, X.; Bruzzese, C.; Class, A.G.; Prill, D.P. [Karlsruhe Institute of Technology (KIT) (Germany); Papukchiev, A. [Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) (Germany); Geffray, C.; Macian-Juan, R. [Technische Universität München (TUM) (Germany); Maas, L. [Institut de Radioprotection et de Sûreté Nucléaire (IRSN) (France)

    2015-01-15

    Highlights: • The assessment of RELAP5, TRACE and CATHARE system codes on integral experiments is presented. • Code benchmark of CATHARE, DYN2B, and ATHLET on PHENIX natural circulation experiment. • Grid-free pool modelling based on proper orthogonal decomposition for system codes is explained. • The code coupling methodologies are explained. • The coupling of several CFD/system codes is tested against integral experiments. - Abstract: The THINS project of the 7th Framework EU Program on nuclear fission safety is devoted to the investigation of crosscutting thermal–hydraulic issues for innovative nuclear systems. A significant effort in the project has been dedicated to the qualification and validation of system codes currently employed in thermal–hydraulic transient analysis for nuclear reactors. This assessment is based either on already available experimental data, or on the data provided by test campaigns carried out in the frame of THINS project activities. Data provided by the TALL and CIRCE facilities were used in the assessment of system codes for HLM reactors, while the PHENIX ultimate natural circulation test was used as a reference for a benchmark exercise among system codes for sodium-cooled reactor applications. In addition, a promising grid-free pool model based on proper orthogonal decomposition is proposed to overcome the limits shown by the thermal–hydraulic system codes in the simulation of pool-type systems. Furthermore, multi-scale system-CFD solutions have been developed and validated for innovative nuclear system applications. For this purpose, data from the PHENIX experiments have been used, and data are provided by the tests conducted with the new configuration of the TALL-3D facility, which accommodates a 3D test section within the primary circuit. The TALL-3D measurements are currently used for the validation of the coupling between system and CFD codes.

  19. Development of 2D particle-in-cell code to simulate high current, low ...

    Indian Academy of Sciences (India)

    Abstract. A code for 2D space-charge dominated beam dynamics studies in beam transport lines is developed. The code is used for particle-in-cell (PIC) simulation of a z-uniform beam in a channel containing solenoids and drift space. It can also simulate a transport line where quadrupoles are used for focusing the beam.
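    For readers unfamiliar with beam-transport simulation, the single-particle maps such a code composes between space-charge kicks look roughly as follows (an illustrative sketch with invented names; the grid-based space-charge field solve that makes it a PIC code is omitted):

    ```c
    /* Transverse phase-space coordinates of one macroparticle. */
    struct particle { double x; double xp; };   /* position, slope */

    /* Drift of length L: the position advances, the slope is unchanged. */
    void drift(struct particle *p, double L)
    {
        p->x += p->xp * L;
    }

    /* Thin-lens focusing kick of focal length f, the simplest stand-in
     * for a solenoid or quadrupole: the slope is bent toward the axis. */
    void lens(struct particle *p, double f)
    {
        p->xp -= p->x / f;
    }
    ```

    A PIC code applies such maps to many macroparticles per step and, in between, deposits their charge on a grid, solves for the self-field, and adds the resulting space-charge kick to each particle.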

  20. The Components of Abstracts: the Logical Structure of Abstractsin the Area of Technical Sciences

    Directory of Open Access Journals (Sweden)

    Nina Jamar

    2014-04-01

    Full Text Available ABSTRACT Purpose: The main purpose of this research was to find out what kind of structure would be the most appropriate for abstracts in the area of technical sciences, and on the basis of these findings to develop guidelines for their writing. Methodology/approach: First, the components of abstracts published in journals were analyzed. Second, prototypes and recommended improved abstracts were presented. Third, the satisfaction of readers with the different forms of abstracts was examined. According to the results of these three parts of the research, guidelines for writing abstracts in the area of technical sciences were developed. Results: The results showed that it is possible to determine the optimum structure for abstracts from the area of technical sciences. This structure should follow the known IMRD format or the BMRC structure according to the coding scheme. Research limitations: The presented research included in the analysis only abstracts from several areas that represent technical studies. In order to develop guidelines for writing abstracts more broadly, the research should be extended with at least one more area from the natural sciences and two areas from the social sciences and humanities. Original/practical implications: It is important to emphasize that even if an individual journal provides guidelines for writing abstracts, authors do not always take them into account. Therefore, it is important that the abstracts actually published in journals were analysed. It is also important that, in developing the guidelines for writing abstracts, the opinion of researchers was taken into account.

  1. Grounded understanding of abstract concepts: The case of STEM learning.

    Science.gov (United States)

    Hayes, Justin C; Kraemer, David J M

    2017-01-01

    Characterizing the neural implementation of abstract conceptual representations has long been a contentious topic in cognitive science. At the heart of the debate is whether the "sensorimotor" machinery of the brain plays a central role in representing concepts, or whether the involvement of these perceptual and motor regions is merely peripheral or epiphenomenal. The domain of science, technology, engineering, and mathematics (STEM) learning provides an important proving ground for sensorimotor (or grounded) theories of cognition, as concepts in science and engineering courses are often taught through laboratory-based and other hands-on methodologies. In this review of the literature, we examine evidence suggesting that sensorimotor processes strengthen learning associated with the abstract concepts central to STEM pedagogy. After considering how contemporary theories have defined abstraction in the context of semantic knowledge, we propose our own explanation for how body-centered information, as computed in sensorimotor brain regions and visuomotor association cortex, can form a useful foundation upon which to build an understanding of abstract scientific concepts, such as mechanical force. Drawing from theories in cognitive neuroscience, we then explore models elucidating the neural mechanisms involved in grounding intangible concepts, including Hebbian learning, predictive coding, and neuronal recycling. Empirical data on STEM learning through hands-on instruction are considered in light of these neural models. We conclude the review by proposing three distinct ways in which the field of cognitive neuroscience can contribute to STEM learning by bolstering our understanding of how the brain instantiates abstract concepts in an embodied fashion.

  2. INF Code related matters. Joint IAEA/IMO literature survey on potential consequences of severe maritime accidents involving the transport of radioactive material. 2 volumes. Vol. I - Report and publication titles. Vol. II - Relevant abstracts

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-10

    This literature survey was undertaken jointly by the International Maritime Organization (IMO) and the International Atomic Energy Agency (IAEA) as a step in addressing the subject of environmental impact of accidents involving materials subject to the IMO's Code for the Safe Carriage of Irradiated Nuclear Fuel, Plutonium and High-Level Radioactive Wastes in Flasks on Board Ships, also known as the INF Code. The results of the survey are provided in two volumes: the first one containing the description of the search and search results with the list of generated publication titles, and the second volume containing the abstracts of those publications deemed relevant for the purposes of the literature survey. Literature published between 1980 and mid-1999 was reviewed by two independent consultants who generated publication titles by performing searches of appropriate databases, and selected the abstracts of relevant publications for inclusion in this survey. The IAEA operates INIS, the world's leading computerised bibliographical information system on the peaceful uses of nuclear energy. The acronym INIS stands for International Nuclear Information System. INIS Members are responsible for determining the relevant nuclear literature produced within their borders or organizational confines, and then preparing the associated input in accordance with INIS rules. INIS records are included in other major databases such as the Energy, Science and Technology database of the DIALOG service. Because it is the INIS Members, rather than the IAEA Secretariat, who are responsible for its contents, it was considered appropriate that INIS be the primary source of information for this literature review. Selected unpublished reports were also reviewed, e.g. Draft Proceedings of the Special Consultative Meeting of Entities involved in the maritime transport of materials covered by the INF Code (SCM 5), March 1996. Many of the formal papers at SCM 5 were included in the literature

  3. INF Code related matters. Joint IAEA/IMO literature survey on potential consequences of severe maritime accidents involving the transport of radioactive material. 2 volumes. Vol. I - Report and publication titles. Vol. II - Relevant abstracts

    International Nuclear Information System (INIS)

    2000-01-01

    This literature survey was undertaken jointly by the International Maritime Organization (IMO) and the International Atomic Energy Agency (IAEA) as a step in addressing the subject of environmental impact of accidents involving materials subject to the IMO's Code for the Safe Carriage of Irradiated Nuclear Fuel, Plutonium and High-Level Radioactive Wastes in Flasks on Board Ships, also known as the INF Code. The results of the survey are provided in two volumes: the first one containing the description of the search and search results with the list of generated publication titles, and the second volume containing the abstracts of those publications deemed relevant for the purposes of the literature survey. Literature published between 1980 and mid-1999 was reviewed by two independent consultants who generated publication titles by performing searches of appropriate databases, and selected the abstracts of relevant publications for inclusion in this survey. The IAEA operates INIS, the world's leading computerised bibliographical information system on the peaceful uses of nuclear energy. The acronym INIS stands for International Nuclear Information System. INIS Members are responsible for determining the relevant nuclear literature produced within their borders or organizational confines, and then preparing the associated input in accordance with INIS rules. INIS records are included in other major databases such as the Energy, Science and Technology database of the DIALOG service. Because it is the INIS Members, rather than the IAEA Secretariat, who are responsible for its contents, it was considered appropriate that INIS be the primary source of information for this literature review. Selected unpublished reports were also reviewed, e.g. Draft Proceedings of the Special Consultative Meeting of Entities involved in the maritime transport of materials covered by the INF Code (SCM 5), March 1996. Many of the formal papers at SCM 5 were included in the literature

  4. How Specific are Specific Comprehension Difficulties? An Investigation of Poor Reading Comprehension in Nine-Year-Olds

    DEFF Research Database (Denmark)

    Rønberg, Louise; Petersen, Dorthe Klint

    2015-01-01

    comprehenders, the poor comprehenders’ orthographic coding and daily reading of literary texts were significantly below those of average readers. This study indicates that a lack of reading experience, and likewise, a lack of fluent word reading, may be important factors in understanding 9-year-old poor...

  5. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  6. Effects of syntactic structure in the memory of concrete and abstract Chinese sentences.

    Science.gov (United States)

    Ho, C S; Chen, H C

    1993-09-01

    Smith (1981) found that concrete English sentences were better recognized than abstract sentences and that this concreteness effect was potent only when the concrete sentence was also affirmative; the effect reversed when the concrete sentence was negative. These results were partially replicated in Experiment 1 by using materials from a very different language (i.e., Chinese): concrete-affirmative sentences were better remembered than concrete-negative and abstract sentences, but no reliable difference was found between the latter two types. In Experiment 2, the task was modified by using a visual presentation instead of an oral one as in Experiment 1. Both concrete-affirmative and concrete-negative sentences were better memorized than abstract ones in Experiment 2. The findings in the two experiments are explained by a combination of the dual-coding model and Marschark's (1985) item-specific and relational processing. The differential effects of experience with different language systems on processing verbal materials in memory are also discussed.

  7. Is orthographic information from multiple parafoveal words processed in parallel: An eye-tracking study.

    Science.gov (United States)

    Cutter, Michael G; Drieghe, Denis; Liversedge, Simon P

    2017-08-01

    In the current study we investigated whether orthographic information available from 1 upcoming parafoveal word influences the processing of another parafoveal word. Across 2 experiments we used the boundary paradigm (Rayner, 1975) to present participants with an identity preview of the 2 words after the boundary (e.g., hot pan), a preview in which 2 letters were transposed between these words (e.g., hop tan), or a preview in which the same 2 letters were substituted (e.g., hob fan). We hypothesized that if these 2 words were processed in parallel in the parafovea then we may observe significant preview benefits for the condition in which the letters were transposed between words relative to the condition in which the letters were substituted. However, no such effect was observed, with participants fixating the words for the same amount of time in both conditions. This was the case both when the transposition was made between the final and first letter of the 2 words (e.g., hop tan as a preview of hot pan; Experiment 1) and when the transposition maintained within-word letter position (e.g., pit hop as a preview of hit pop; Experiment 2). The implications of these findings are considered in relation to serial and parallel lexical processing during reading. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. The influence of orthographic experience on the development of phonological preparation in spoken word production.

    Science.gov (United States)

    Li, Chuchu; Wang, Min

    2017-08-01

    Three sets of experiments using the picture naming tasks with the form preparation paradigm investigated the influence of orthographic experience on the development of phonological preparation unit in spoken word production in native Mandarin-speaking children. Participants included kindergarten children who have not received formal literacy instruction, Grade 1 children who are comparatively more exposed to the alphabetic pinyin system and have very limited Chinese character knowledge, Grades 2 and 4 children who have better character knowledge and more exposure to characters, and skilled adult readers who have the most advanced character knowledge and most exposure to characters. Only Grade 1 children showed the form preparation effect in the same initial consonant condition (i.e., when a list of target words shared the initial consonant). Both Grade 4 children and adults showed the preparation effect when the initial syllable (but not tone) among target words was shared. Kindergartners and Grade 2 children only showed the preparation effect when the initial syllable including tonal information was shared. These developmental changes in phonological preparation could be interpreted as a joint function of the modification of phonological representation and attentional shift. Extensive pinyin experience encourages speakers to attend to and select onset phoneme in phonological preparation, whereas extensive character experience encourages speakers to prepare spoken words in syllables.

  9. Computer-modeling codes to improve exploration nuclear-logging methods. National Uranium Resource Evaluation

    International Nuclear Information System (INIS)

    Wilson, R.D.; Price, R.K.; Kosanke, K.L.

    1983-03-01

    As part of the Department of Energy's National Uranium Resource Evaluation (NURE) project's Technology Development effort, a number of computer codes and accompanying data bases were assembled for use in modeling responses of nuclear borehole logging sondes. The logging methods include fission neutron, active and passive gamma-ray, and gamma-gamma. These CDC-compatible computer codes and data bases are available on magnetic tape from the DOE Technical Library at its Grand Junction Area Office. Some of the computer codes are standard radiation-transport programs that have been available to the radiation shielding community for several years. Other codes were specifically written to model the response of borehole radiation detectors or are specialized borehole modeling versions of existing Monte Carlo transport programs. Results from several radiation modeling studies are available as two large data bases (neutron and gamma-ray). These data bases are accompanied by appropriate processing programs that permit the user to model a wide range of borehole and formation-parameter combinations for fission-neutron, neutron-activation, and gamma-gamma logs. The first part of this report consists of a brief abstract for each code or data base. The abstract gives the code name and title, short description, auxiliary requirements, typical running time (CDC 6600), and a list of references. The next section gives format specifications and/or a directory for the tapes. The final section of the report presents listings for programs used to convert data bases between machine floating-point and EBCDIC.

  10. Effects of orthographic consistency on eye movement behavior: German and English children and adults process the same words differently.

    Science.gov (United States)

    Rau, Anne K; Moll, Kristina; Snowling, Margaret J; Landerl, Karin

    2015-02-01

    The current study investigated the time course of cross-linguistic differences in word recognition. We recorded eye movements of German and English children and adults while reading closely matched sentences, each including a target word manipulated for length and frequency. Results showed differential word recognition processes for both developing and skilled readers. Children of the two orthographies did not differ in terms of total word processing time, but this equal outcome was achieved quite differently. Whereas German children relied on small-unit processing early in word recognition, English children applied small-unit decoding only upon rereading, possibly when experiencing difficulties in integrating an unfamiliar word into the sentence context. Rather unexpectedly, cross-linguistic differences were also found in adults in that English adults showed longer processing times than German adults for nonwords. Thus, although orthographic consistency does play a major role in reading development, cross-linguistic differences are detectable even in skilled adult readers. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Implementation of the chemical PbLi/water reaction in the SIMMER code

    Energy Technology Data Exchange (ETDEWEB)

    Eboli, Marica, E-mail: marica.eboli@for.unipi.it [DICI—University of Pisa, Largo Lucio Lazzarino 2, 56122 Pisa (Italy); Forgione, Nicola [DICI—University of Pisa, Largo Lucio Lazzarino 2, 56122 Pisa (Italy); Del Nevo, Alessandro [ENEA FSN-ING-PAN, CR Brasimone, 40032 Camugnano, BO (Italy)

    2016-11-01

    Highlights: • Updated predictive capabilities of the SIMMER-III code. • Verification of the implemented PbLi/water chemical reactions. • Identification of code capabilities in modelling phenomena relevant to safety. • Validation against BLAST Test No. 5 experimental data successfully completed. • Need for a new experimental campaign in support of code validation on LIFUS5/Mod3. - Abstract: The availability of a qualified system code for the deterministic safety analysis of the postulated in-box LOCA accident is of primary importance. Considering the renewed interest in the WCLL breeding blanket, such a code shall be multi-phase, shall manage the thermodynamic interaction among the fluids, and shall include the exothermic chemical reaction between lithium-lead and water, which generates oxides and hydrogen. The paper presents the implementation of the chemical correlations in the SIMMER-III code, the verification of the code model in simple geometries, and a first validation activity based on BLAST Test No. 5 experimental data.
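
The implemented chemistry centers on the exothermic reaction of the lithium in the lead-lithium eutectic with water, Li + H2O -> LiOH + 1/2 H2. As a back-of-envelope illustration of the hydrogen source term only (pure stoichiometry; the actual SIMMER-III correlations also model kinetics and heat release, and the numbers below are illustrative):

```python
# Stoichiometric hydrogen generation from the Li/water reaction:
#   Li + H2O -> LiOH + 1/2 H2
# Back-of-envelope sketch only; the SIMMER-III model additionally treats
# reaction kinetics and the exothermic heat release, which are ignored here.

M_LI = 6.94e-3    # kg/mol, molar mass of lithium
M_H2 = 2.016e-3   # kg/mol, molar mass of hydrogen gas

def hydrogen_mass(m_li_reacted):
    """Mass of H2 (kg) produced when m_li_reacted kg of lithium reacts."""
    mol_li = m_li_reacted / M_LI
    return 0.5 * mol_li * M_H2      # half a mole of H2 per mole of Li

m_h2 = hydrogen_mass(1.0)           # H2 produced per kg of lithium reacted
```

Per kilogram of lithium reacted this gives roughly 0.145 kg of hydrogen, which is why even small reacted inventories matter for the in-box LOCA analysis.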

  12. The N400 as a snapshot of interactive processing: evidence from regression analyses of orthographic neighbor and lexical associate effects

    Science.gov (United States)

    Laszlo, Sarah; Federmeier, Kara D.

    2010-01-01

    Linking print with meaning tends to be divided into subprocesses, such as recognition of an input's lexical entry and subsequent access of semantics. However, recent results suggest that the set of semantic features activated by an input is broader than implied by a view wherein access serially follows recognition. EEG was collected from participants who viewed items varying in number and frequency of both orthographic neighbors and lexical associates. Regression analysis of single item ERPs replicated past findings, showing that N400 amplitudes are greater for items with more neighbors, and further revealed that N400 amplitudes increase for items with more lexical associates and with higher frequency neighbors or associates. Together, the data suggest that in the N400 time window semantic features of items broadly related to inputs are active, consistent with models in which semantic access takes place in parallel with stimulus recognition. PMID:20624252

  13. CORESAFE: A Formal Approach against Code Replacement Attacks on Cyber Physical Systems

    Science.gov (United States)

    2018-04-19

    AFRL-AFOSR-JP-TR-2018-0035. CORESAFE: A Formal Approach against Code Replacement Attacks on Cyber Physical Systems. Sandeep Shukla, INDIAN INSTITUTE OF... Grant number FA2386-16-1-4099. Abstract: Industrial Control Systems (ICS) used in manufacturing, power generators and other critical infrastructure monitoring and

  14. [French norms of imagery for pictures, for concrete and abstract words].

    Science.gov (United States)

    Robin, Frédérique

    2006-09-01

    This paper deals with French norms of mental image versus picture agreement for 138 pictures and the imagery value for 138 concrete words and 69 abstract words. The pictures were selected from Snodgrass and Vanderwart's norms (1980). The concrete words correspond to the dominant naming response to the pictorial stimuli. The abstract words were taken from the verbal associative norms published by Ferrand (2001). The norms were established according to two variables: 1) mental image vs. picture agreement, and 2) imagery value of words. Three other variables were controlled: 1) picture naming agreement; 2) familiarity of the objects referred to in the pictures and the concrete words, and 3) subjective verbal frequency of words. The originality of this work is to provide French imagery norms for the three kinds of stimuli usually compared in research on dual coding. Moreover, the study focuses on figurative and verbal stimulus variations in visual imagery processes.

  15. Genome-wide identification of coding and non-coding conserved sequence tags in human and mouse genomes

    Directory of Open Access Journals (Sweden)

    Maggi Giorgio P

    2008-06-01

    Full Text Available Abstract Background The accurate detection of genes and the identification of functional regions is still an open issue in the annotation of genomic sequences. This problem affects new genomes but also those of very well studied organisms such as human and mouse where, despite great efforts, the inventory of genes and regulatory regions is far from complete. Comparative genomics is an effective approach to address this problem. Unfortunately, it is limited by the computational requirements needed to perform genome-wide comparisons and by the problem of discriminating between conserved coding and non-coding sequences. This discrimination is often based on (and thus dependent on) the availability of annotated proteins. Results In this paper we present the results of a comprehensive comparison of human and mouse genomes performed with a new high-throughput grid-based system which allows the rapid detection of conserved sequences and accurate assessment of their coding potential. By detecting clusters of coding conserved sequences, the system is also suitable for accurately identifying potential gene loci. Following this analysis we created a collection of human-mouse conserved sequence tags and carefully compared our results to reliable annotations in order to benchmark the reliability of our classifications. Strikingly, we were able to detect several potential gene loci supported by EST sequences but not corresponding to as yet annotated genes. Conclusion Here we present a new system which allows comprehensive comparison of genomes to detect conserved coding and non-coding sequences and identify potential gene loci. Our system does not require the availability of any annotated sequence and is thus suitable for the analysis of new or poorly annotated genomes.
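
The discrimination between coding and non-coding conservation starts from detecting conserved sequence tags in the first place. As a toy illustration of that detection step (exact shared k-mers merged into tags; the actual system uses grid-based genome-wide alignment, and the sequences and parameters below are invented):

```python
# Toy detection of conserved sequence tags between two sequences:
# collect exact shared k-mers, then merge overlapping hits into tags.
# Greatly simplified relative to the grid-based human/mouse comparison
# described in the abstract; sequences and k are illustrative only.

def shared_kmer_positions(a, b, k):
    """Positions in `a` whose k-mer also occurs somewhere in `b`."""
    kmers_b = {b[i:i + k] for i in range(len(b) - k + 1)}
    return [i for i in range(len(a) - k + 1) if a[i:i + k] in kmers_b]

def merge_into_tags(positions, k):
    """Merge overlapping/adjacent k-mer hits into (start, end) tags."""
    tags = []
    for p in sorted(positions):
        if tags and p <= tags[-1][1]:          # overlaps the current tag
            tags[-1] = (tags[-1][0], max(tags[-1][1], p + k))
        else:
            tags.append((p, p + k))
    return tags

human = "AAAACGTACGTTTTGGGGCCCC"   # invented example sequences
mouse = "TTTTACGTACGTAAAACCCCGG"
hits = shared_kmer_positions(human, mouse, k=4)
tags = merge_into_tags(hits, k=4)
```

Classifying each resulting tag as coding or non-coding (e.g. by reading-frame conservation) is the harder step that the system above performs without relying on annotated proteins.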

  16. Development of the point-depletion code DEPTH

    International Nuclear Information System (INIS)

    She, Ding; Wang, Kan; Yu, Ganglin

    2013-01-01

    Highlights: ► The DEPTH code has been developed for large-scale depletion systems. ► DEPTH uses data libraries which are convenient to couple with MC codes. ► TTA and matrix exponential methods are implemented and compared. ► DEPTH is able to calculate integral quantities based on the matrix inverse. ► Code-to-code comparisons prove the accuracy and efficiency of DEPTH. -- Abstract: Burnup analysis is an important aspect of reactor physics, generally done by coupling transport calculations with point-depletion calculations. DEPTH is a newly-developed point-depletion code for handling large burnup depletion systems and detailed depletion chains. For better coupling with Monte Carlo transport codes, DEPTH uses data libraries based on the combination of ORIGEN-2 and ORIGEN-S and allows users to assign problem-dependent libraries for each depletion step. DEPTH implements various algorithms for treating stiff depletion systems, including the transmutation trajectory analysis (TTA), the Chebyshev Rational Approximation Method (CRAM), the Quadrature-based Rational Approximation Method (QRAM) and the Laguerre Polynomial Approximation Method (LPAM). Three different modes are supported by DEPTH to execute decay, constant-flux and constant-power calculations. In addition to obtaining the instantaneous quantities of radioactivity, decay heats and reaction rates, DEPTH is able to calculate integral quantities with a time-integrated solver. Calculations compared against ORIGEN-2 prove the validity of DEPTH in point-depletion calculations. The accuracy and efficiency of the depletion algorithms are also discussed. In addition, an actual pin-cell burnup case is calculated to illustrate the performance of the DEPTH code in coupling with the RMC Monte Carlo code.
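
The point-depletion problem these algorithms solve is the linear system dN/dt = AN, whose formal solution is N(t) = exp(At) N(0). A minimal sketch of the matrix-exponential route (pure Python, a two-nuclide decay chain, and a plain truncated Taylor series rather than DEPTH's stiffness-aware TTA/CRAM/QRAM solvers; the decay constants are invented):

```python
import math

# Solve dN/dt = A N for a two-nuclide decay chain 1 -> 2 -> (out) via a
# truncated Taylor series for exp(A t). Illustrative only: production codes
# like DEPTH use stiffness-aware methods (TTA, CRAM, QRAM) instead, because
# a plain Taylor series fails for realistically stiff depletion matrices.

def expm_taylor(A, t, terms=60):
    """exp(A*t) for a small dense matrix A, by truncated Taylor series."""
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]            # running term (A t)^k / k!
    for k in range(1, terms):
        term = [[sum(term[i][m] * A[m][j] * t / k for m in range(n))
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

lam1, lam2 = 0.3, 0.05                 # decay constants (1/s), invented
A = [[-lam1, 0.0],
     [lam1, -lam2]]
t = 10.0
M = expm_taylor(A, t)
N0 = [1000.0, 0.0]                     # initial inventory of nuclide 1
N = [sum(M[i][j] * N0[j] for j in range(2)) for i in range(2)]
```

The result can be cross-checked against the analytic Bateman solution, N1(t) = N1(0) e^(-lam1 t) and N2(t) = N1(0) lam1/(lam2-lam1) (e^(-lam1 t) - e^(-lam2 t)).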

  17. Development of CAP code for nuclear power plant containment: Lumped model

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Soon Joon, E-mail: sjhong90@fnctech.com [FNC Tech. Co. Ltd., Heungdeok 1 ro 13, Giheung-gu, Yongin-si, Gyeonggi-do 446-908 (Korea, Republic of); Choo, Yeon Joon; Hwang, Su Hyun; Lee, Byung Chul [FNC Tech. Co. Ltd., Heungdeok 1 ro 13, Giheung-gu, Yongin-si, Gyeonggi-do 446-908 (Korea, Republic of); Ha, Sang Jun [Central Research Institute, Korea Hydro & Nuclear Power Company, Ltd., 70, 1312-gil, Yuseong-daero, Yuseong-gu, Daejeon 305-343 (Korea, Republic of)

    2015-09-15

    Highlights: • State-of-the-art containment analysis code, CAP, has been developed. • CAP uses 3-field equations, a water-level-oriented upwind scheme, and a local head model. • CAP has a function for linked calculation with a reactor coolant system code. • CAP code assessments showed appropriate prediction capabilities. - Abstract: The CAP (nuclear Containment Analysis Package) code has been developed within the Korean nuclear community for the analysis of nuclear containment thermal hydraulic behaviors, including pressure and temperature trends and hydrogen concentration. The lumped model of the CAP code uses 2-phase, 3-field equations for fluid behaviors, and has appropriate constitutive equations, a 1-dimensional heat conductor model, component models, trip and control models, and special process models. CAP can run in a standalone mode or in a linked mode with a reactor coolant system analysis code. The linked mode enables a more realistic calculation of the containment response and is expected to be applicable to more complicated advanced plant design calculations. CAP code assessments were carried out by gradual approaches: conceptual problems, fundamental phenomena, component and principal phenomena, experimental validation, and finally comparison with other code calculations on the basis of important phenomena identification. The assessments showed appropriate prediction capabilities of CAP.
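
A lumped containment model tracks mass and energy in a small number of control volumes. As a greatly simplified illustration of why pressure trends follow from such balances (a single volume, ideal-gas partial pressures, invented numbers; not CAP's 2-phase, 3-field formulation):

```python
# Minimal lumped-volume pressure estimate: treat the containment atmosphere
# as an ideal gas mixture in a fixed free volume at a known temperature.
# Illustrative sketch only; CAP solves full 2-phase, 3-field balance
# equations with constitutive models, not this one-line gas law.

R_AIR = 287.0     # J/(kg K), specific gas constant of air
R_STEAM = 461.5   # J/(kg K), specific gas constant of steam

def containment_pressure(m_air, m_steam, T, V):
    """Total pressure (Pa) as the sum of ideal-gas partial pressures."""
    return (m_air * R_AIR + m_steam * R_STEAM) * T / V

V = 5.0e4                                            # m^3 free volume (assumed)
p0 = containment_pressure(6.0e4, 0.0, 300.0, V)      # dry air only
p1 = containment_pressure(6.0e4, 2.0e3, 330.0, V)    # after a steam release
```

Releasing steam both adds gas mass and raises temperature, so the computed pressure rises; a lumped code essentially integrates this kind of balance in time with far more complete physics.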

  19. The Representation of Abstract Words: What Matters? Reply to Paivio's (2013) Comment on Kousta et al. (2011)

    Science.gov (United States)

    Vigliocco, Gabriella; Kousta, Stavroula; Vinson, David; Andrews, Mark; Del Campo, Elena

    2013-01-01

    In Kousta, Vigliocco, Vinson, Andrews, and Del Campo (2011), we presented an embodied theory of semantic representation, which crucially included abstract concepts as internally embodied via affective states. Paivio (2013) took issue with our treatment of dual coding theory, our reliance on data from lexical decision, and our theoretical proposal.…

  20. Programme and abstracts

    International Nuclear Information System (INIS)

    1975-01-01

    Abstracts of 25 papers presented at the congress are given. The abstracts cover various topics including radiotherapy, radiopharmaceuticals, radioimmunoassay, health physics, radiation protection and nuclear medicine

  1. 2018 Congress Podium Abstracts

    Science.gov (United States)

    2018-02-21

    Each abstract has been indexed according to first author. Abstracts appear as they were submitted and have not undergone editing or the Oncology Nursing Forum’s review process. Only abstracts that will be presented appear here. For Congress scheduling information, visit congress.ons.org or check the Congress guide. Data published in abstracts presented at the ONS 43rd Annual Congress are embargoed until the conclusion of the presentation. Coverage and/or distribution of an abstract, poster, or any of its supplemental material to or by the news media, any commercial entity, or individuals, including the authors of said abstract, is strictly prohibited until the embargo is lifted. Promotion of general topics and speakers is encouraged within these guidelines.

  2. Serial and parallel processing in reading: investigating the effects of parafoveal orthographic information on nonisolated word recognition.

    Science.gov (United States)

    Dare, Natasha; Shillcock, Richard

    2013-01-01

    We present a novel lexical decision task and three boundary paradigm eye-tracking experiments that clarify the picture of parallel processing in word recognition in context. First, we show that lexical decision is facilitated by associated letter information to the left and right of the word, with no apparent hemispheric specificity. Second, we show that parafoveal preview of a repeat of word n at word n + 1 facilitates reading of word n relative to a control condition with an unrelated word at word n + 1. Third, using a version of the boundary paradigm that allowed for a regressive eye movement, we show no parafoveal "postview" effect on reading word n of repeating word n at word n - 1. Fourth, we repeat the second experiment but compare the effects of parafoveal previews consisting of a repeated word n with a transposed central bigram (e.g., caot for coat) and a substituted central bigram (e.g., ceit for coat), showing the latter to have a deleterious effect on processing word n, thereby demonstrating that the parafoveal preview effect is at least orthographic and not purely visual.

  3. Development of a general coupling interface for the fuel performance code TRANSURANUS – Tested with the reactor dynamics code DYN3D

    International Nuclear Information System (INIS)

    Holt, L.; Rohde, U.; Seidl, M.; Schubert, A.; Van Uffelen, P.; Macián-Juan, R.

    2015-01-01

    Highlights: • A general coupling interface was developed for couplings of the TRANSURANUS code. • With this new tool simplified fuel behavior models in codes can be replaced. • Applicable e.g. for several reactor types and from normal operation up to DBA. • The general coupling interface was applied to the reactor dynamics code DYN3D. • The new coupled code system DYN3D–TRANSURANUS was successfully tested for RIA. - Abstract: A general interface is presented for coupling the TRANSURANUS fuel performance code with thermal hydraulics system, sub-channel thermal hydraulics, computational fluid dynamics (CFD) or reactor dynamics codes. As first application the reactor dynamics code DYN3D was coupled at assembly level in order to describe the fuel behavior in more detail. In the coupling, DYN3D provides process time, time-dependent rod power and thermal hydraulics conditions to TRANSURANUS, which in case of the two-way coupling approach transfers parameters like fuel temperature and cladding temperature back to DYN3D. Results of the coupled code system are presented for the reactivity transient scenario, initiated by control rod ejection. More precisely, the two-way coupling approach systematically calculates higher maximum values for the node fuel enthalpy. These differences can be explained thanks to the greater detail in fuel behavior modeling. The numerical performance for DYN3D–TRANSURANUS was proved to be fast and stable. The coupled code system can therefore improve the assessment of safety criteria, at a reasonable computational cost
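
The two-way exchange described above amounts to a fixed-point iteration between two solvers within a time step: one code supplies rod power, the other returns fuel temperature, until the pair stops changing. A toy sketch (invented linear feedback models, not the DYN3D or TRANSURANUS physics):

```python
# Toy two-way coupling loop: a "neutronics" solver returns rod power from
# fuel temperature (negative Doppler feedback), a "fuel" solver returns
# temperature from power. All coefficients are invented for illustration;
# the real DYN3D-TRANSURANUS exchange passes many more fields per node.

def neutronics_power(T_fuel):
    """Rod power (W) with a simple negative Doppler feedback term."""
    return 2.0e4 * (1.0 - 5.0e-5 * (T_fuel - 600.0))

def fuel_temperature(power):
    """Fuel temperature (K) from a linear thermal resistance."""
    return 400.0 + 0.02 * power

def couple(tol=1e-8, max_iter=100):
    """Iterate the two solvers to a converged (power, temperature) pair."""
    T = 600.0                                 # initial guess
    for _ in range(max_iter):
        P = neutronics_power(T)               # code 1 -> code 2
        T_new = fuel_temperature(P)           # code 2 -> code 1
        if abs(T_new - T) < tol:
            return P, T_new
        T = T_new
    raise RuntimeError("coupling did not converge")

P, T = couple()
```

Because the combined feedback here is a mild contraction, the loop converges in a handful of exchanges; a general coupling interface mainly standardizes what gets exchanged and when, not this iteration logic itself.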

  4. Jointly Decoded Raptor Codes: Analysis and Design for the BIAWGN Channel

    Directory of Open Access Journals (Sweden)

    Venkiah Auguste

    2009-01-01

    Full Text Available Abstract We are interested in the analysis and optimization of Raptor codes under a joint decoding framework, that is, when the precode and the fountain code exchange soft information iteratively. We develop an analytical asymptotic convergence analysis of the joint decoder, derive an optimization method for the design of efficient output degree distributions, and show that the new optimized distributions outperform the existing ones, both at long and moderate lengths. We also show that jointly decoded Raptor codes are robust to channel variation: they perform reasonably well over a wide range of channel capacities. This robustness property was already known for the erasure channel but not for the Gaussian channel. Finally, we discuss some finite-length code design issues. Contrary to what is commonly believed, we show by simulations that using a relatively low rate for the precode, we can greatly improve the error-floor performance of the Raptor code.
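
For background, the fountain-code layer of a Raptor code can be illustrated with an LT-style peeling decoder over the erasure channel (the joint decoder in the abstract instead exchanges soft information with the precode over the BIAWGN channel; this toy covers only the simpler erasure case, and all data below is invented):

```python
# Background sketch: LT-style fountain symbols over the *erasure* channel,
# decoded by peeling. Each received symbol is the XOR of a known subset of
# input symbols; the decoder resolves degree-1 equations and substitutes.
# The BIAWGN joint decoder in the abstract propagates soft information
# instead; this toy only illustrates the fountain-code layer.

def _xor_known(idx, recovered):
    """XOR of the already-recovered input symbols appearing in an equation."""
    acc = 0
    for i in idx:
        if recovered[i] is not None:
            acc ^= recovered[i]
    return acc

def peeling_decode(symbols, k):
    """Resolve degree-1 equations, substitute, repeat until no progress."""
    recovered = [None] * k
    eqs = [(set(idx), val) for idx, val in symbols]
    progress = True
    while progress:
        progress = False
        for idx, val in eqs:
            if len(idx) == 1:
                (i,) = idx
                if recovered[i] is None:
                    recovered[i] = val
                    progress = True
        # fold newly known symbols into the remaining equations
        eqs = [({i for i in idx if recovered[i] is None},
                val ^ _xor_known(idx, recovered)) for idx, val in eqs]
    return recovered

data = [17, 42, 7, 99]                          # invented input symbols
received = [({0}, 17), ({0, 1}, 17 ^ 42),       # surviving output symbols
            ({1, 2}, 42 ^ 7), ({2, 3}, 7 ^ 99)]
decoded = peeling_decode(received, k=4)
```

In a Raptor code the precode then repairs whatever the fountain layer leaves unresolved, which is exactly the interface the joint decoding framework softens.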

  5. From tracking code to analysis: generalised Courant-Snyder theory for any accelerator model

    CERN Document Server

    Forest, Etienne

    2016-01-01

    This book illustrates a theory well suited to tracking codes, which the author has developed over the years. Tracking codes now play a central role in the design and operation of particle accelerators. The theory is fully explained step by step with equations and actual codes that the reader can compile and run with freely available compilers. In this book, the author pursues a detailed approach based on finite “s”-maps, since this is more natural as long as tracking codes remain at the center of accelerator design. The hierarchical nature of software imposes a hierarchy that puts map-based perturbation theory above any other methods. This is not a personal choice: it follows logically from tracking codes overloaded with a truncated power series algebra package. After defining abstractly and briefly what a tracking code is, the author illustrates most of the accelerator perturbation theory using an actual code: PTC. This book may seem like a manual for PTC; however, the reader is encouraged to explore...

  6. The OpenMOC method of characteristics neutral particle transport code

    International Nuclear Information System (INIS)

    Boyd, William; Shaner, Samuel; Li, Lulu; Forget, Benoit; Smith, Kord

    2014-01-01

    Highlights: • An open source method of characteristics neutron transport code has been developed. • OpenMOC shows nearly perfect scaling on CPUs and 30× speedup on GPUs. • Nonlinear acceleration techniques demonstrate a 40× reduction in source iterations. • OpenMOC uses modern software design principles within a C++ and Python framework. • Validation with respect to the C5G7 and LRA benchmarks is presented. - Abstract: The method of characteristics (MOC) is a numerical integration technique for partial differential equations, and has seen widespread use for reactor physics lattice calculations. The exponential growth in computing power has finally brought the possibility for high-fidelity full core MOC calculations within reach. The OpenMOC code is being developed at the Massachusetts Institute of Technology to investigate algorithmic acceleration techniques and parallel algorithms for MOC. OpenMOC is a free, open source code written using modern software languages such as C/C++ and CUDA with an emphasis on extensible design principles for code developers and an easy to use Python interface for code users. The present work describes the OpenMOC code and illustrates its ability to model large problems accurately and efficiently
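
MOC integrates the transport equation along straight characteristics; across a flat-source region the outgoing angular flux obeys psi_out = psi_in exp(-sigma_t l) + (q/sigma_t)(1 - exp(-sigma_t l)). A 1D sketch of such a sweep (single angle, one pass, invented cross sections; far simpler than OpenMOC's 2D track-based source iteration):

```python
import math

# 1D method-of-characteristics sweep through flat-source segments:
#   psi_out = psi_in*exp(-sigma*l) + (q/sigma)*(1 - exp(-sigma*l))
# Illustrative only: a single angle and a single sweep, whereas OpenMOC
# iterates many 2D tracks against an updated scattering/fission source.

def sweep(segments, psi_in):
    """March the angular flux through (sigma_t, flat source, length) segments."""
    psi = psi_in
    for sigma, q, ell in segments:
        att = math.exp(-sigma * ell)
        psi = psi * att + (q / sigma) * (1.0 - att)
    return psi

# A purely absorbing segment followed by a segment with a flat source:
segments = [(0.5, 0.0, 2.0), (1.0, 2.0, 1.0)]   # invented data
psi_exit = sweep(segments, psi_in=4.0)
```

In a full solver this per-track attenuation step is exactly what gets repeated over thousands of characteristics per angle, which is why the per-segment exponential is the hot loop MOC codes optimize.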

  7. 2018 Congress Poster Abstracts

    Science.gov (United States)

    2018-02-21

    Each abstract has been indexed according to the first author. Abstracts appear as they were submitted and have not undergone editing or the Oncology Nursing Forum’s review process. Only abstracts that will be presented appear here. Poster numbers are subject to change. For updated poster numbers, visit congress.ons.org or check the Congress guide. Data published in abstracts presented at the ONS 43rd Annual Congress are embargoed until the conclusion of the presentation. Coverage and/or distribution of an abstract, poster, or any of its supplemental material to or by the news media, any commercial entity, or individuals, including the authors of said abstract, is strictly prohibited until the embargo is lifted. Promotion of general topics and speakers is encouraged within these guidelines.

  8. The role of orthography in the semantic activation of neighbors.

    Science.gov (United States)

    Hino, Yasushi; Lupker, Stephen J; Taylor, Tamsen E

    2012-09-01

    There is now considerable evidence that a letter string can activate semantic information appropriate to its orthographic neighbors (e.g., Forster & Hector's, 2002, TURPLE effect). This phenomenon is the focus of the present research. Using Japanese words, we examined whether semantic activation of neighbors is driven directly by orthographic similarity alone or whether there is also a role for phonological similarity. In Experiment 1, using a relatedness judgment task in which a Kanji word-Katakana word pair was presented on each trial, an inhibitory effect was observed when the initial Kanji word was related to an orthographic and phonological neighbor of the Katakana word target but not when the initial Kanji word was related to a phonological but not orthographic neighbor of the Katakana word target. This result suggests that phonology plays little, if any, role in the activation of neighbors' semantics when reading familiar words. In Experiment 2, the targets were transcribed into Hiragana, a script they are typically not written in, requiring readers to engage in phonological coding. In that experiment, inhibitory effects were observed in both conditions. This result indicates that phonologically mediated semantic activation of neighbors will emerge when phonological processing is necessary in order to understand a written word (e.g., when that word is transcribed into an unfamiliar script). PsycINFO Database Record (c) 2012 APA, all rights reserved.

  9. Impact of orthographic transparency on typical and atypical reading development: evidence in French-Spanish bilingual children.

    Science.gov (United States)

    Lallier, Marie; Valdois, Sylviane; Lassus-Sangosse, Delphine; Prado, Chloé; Kandel, Sonia

    2014-05-01

    The present study aimed to quantify cross-linguistic modulations of the contribution of phonemic awareness skills and visual attention span (VA Span) skills (number of visual elements that can be processed simultaneously) to reading speed and accuracy in 18 Spanish-French balanced bilingual children with and without developmental dyslexia. The children were administered two similar reading batteries in French and Spanish. The deficits of the dyslexic children in reading accuracy were mainly visible in their opaque orthography (French), whereas difficulties indexed by reading speed were observed in both their opaque and transparent orthographies. Dyslexic children did not exhibit any phonemic awareness problems in French or in Spanish, but showed poor VA Span skills compared to their control peers. VA Span skills correlated with reading accuracy and speed measures in both Spanish and French, whereas phonemic awareness correlated with reading accuracy only. Overall, the present results show that the VA Span is tightly related to reading speed regardless of orthographic transparency, and that it accounts for differences in reading performance between good and poor readers across languages. The present findings further suggest that VA Span skills may play a particularly important role in building up specific word knowledge, which is critical for lexical reading strategies. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Assessment of Recovery of Damages in the New Romanian Civil Code

    Directory of Open Access Journals (Sweden)

    Ion Țuțuianu

    2016-01-01

    Full Text Available Abstract: The subject's approach is required also because, once the New Civil Code was adopted, it acquired a new juridical frame, but also a new perspective. A common law creditor who does not obtain the direct execution of his obligation is entitled to be compensated for the damage caused by the non-execution with an amount of money which is equivalent to the benefit that the exact, total, and due execution of the obligation would have brought the creditor. Keywords: interest, damages, civil code, juridical responsibility

  11. Compilation of documented computer codes applicable to environmental assessment of radioactivity releases

    International Nuclear Information System (INIS)

    Hoffman, F.O.; Miller, C.W.; Shaeffer, D.L.; Garten, C.T. Jr.; Shor, R.W.; Ensminger, J.T.

    1977-04-01

    The objective of this paper is to present a compilation of computer codes for the assessment of accidental or routine releases of radioactivity to the environment from nuclear power facilities. The capabilities of 83 computer codes in the areas of environmental transport and radiation dosimetry are summarized in tabular form. This preliminary analysis clearly indicates that the initial efforts in assessment methodology development have concentrated on atmospheric dispersion, external dosimetry, and internal dosimetry via inhalation. The incorporation of terrestrial and aquatic food chain pathways has been a more recent development and reflects the current requirements of environmental legislation and the needs of regulatory agencies. The characteristics of the conceptual models employed by these codes are reviewed. The appendixes include abstracts of the codes and indexes by author, key words, publication description, and title.
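
Many of the atmospheric-dispersion codes in such compilations implement a variant of the Gaussian plume equation. A minimal sketch (standard textbook form with ground reflection; the dispersion parameters sigma_y and sigma_z are taken as given here rather than derived from stability class and downwind distance, as a real code would do):

```python
import math

# Gaussian plume concentration at (y, z) downwind of a continuous point
# source of strength Q at effective height H, with wind speed u along x.
# Textbook form with ground reflection; real assessment codes additionally
# model deposition, depletion, and stability-dependent sigma_y, sigma_z.

def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Concentration (e.g. Bq/m^3 for Q in Bq/s) from the Gaussian plume."""
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2.0 * sigma_z**2)))  # ground image
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centerline concentration for a ground-level release reduces
# to Q / (pi * u * sigma_y * sigma_z):
c = plume_concentration(Q=1.0, u=1.0, y=0.0, z=0.0, H=0.0,
                        sigma_y=1.0, sigma_z=1.0)
```

The ground-image term doubles the centerline value for a ground release, a standard check when validating a dispersion implementation.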

  12. Tiling as a Durable Abstraction for Parallelism and Data Locality

    Energy Technology Data Exchange (ETDEWEB)

    Unat, Didem [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chan, Cy P. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Zhang, Weiqun [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bell, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shalf, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2013-11-18

    Tiling is a useful loop transformation for expressing parallelism and data locality. Automated tiling transformations that preserve data-locality are increasingly important due to hardware trends towards massive parallelism and the increasing costs of data movement relative to the cost of computing. We propose TiDA as a durable tiling abstraction that centralizes parameterized tiling information within array data types with minimal changes to the source code. The data layout information can be used by the compiler and runtime to automatically manage parallelism, optimize data locality, and schedule tasks intelligently. In this study, we present the design features and early interface of TiDA along with some preliminary results.
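
The tiling transformation itself can be shown with a simple blocked loop nest. A sketch (pure Python tiled transpose; TiDA additionally stores the tiling parameters inside the array data type so the compiler and runtime can schedule tiles, which this toy does not model):

```python
# Tiled traversal of a 2D array: iterate tile-by-tile so each TILE x TILE
# block stays cache-resident while it is worked on. Toy sketch only; TiDA
# centralizes the tile parameters in the array type rather than in loops.

TILE = 4  # tile edge length (illustrative)

def transpose_tiled(a):
    """Transpose a list-of-lists matrix using a blocked loop nest."""
    n, m = len(a), len(a[0])
    out = [[0] * n for _ in range(m)]
    for ii in range(0, n, TILE):                     # loop over tiles
        for jj in range(0, m, TILE):
            for i in range(ii, min(ii + TILE, n)):   # loop within a tile
                for j in range(jj, min(jj + TILE, m)):
                    out[j][i] = a[i][j]
    return out

a = [[i * 10 + j for j in range(6)] for i in range(5)]
t = transpose_tiled(a)
```

The result is identical to an untiled transpose; only the traversal order (and hence the memory locality) changes, which is exactly why tiling can be hidden behind an array abstraction without altering program semantics.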

  13. A coupled systems code-CFD MHD solver for fusion blanket design

    Energy Technology Data Exchange (ETDEWEB)

    Wolfendale, Michael J., E-mail: m.wolfendale11@imperial.ac.uk; Bluck, Michael J.

    2015-10-15

    Highlights: • A coupled systems code-CFD MHD solver for fusion blanket applications is proposed. • Development of a thermal hydraulic systems code with MHD capabilities is detailed. • A code coupling methodology based on the use of TCP socket communications is detailed. • Validation cases are briefly discussed for the systems code and coupled solver. - Abstract: The network of flow channels in a fusion blanket can be modelled using a 1D thermal hydraulic systems code. For more complex components such as junctions and manifolds, the simplifications employed in such codes can become invalid, requiring more detailed analyses. For magnetic confinement reactor blanket designs using a conducting fluid as coolant/breeder, the difficulties in flow modelling are particularly severe due to MHD effects. Blanket analysis is an ideal candidate for the application of a code coupling methodology, with a thermal hydraulic systems code modelling portions of the blanket amenable to 1D analysis, and CFD providing detail where necessary. A systems code, MHD-SYS, has been developed and validated against existing analyses. The code shows good agreement in the prediction of MHD pressure loss and the temperature profile in the fluid and wall regions of the blanket breeding zone. MHD-SYS has been coupled to an MHD solver developed in OpenFOAM and the coupled solver validated for test geometries in preparation for modelling blanket systems.
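
A code-coupling exchange over sockets, as described above, reduces to passing fixed-format binary messages between two processes. A minimal sketch (a Python socketpair stands in for the TCP connection, two threads stand in for the two codes, and the 3-double message layout is an invented assumption, not the actual MHD-SYS/OpenFOAM protocol):

```python
import socket
import struct
import threading

# Minimal field exchange between two coupled "codes" over a socket.
# A socketpair stands in for a TCP connection; the message format
# (3 network-order doubles each way) is an illustrative assumption.

def systems_code(sock):
    """1D systems code side: send boundary conditions, receive wall temps."""
    bc = (450.0, 1.2e5, 0.85)                  # T_in (K), p_in (Pa), mdot (kg/s)
    sock.sendall(struct.pack("!3d", *bc))
    wall = struct.unpack("!3d", sock.recv(24))  # 3 doubles = 24 bytes
    sock.close()
    return wall

def cfd_code(sock):
    """CFD side: receive boundary conditions, return computed wall temps."""
    t_in, p_in, mdot = struct.unpack("!3d", sock.recv(24))
    wall = (t_in + 5.0, t_in + 8.0, t_in + 6.5)  # stand-in for a CFD solve
    sock.sendall(struct.pack("!3d", *wall))
    sock.close()

a, b = socket.socketpair()
t = threading.Thread(target=cfd_code, args=(b,))
t.start()
result = systems_code(a)
t.join()
```

Fixing the message layout up front is what lets the two codes evolve independently, which is the main practical appeal of socket-based coupling over source-level integration.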

  14. Towards provably correct code generation for a hard real-time programming language

    DEFF Research Database (Denmark)

    Fränzle, Martin; Müller-Olm, Markus

    1994-01-01

    This paper sketches a hard real-time programming language featuring operators for expressing timeliness requirements in an abstract, implementation-independent way and presents parts of the design and verification of a provably correct code generator for that language. The notion of implementation...

  15. Method and device for fast code acquisition in spread spectrum receivers

    NARCIS (Netherlands)

    Coenen, A.J.R.M.

    1993-01-01

    Abstract of NL 9101155 (A) Method for code acquisition in a satellite receiver. The biphase-modulated high-frequency carrier transmitted by a satellite is converted via a fixed local oscillator frequency down to the baseband, whereafter the baseband signal is fed via a bandpass filter, which has an

  16. The Need to Remove the Civil Code from Mexican Commercial Laws: The Case of “Offers” and “Firm Promises”

    OpenAIRE

    Iturralde González, Raúl

    2017-01-01

    Abstract: In 1889, then Mexican President Porfirio Díaz enacted the Mexican Commercial Code that is still in force today. This code was inspired by the Napoleonic code of 1807. Unfortunately, the Mexican code eliminated the use of commercial customs and practices as an accepted method for bridging gaps in commercial law. Since then, Mexican commercial law has held the civil code as the basis for dealing with gaps and loopholes in the application of commercial law. This has prevented the furt...

  17. Recent developments of JAEA’s Monte Carlo code MVP for reactor physics applications

    International Nuclear Information System (INIS)

    Nagaya, Yasunobu; Okumura, Keisuke; Mori, Takamasa

    2015-01-01

    Highlights: • This paper describes the recent development status of the Monte Carlo code MVP. • The basic features and capabilities of MVP are briefly described. • New capabilities useful for reactor analysis are also described. - Abstract: This paper describes the recent development status of a Monte Carlo code MVP developed at Japan Atomic Energy Agency. The basic features and capabilities of MVP are overviewed. In addition, new capabilities useful for reactor analysis are also described

  18. Workshop on Smart Structures (1st) Held at The University of Texas at Arlington on September 22-24 1993. Collection of Extended Abstracts

    Science.gov (United States)

    1994-06-01

    Report keywords: Workshop, Smart Structures, Advanced Materials, Networks, Neural Networks, Memory Alloys. [Standard report documentation form fields (price code, security classification: UNCLASSIFIED) omitted.] ...separate bolt secures the actuator to the end piece, keeping the end rigidly constrained. At the tip of the magnetostrictive actuator (i.e. the push

  19. Many Neighbors are not Silent. fMRI Evidence for Global Lexical Activity in Visual Word Recognition.

    Directory of Open Access Journals (Sweden)

    Mario Braun

    2015-07-01

    Full Text Available Many neurocognitive studies investigated the neural correlates of visual word recognition, some of which manipulated the orthographic neighborhood density of words and nonwords, believed to influence the activation of orthographically similar representations in a hypothetical mental lexicon. Previous neuroimaging research failed to find evidence for such global lexical activity associated with neighborhood density. Rather, effects were interpreted to reflect semantic or domain-general processing. The present fMRI study revealed effects of lexicality, orthographic neighborhood density, and a lexicality by orthographic neighborhood density interaction in a silent reading task. For the first time, we found greater activity for words and nonwords with a high number of neighbors. We propose that this activity in the dorsomedial prefrontal cortex reflects activation of orthographically similar codes in verbal working memory, thus providing evidence for global lexical activity as the basis of the neighborhood density effect. The interaction of lexicality by neighborhood density in the ventromedial prefrontal cortex showed lower activity in response to words with a high number of neighbors compared to nonwords with a high number of neighbors. In the light of these results, the facilitatory effect for words and inhibitory effect for nonwords with many neighbors observed in previous studies can be understood as being due to the operation of a fast-guess mechanism for words and a temporal-deadline mechanism for nonwords, as predicted by models of visual word recognition. Furthermore, we propose that the lexicality effect, with higher activity for words compared to nonwords in inferior parietal and middle temporal cortex, reflects the operation of an identification mechanism based on local lexico-semantic activity.

  20. Program and abstracts

    Energy Technology Data Exchange (ETDEWEB)

    1975-01-01

    Abstracts of the papers given at the conference are presented. The abstracts are arranged under sessions entitled: Theoretical Physics; Nuclear Physics; Solid State Physics; Spectroscopy; Physics Education; SANCGASS; Astronomy; Plasma Physics; Physics in Industry; Applied and General Physics.

  1. Program and abstracts

    International Nuclear Information System (INIS)

    1975-01-01

    Abstracts of the papers given at the conference are presented. The abstracts are arranged under sessions entitled: Theoretical Physics; Nuclear Physics; Solid State Physics; Spectroscopy; Physics Education; SANCGASS; Astronomy; Plasma Physics; Physics in Industry; Applied and General Physics.

  2. Program and abstracts

    International Nuclear Information System (INIS)

    1976-01-01

    Abstracts of the papers given at the conference are presented. The abstracts are arranged under sessions entitled: Theoretical Physics; Nuclear Physics; Solid State Physics; Spectroscopy; Plasma Physics; Solar-Terrestrial Physics; Astrophysics and Astronomy; Radioastronomy; General Physics; Applied Physics; Industrial Physics

  3. Introduction to abstract algebra

    CERN Document Server

    Nicholson, W Keith

    2012-01-01

    Praise for the Third Edition ". . . an expository masterpiece of the highest didactic value that has gained additional attractivity through the various improvements . . ."-Zentralblatt MATH The Fourth Edition of Introduction to Abstract Algebra continues to provide an accessible approach to the basic structures of abstract algebra: groups, rings, and fields. The book's unique presentation helps readers advance to abstract theory by presenting concrete examples of induction, number theory, integers modulo n, and permutations before the abstract structures are defined. Readers can immediately be

  4. Abstracting Concepts and Methods.

    Science.gov (United States)

    Borko, Harold; Bernier, Charles L.

    This text provides a complete discussion of abstracts--their history, production, organization, publication--and of indexing. Instructions for abstracting are outlined, and standards and criteria for abstracting are stated. Management, automation, and personnel are discussed in terms of possible economies that can be derived from the introduction…

  5. The development and application of a sub-channel code in ocean environment

    International Nuclear Information System (INIS)

    Wu, Pan; Shan, Jianqiang; Xiang, Xiong; Zhang, Bo; Gou, Junli; Zhang, Bin

    2016-01-01

    Highlights: • A sub-channel code named ATHAS/OE is developed for nuclear reactors in ocean environment. • ATHAS/OE is verified against another modified sub-channel code based on COBRA-IV. • ATHAS/OE is used to analyze the thermal hydraulics of a typical SMR in heaving and rolling motion. • Calculation results show that ocean conditions affect the thermal hydraulics of a reactor significantly. - Abstract: An upgraded version of the ATHAS sub-channel code, ATHAS/OE, is developed for the investigation of the thermal hydraulic behavior of a nuclear reactor core in ocean environment, with consideration of heaving and rolling motion effects. The code is verified against another modified sub-channel code based on COBRA-IV and used to analyze the thermal hydraulic characteristics of a typical SMR under heaving and rolling motion conditions. The calculation results show that heaving and rolling motion affect the thermal hydraulic behavior of a reactor significantly.
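The key modelling difference in ocean environment is that the body force driving natural circulation varies with ship motion. A minimal sketch of the effective axial gravity seen by a vertical flow channel under sinusoidal heaving and rolling (all amplitudes and periods are assumed values for illustration, not ATHAS/OE's model):

```python
import math

def effective_axial_gravity(t, g=9.81, heave_amp=2.0, heave_period=10.0,
                            roll_amp_deg=15.0, roll_period=12.0):
    """Effective axial acceleration (m/s^2) for a vertical channel on a ship.
    Heaving adds an inertial acceleration; rolling tilts the channel so only
    the cosine component of gravity acts along its axis."""
    omega_h = 2.0 * math.pi / heave_period
    omega_r = 2.0 * math.pi / roll_period
    heave_acc = heave_amp * omega_h ** 2 * math.sin(omega_h * t)
    roll_angle = math.radians(roll_amp_deg) * math.sin(omega_r * t)
    return (g + heave_acc) * math.cos(roll_angle)
```

At t = 0 both motions pass through their neutral position, so the function returns the static value g; over a cycle the oscillating term periodically strengthens and weakens the buoyancy head, which is why natural-circulation flow in such codes oscillates with the ship motion.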

  6. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  7. An Analysis of the Relationship between IFAC Code of Ethics and CPI

    Directory of Open Access Journals (Sweden)

    Ayşe İrem Keskin

    2015-11-01

    Full Text Available Abstract: Codes of ethics have become a significant concept in the business world, which is why occupational organizations have developed their own codes of ethics over time. In this study, a compatibility classification of the accounting code of ethics of the IFAC (The International Federation of Accountants) is first carried out on the basis of the action plans assessing the levels of usage by the 175 IFAC national accounting organizations. The classification shows that 60.6% of the member organizations apply the IFAC code in general, while the remaining 39.4% do not apply the code at all. With this classification, the hypothesis that “The national accounting organizations in highly corrupt countries would be less likely to adopt the IFAC ethics code than those in very clean countries” is tested using the “Corruption Perception Index (CPI)” data. The findings support this hypothesis.

  8. The Vulnerability Assessment Code for Physical Protection System

    International Nuclear Information System (INIS)

    Jang, Sung Soon; Yoo, Ho Sik

    2007-01-01

    To neutralize the increasing terror threats, nuclear facilities have a strong physical protection system (PPS). A PPS includes detectors, door locks, fences, regular guard patrols, and a hot line to the nearest military force. To design an efficient PPS and to fully operate it, a vulnerability assessment process is required. Evaluating the PPS of a nuclear facility is a complicated process and, hence, several assessment codes have been developed. The estimation of adversary sequence interruption (EASI) code analyzes vulnerability along a single intrusion path. To evaluate many paths to a valuable asset in an actual facility, the systematic analysis of vulnerability to intrusion (SAVI) code was developed. KAERI improved SAVI and made the Korean analysis of vulnerability to intrusion (KAVI) code. Existing codes (SAVI and KAVI) have limitations in representing the distances within a facility because they use a simplified model of a PPS called the adversary sequence diagram. In an adversary sequence diagram the positions of doors, sensors and fences are described only by the area in which they are located. Thus, the distance between elements is inaccurate and the range effect of sensors cannot be reflected. In this abstract, we suggest an accurate and intuitive vulnerability assessment based on raster map modeling of the PPS. The raster map of a PPS accurately represents the relative positions of elements and, thus, the range effect of sensors can be easily incorporated. Most importantly, the raster map is easy to understand.
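The raster-map idea can be sketched as a shortest-path search: give each cell a detection probability, weight it by -log(1 - p) so that per-cell detection probabilities multiply along a path, and run Dijkstra to find the route an adversary would prefer. This is an illustrative sketch of the general technique, not the KAVI algorithm itself; the grid values are invented:

```python
import heapq
import math

def detection_probability_of_best_path(p_detect, start, goal):
    """Cumulative detection probability along the least-detected intrusion
    path on a raster map (4-connected grid). The start cell's sensor is
    assumed not to count, as the adversary begins there."""
    rows, cols = len(p_detect), len(p_detect[0])
    best = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        c, cell = heapq.heappop(pq)
        if cell == goal:
            return 1.0 - math.exp(-c)   # convert summed weights back to a probability
        if c > best.get(cell, math.inf):
            continue                     # stale queue entry
        r, k = cell
        for dr, dk in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nk = r + dr, k + dk
            if 0 <= nr < rows and 0 <= nk < cols:
                new_cost = c - math.log(1.0 - p_detect[nr][nk])
                if new_cost < best.get((nr, nk), math.inf):
                    best[(nr, nk)] = new_cost
                    heapq.heappush(pq, (new_cost, (nr, nk)))
    return None

# High-probability sensors (0.9) block the direct route; the best path detours
# through six 0.1 cells, giving overall detection 1 - 0.9**6.
grid = [[0.1, 0.9, 0.1],
        [0.1, 0.9, 0.1],
        [0.1, 0.1, 0.1]]
p = detection_probability_of_best_path(grid, (0, 0), (0, 2))
```

Because cell positions are explicit, a sensor's range effect is modelled simply by raising p_detect in every cell the sensor covers, which is exactly what the adversary sequence diagram cannot express.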

  9. MCOR - Monte Carlo depletion code for reference LWR calculations

    Energy Technology Data Exchange (ETDEWEB)

    Puente Espel, Federico, E-mail: fup104@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Tippayakul, Chanatip, E-mail: cut110@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Ivanov, Kostadin, E-mail: kni1@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Misu, Stefan, E-mail: Stefan.Misu@areva.com [AREVA, AREVA NP GmbH, Erlangen (Germany)

    2011-04-15

    Research highlights: > Introduction of a reference Monte Carlo based depletion code with extended capabilities. > Verification and validation results for MCOR. > Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 with the AREVA NP depletion code, KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The verification of the coupling was made by evaluating the MCOR code against similar sophisticated code systems like MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code has been further improved with important features. The MCOR code presents several valuable capabilities such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes and usage of the KORIGEN libraries for PWR and BWR typical spectra for the remaining isotopes. Besides the just mentioned capabilities, the MCOR code newest enhancements focus on the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test-matrix, just to name the most important. The article describes the capabilities of the MCOR code system; from its design and development to its latest improvements and further ameliorations. Additionally

  10. MCOR - Monte Carlo depletion code for reference LWR calculations

    International Nuclear Information System (INIS)

    Puente Espel, Federico; Tippayakul, Chanatip; Ivanov, Kostadin; Misu, Stefan

    2011-01-01

    Research highlights: → Introduction of a reference Monte Carlo based depletion code with extended capabilities. → Verification and validation results for MCOR. → Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 with the AREVA NP depletion code, KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The verification of the coupling was made by evaluating the MCOR code against similar sophisticated code systems like MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code has been further improved with important features. The MCOR code presents several valuable capabilities such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes and usage of the KORIGEN libraries for PWR and BWR typical spectra for the remaining isotopes. Besides the just mentioned capabilities, the MCOR code newest enhancements focus on the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test-matrix, just to name the most important. The article describes the capabilities of the MCOR code system; from its design and development to its latest improvements and further ameliorations
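The predictor-corrector depletion algorithm listed among MCOR's capabilities can be illustrated for a single nuclide. This is a sketch of the scheme in its general form, not MCOR's actual implementation; the function names are assumptions:

```python
import math

def predictor_corrector_step(n0, reaction_rate, decay, dt):
    """One predictor-corrector burnup step for a nuclide whose one-group
    reaction rate may depend on its own density (e.g. self-shielding).
    Predictor: deplete with the beginning-of-step (BOS) rate.
    Corrector: re-evaluate the rate at the predicted end-of-step (EOS)
    state and deplete with the average of the two rates."""
    r0 = reaction_rate(n0) + decay             # removal rate at BOS
    n_pred = n0 * math.exp(-r0 * dt)           # predictor
    r1 = reaction_rate(n_pred) + decay         # rate at predicted EOS
    return n0 * math.exp(-0.5 * (r0 + r1) * dt)

# With a density-independent rate the scheme reduces to exact exponential decay.
n1 = predictor_corrector_step(1.0, lambda n: 0.0, decay=0.1, dt=1.0)
```

The payoff of the corrector pass appears when the rate changes over the step, as it does for strong absorbers such as the gadolinium rings mentioned above: the averaged rate tracks the true solution with far larger time steps than a single beginning-of-step evaluation would allow.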

  11. Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code

    Science.gov (United States)

    Taherkhani, Ahmad; Malmi, Lauri

    2013-01-01

    In this paper, we present a method for recognizing algorithms from students' programming submissions coded in Java. The method is based on the concepts of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…

  12. Completeness of Lyapunov Abstraction

    Directory of Open Access Journals (Sweden)

    Rafael Wisniewski

    2013-08-01

    Full Text Available In this work, we continue our study on discrete abstractions of dynamical systems. To this end, we use a family of partitioning functions to generate an abstraction. The intersection of sub-level sets of the partitioning functions defines cells, which are regarded as discrete objects. The union of cells makes up the state space of the dynamical system. Our construction gives rise to a combinatorial object - a timed automaton. We examine sound and complete abstractions. An abstraction is said to be sound when the flow of the timed automaton covers the flow lines of the dynamical system. If the dynamics of the dynamical system and the timed automaton are equivalent, the abstraction is complete. The commonly accepted paradigm for partitioning functions is that they ought to be transversal to the studied vector field. We show that there is no complete partitioning with transversal functions, even for particular dynamical systems whose critical sets are isolated critical points. Therefore, we allow the directional derivative along the vector field to be non-positive in this work. This considerably complicates the abstraction technique. For understanding dynamical systems, it is vital to study stable and unstable manifolds and their intersections. These objects appear naturally in this work. Indeed, we show that for an abstraction to be complete, the set of critical points of an abstraction function shall contain either the stable or unstable manifold of the dynamical system.
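The cell construction from intersections of sub-level sets can be sketched concretely: a cell is indexed by recording, for each partitioning function, the first level its value stays below. The quadratic partitioning functions and level values here are invented for illustration only:

```python
def cell_index(x, partitioning_functions, levels):
    """Index of the abstraction cell containing state x. For each
    partitioning function f, record the first level L with f(x) <= L;
    values above all levels fall in an outermost band."""
    idx = []
    for f in partitioning_functions:
        v = f(x)
        k = next((i for i, L in enumerate(levels) if v <= L), len(levels))
        idx.append(k)
    return tuple(idx)

# Two quadratic partitioning functions on the plane (assumed for illustration).
fs = [lambda x: x[0] ** 2, lambda x: x[1] ** 2]
print(cell_index((0.5, 1.5), fs, levels=[1.0, 4.0]))  # (0, 1)
```

The discrete state space of the timed automaton is then the set of such index tuples, and transitions connect cells that the flow can move between.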

  13. A Message Without a Code?

    Directory of Open Access Journals (Sweden)

    Tom Conley

    1981-01-01

    Full Text Available The photographic paradox is said to be that of a message without a code, a communication lacking a relay or gap essential to the process of communication. Tracing the recurrence of Barthes's definition in the essays included in Image/Music/Text and in La Chambre claire, this paper argues that Barthes's definition is platonic in its will to dematerialize the troubling — graphic — immediacy of the photograph. He writes of the image in order to flee its signature. As a function of media, his categories are written in order to be insufficient and inadequate; to maintain an ineluctable difference between language heard and letters seen; to protect an idiom of loss which the photograph disallows. The article studies the strategies of his definition in «The Photographic Paradox» as an instrument of abstraction, opposes the notion of code, in an aural sense, to audio-visual markers of closed relay in advertising, and critiques the layout and order of La Chambre claire with respect to Barthes's ideology of absence.

  14. Author Details

    African Journals Online (AJOL)

    Banda, Felix. Vol 38, No 1 (2018) - Articles: Voicing in non-click consonants and orthographic design in Khoekhoegowab. Abstract. ISSN: 0257-2117.

  15. Abstract algebra

    CERN Document Server

    Garrett, Paul B

    2007-01-01

    Designed for an advanced undergraduate- or graduate-level course, Abstract Algebra provides an example-oriented, less heavily symbolic approach to abstract algebra. The text emphasizes specifics such as basic number theory, polynomials, finite fields, as well as linear and multilinear algebra. This classroom-tested, how-to manual takes a more narrative approach than the stiff formalism of many other textbooks, presenting coherent storylines to convey crucial ideas in a student-friendly, accessible manner. An unusual feature of the text is the systematic characterization of objects by universal

  16. Nuclear structure references coding manual

    International Nuclear Information System (INIS)

    Ramavataram, S.; Dunford, C.L.

    1984-02-01

    This manual is intended as a guide to Nuclear Structure References (NSR) compilers. The basic conventions followed at the National Nuclear Data Center (NNDC), which are compatible with the maintenance and updating of and retrieval from the Nuclear Structure References (NSR) file, are outlined. The structure of the NSR file, such as the valid record identifiers, record contents, and text fields, as well as the major topics for which [KEYWORDS] are prepared, are enumerated. Relevant comments regarding a new entry into the NSR file, assignment of [KEYNO ], generation of [SELECTRS] and linkage characteristics are also given. A brief definition of the Keyword abstract is given, followed by specific examples; for each TOPIC, the criteria for inclusion of an article as an entry into the NSR file as well as coding procedures are described. Authors submitting articles to journals which require Keyword abstracts should follow the illustrations. The scope of the literature covered at NNDC, the categorization into Primary and Secondary sources, etc., is discussed. Useful information regarding permitted character sets, recommended abbreviations, etc. is given.

  17. Java Source Code Analysis for API Migration to Embedded Systems

    Energy Technology Data Exchange (ETDEWEB)

    Winter, Victor [Univ. of Nebraska, Omaha, NE (United States); McCoy, James A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guerrero, Jonathan [Univ. of Nebraska, Omaha, NE (United States); Reinke, Carl Werner [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Perry, James Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java’s APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered by APIs and extension libraries can be made available to embedded software developers. The analysis required for Java source code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated with the analysis of incomplete code bases are eliminated.
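One common way to make reference resolution total over an incomplete code base is to resolve names missing from the analysed subset to synthesised placeholder declarations. This is a hypothetical Python illustration of that general idea, not the article's formal treatment:

```python
class Resolver:
    """Element-reference resolution over an incomplete code base: references
    to declarations omitted from the analysed subset resolve to synthesised
    'phantom' declarations instead of failing."""
    def __init__(self, known):
        self.known = dict(known)   # fully-qualified name -> declaration
        self.phantoms = {}

    def resolve(self, name):
        if name in self.known:
            return self.known[name]
        # Type or package omitted from the code base under analysis:
        # synthesise a placeholder so the analysis can proceed.
        return self.phantoms.setdefault(name, f"<phantom {name}>")

r = Resolver({"java.lang.String": "class String"})
known = r.resolve("java.lang.String")      # present in the analysed subset
missing = r.resolve("javax.swing.JFrame")  # omitted -> phantom declaration
```

Keeping the phantoms in a separate table lets a later pass report exactly which abstractions the migration target would still need to supply.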

  18. Types: A data abstraction package in FORTRAN

    International Nuclear Information System (INIS)

    Youssef, S.

    1990-01-01

    TYPES is a collection of Fortran programs which allow the creation and manipulation of abstract ''data objects'' without the need for a preprocessor. Each data object is assigned a ''type'' as it is created, which implies participation in a set of characteristic operations. Available types include scalars, logicals, ordered sets, stacks, queues, sequences, trees, arrays, character strings, block text, histograms, and virtual and allocatable memories. A data object may contain integers, reals, or other data objects in any combination. In addition to the type-specific operations, a set of universal utilities allows for copying, input/output to disk, naming, editing, displaying, user input, interactive creation, tests for equality of contents or structure, and machine-to-machine translation or source code creation for any data object. TYPES is available on VAX/VMS, SUN 3, SPARC, DEC/Ultrix, Silicon Graphics 4D and Cray/Unicos machines. The capabilities of the package are discussed together with characteristic applications and experience in writing the GVerify package.
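The scheme of type-tagged objects with universal utilities can be sketched in a few lines. This is a hypothetical Python rendering of the idea (the original package is Fortran; class and method names are invented):

```python
import copy

class DataObject:
    """A TYPES-style tagged data object: each object carries a type tag that
    selects its characteristic operations, and all objects share universal
    utilities such as copying and tests for equality of structure."""
    def __init__(self, dtype, contents=None):
        self.dtype = dtype
        self.contents = list(contents) if contents is not None else []

    def clone(self):                      # universal utility: copying
        return copy.deepcopy(self)

    def same_structure(self, other):      # universal utility: structural equality
        if not isinstance(other, DataObject) or self.dtype != other.dtype:
            return False
        if len(self.contents) != len(other.contents):
            return False
        return all(a.same_structure(b) if isinstance(a, DataObject)
                   else type(a) == type(b)
                   for a, b in zip(self.contents, other.contents))

# A stack containing integers and a nested ordered set, as the abstract's
# "data objects in any combination" allows.
stack = DataObject("stack", [1, 2, DataObject("set", [3])])
ok = stack.same_structure(stack.clone())
```

Because every utility dispatches on the stored tag rather than on compile-time types, no preprocessor is needed, which is the point the abstract emphasizes.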

  19. Accuracy assessment of a new Monte Carlo based burnup computer code

    International Nuclear Information System (INIS)

    El Bakkari, B.; ElBardouni, T.; Nacir, B.; ElYounoussi, C.; Boulaich, Y.; Meroun, O.; Zoubair, M.; Chakir, E.

    2012-01-01

    Highlights: ► A new burnup code called BUCAL1 was developed. ► BUCAL1 uses the MCNP tallies directly in the calculation of the isotopic inventories. ► Validation of BUCAL1 was done by code-to-code comparison using the VVER-1000 LEU Benchmark Assembly. ► Differences from the benchmark values were found to be ±600 pcm for k∞ and ±6% for the isotopic compositions. ► The effect on reactivity due to the burnup of Gd isotopes is well reproduced by BUCAL1. - Abstract: This study aims to test the suitability and accuracy of a new home-made Monte Carlo burnup code, called BUCAL1, by investigating and predicting the neutronic behavior of a “VVER-1000 LEU Assembly Computational Benchmark” at lattice level. BUCAL1 uses MCNP tally information directly in the computation; this approach allows performing a straightforward and accurate calculation without having to use the calculated group fluxes to perform transmutation analysis in a separate code. The ENDF/B-VII evaluated nuclear data library was used in these calculations. Processing of the data library is performed using recent updates of the NJOY99 system. Code-to-code comparisons with the reported OECD/NEA results are presented and analyzed.

  20. Effect of orthographic processes on letter-identity and letter-position encoding in dyslexic children

    Directory of Open Access Journals (Sweden)

    Caroline Reilhac

    2012-05-01

    Full Text Available The ability to identify letters and encode their position is a crucial step of the word recognition process. However, despite their word identification problem, the ability of dyslexic children to encode letter identity and letter position within strings was not systematically investigated. This study aimed at filling this gap and further explored how letter-identity and letter-position encoding is modulated by letter context in developmental dyslexia. For this purpose, a letter-string comparison task was administered to French dyslexic children and two chronological-age (CA) and reading-age (RA) matched control groups. Children had to judge whether two successively and briefly presented 4-letter strings were identical or different. Letter position and letter identity were manipulated through the transposition (e.g., RTGM vs. RMGT) or substitution of two letters (e.g., TSHF vs. TGHD). Non-words, pseudo-words and words were used as stimuli to investigate sub-lexical and lexical effects on letter encoding. Dyslexic children showed both substitution and transposition detection problems relative to CA controls. A substitution advantage over transpositions was only found for words in dyslexic children, whereas it extended to pseudo-words in RA controls and to all types of items in CA controls. Letters were better identified in the dyslexic group when belonging to orthographically familiar strings. Letter-position encoding was very impaired in dyslexic children, who did not show any word context effect, in contrast to CA controls. Overall, the current findings point to a strong letter-identity and letter-position encoding disorder in developmental dyslexia.

  1. Abstracts and abstracting a genre and set of skills for the twenty-first century

    CERN Document Server

    Koltay, Tibor

    2010-01-01

    Despite their changing role, abstracts remain useful in the digital world. Highly beneficial to information professionals and researchers who work and publish in different fields, this book summarizes the most important and up-to-date theory of abstracting, as well as giving advice and examples for the practice of writing different kinds of abstracts. The book discusses the length, the functions and basic structure of abstracts, outlining a new approach to informative and indicative abstracts. The abstractors' personality, their linguistic and non-linguistic knowledge and skills are also discu

  2. Truthful Monadic Abstractions

    DEFF Research Database (Denmark)

    Brock-Nannestad, Taus; Schürmann, Carsten

    2012-01-01

    indefinitely, finding neither a proof nor a disproof of a given subgoal. In this paper we characterize a family of truth-preserving abstractions from intuitionistic first-order logic to the monadic fragment of classical first-order logic. Because they are truthful, these abstractions can be used to disprove...

  3. Abstracts of papers from the literature on anticipated transients without scram for light water reactors 1. 1975-1979

    International Nuclear Information System (INIS)

    Kinnersley, S.R.

    1981-05-01

    INIS ATOMINDEX abstracts relating to ATWS for light water reactors for the years 1975-1979 are presented under the subject headings of: general, licensing and standards, models and computer codes, frequency of occurrence of ATWS, transient calculations of results including probabilistic analysis, radiological consequences of ATWS, fuel behaviour, and studies of plant components. (U.K.)

  4. Deontological aspects of the nursing profession: understanding the code of ethics

    Directory of Open Access Journals (Sweden)

    Terezinha Nunes da Silva

    Full Text Available ABSTRACT Objective: to investigate nursing professionals' understanding concerning the Code of Ethics; to assess the relevance of the Code of Ethics of the nursing profession and its use in practice; to identify how problem-solving is performed when facing ethical dilemmas in professional practice. Method: exploratory descriptive study, conducted with 34 (thirty-four nursing professionals from a teaching hospital in João Pessoa, PB - Brazil. Results: four thematic categories emerged: conception of professional ethics in nursing practice; interpretations of ethics in the practice of care; use of the Code of Ethics in the professional practice; strategies for solving ethical issues in the professional practice. Final considerations: some of the nursing professionals comprehend the meaning coherently; others have a limited comprehension, based on jargon. Therefore, a deeper understanding of the text contained in this code is necessary so that it can be applied into practice, aiming to provide a quality care that is, above all, ethical and legal.

  5. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.

  6. The "Wow! signal" of the terrestrial genetic code

    Science.gov (United States)

    shCherbak, Vladimir I.; Makukov, Maxim A.

    2013-05-01

    It has been repeatedly proposed to expand the scope for SETI, and one of the suggested alternatives to radio is the biological media. Genomic DNA is already used on Earth to store non-biological information. Though smaller in capacity, the genetic code is stronger in noise immunity. The code is a flexible mapping between codons and amino acids, and this flexibility allows modifying the code artificially. But once fixed, the code might stay unchanged over cosmological timescales; in fact, it is the most durable construct known. Therefore it represents an exceptionally reliable storage for an intelligent signature, if that conforms to biological and thermodynamic requirements. As the actual scenario for the origin of terrestrial life is far from being settled, the proposal that it might have been seeded intentionally cannot be ruled out. A statistically strong intelligent-like "signal" in the genetic code is then a testable consequence of such a scenario. Here we show that the terrestrial code displays a thorough precision-type orderliness matching the criteria to be considered an informational signal. Simple arrangements of the code reveal an ensemble of arithmetical and ideographical patterns of the same symbolic language. Accurate and systematic, these underlying patterns appear as a product of precision logic and nontrivial computing rather than of stochastic processes (the null hypothesis that they are due to chance coupled with presumable evolutionary pathways is rejected with P-value < 10^-13). The patterns are profound to the extent that the code mapping itself is uniquely deduced from their algebraic representation. The signal displays readily recognizable hallmarks of artificiality, among which are the symbol of zero, the privileged decimal syntax and semantical symmetries. Besides, extraction of the signal involves logically straightforward but abstract operations, making the patterns essentially irreducible to any natural origin. Plausible ways of

  7. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself, among the upper- and lower-level codes of the selected one that were simultaneously displayed on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by this same program, and incorporation into another data-processing program was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adjustment for another program. Therefore, this program can be used for automation of routine work in the department of radiology.
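    The two-stage dictionary lookup described above (organ code first, then a pathology table selected by the organ code's first digit) can be sketched as follows. This is a minimal illustration, not the original FoxBASE implementation: the table entries and the disease-to-code pairings are invented, and only the 'organ.pathology' code format (as in '131.3661') comes from the abstract.

    ```python
    # Illustrative two-stage ACR-style lookup. All dictionary contents
    # are hypothetical; the real ACR dictionary files are far larger.
    ORGAN_CODES = {
        "lung": "131",    # hypothetical entry
        "liver": "761",   # hypothetical entry
    }

    # One pathology table per leading digit of the organ code, mirroring
    # the file-per-first-digit organisation described in the abstract.
    PATHOLOGY_TABLES = {
        "1": {"pneumonia": "3661"},   # hypothetical pairing
        "7": {"cirrhosis": "7941"},   # hypothetical pairing
    }

    def acr_code(organ_name: str, pathology_name: str) -> str:
        """Compose an 'organ.pathology' style ACR code."""
        organ = ORGAN_CODES[organ_name.lower()]
        table = PATHOLOGY_TABLES[organ[0]]   # table chosen by first digit
        return f"{organ}.{table[pathology_name.lower()]}"
    ```

    With the invented tables above, `acr_code("lung", "pneumonia")` composes a code of the same shape as the abstract's example.
    
    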

  8. MEMOPS: data modelling and automatic code generation.

    Science.gov (United States)

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D

    2010-03-25

    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.
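    The model-driven generation idea can be illustrated with a toy sketch. This is not the actual MEMOPS framework (which generates APIs from UML models in Python, C and Java); the spec format and class name below are invented. The point it shows is the one made above: a single declarative definition drives both the generated data structure and its validity checking, so no per-class access code is hand-written.

    ```python
    # Toy model-driven code generation: a declarative attribute spec
    # (name -> type) produces a class whose constructor type-checks
    # every attribute. Spec contents are hypothetical.
    SPEC = {"Peak": {"height": float, "position": float}}

    def make_class(name, attrs):
        """Generate a class with validity checking derived from the spec."""
        def __init__(self, **kwargs):
            for attr, typ in attrs.items():
                value = kwargs[attr]
                if not isinstance(value, typ):
                    raise TypeError(f"{name}.{attr} must be {typ.__name__}")
                setattr(self, attr, value)
        return type(name, (), {"__init__": __init__})

    Peak = make_class("Peak", SPEC["Peak"])
    p = Peak(height=1.5, position=104.2)   # validated on construction
    ```

    Changing the spec regenerates consistent access code everywhere it is used, which is the internal-consistency benefit the abstract emphasises.
    
    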

  9. Abstracts

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The Western Theories of War Ethics and Contemporary Controversies. Li Xiaodong, U Ruijing (4) [Abstract] In the field of international relations, war ethics is a concept with a distinctly western ideological color. Due to factors of history and reality, the in

  10. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in a temporal discrimination task, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, multiple-coding and single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5s (approximately the geometric mean of 2s and 6s), pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.
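    The 3.5-s test value is the usual choice in timing studies because the geometric mean, not the arithmetic mean, is the empirical bisection point between two trained durations. A quick check (plain Python, nothing specific to this paper):

    ```python
    from math import sqrt

    def geometric_mean(a: float, b: float) -> float:
        """Geometric mean of two durations."""
        return sqrt(a * b)

    # The 3.5-s test sample sits near the geometric mean of the
    # 2-s and 6-s training samples: sqrt(2 * 6) ~ 3.46 s.
    gm = geometric_mean(2.0, 6.0)
    ```
    
    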

  11. Software-defined network abstractions and configuration interfaces for building programmable quantum networks

    Energy Technology Data Exchange (ETDEWEB)

    Dasari, Venkat [U.S. Army Research Laboratory, Aberdeen Proving Ground, MD; Sadlier, Ronald J [ORNL; Geerhart, Mr. Billy [U.S. Army Research Laboratory, Aberdeen Proving Ground, MD; Snow, Nikolai [U.S. Army Research Laboratory, Aberdeen Proving Ground, MD; Williams, Brian P [ORNL; Humble, Travis S [ORNL

    2017-01-01

    Well-defined and stable quantum networks are essential to realize functional quantum applications. Quantum networks are complex and must use both quantum and classical channels to support quantum applications like QKD, teleportation, and superdense coding. In particular, the no-cloning theorem prevents the reliable copying of quantum signals such that the quantum and classical channels must be highly coordinated using robust and extensible methods. We develop new network abstractions and interfaces for building programmable quantum networks. Our approach leverages new OpenFlow data structures and table type patterns to build programmable quantum networks and to support quantum applications.
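    The coordination requirement described above (every quantum application needs a paired classical channel, and quantum signals cannot be copied or buffered) can be sketched as a flow-table data structure. All names and fields below are invented for illustration; this does not reproduce the OpenFlow data structures or table type patterns the abstract refers to.

    ```python
    # Sketch of a flow table that keeps a quantum channel and its
    # supporting classical channel coordinated per application.
    # Field names are hypothetical, not from the paper.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FlowRule:
        app: str             # e.g. "qkd", "teleportation", "superdense"
        classical_port: int  # carries basis choices / corrections
        quantum_port: int    # carries qubits (no-cloning: no copying)

    class QuantumFlowTable:
        def __init__(self):
            self._rules = {}

        def install(self, rule: FlowRule) -> None:
            self._rules[rule.app] = rule

        def lookup(self, app: str) -> FlowRule:
            return self._rules[app]

    table = QuantumFlowTable()
    table.install(FlowRule(app="qkd", classical_port=1, quantum_port=2))
    ```

    Keeping both ports in one rule makes the pairing explicit, which is the design point: the controller can never reroute the classical channel without also accounting for its quantum counterpart.
    
    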

  12. Automatic Structure-Based Code Generation from Coloured Petri Nets

    DEFF Research Database (Denmark)

    Kristensen, Lars Michael; Westergaard, Michael

    2010-01-01

    Automatic code generation based on Coloured Petri Net (CPN) models is challenging because CPNs allow for the construction of abstract models that intermix control flow and data processing, making translation into conventional programming constructs difficult. We introduce Process-Partitioned CPNs...... (PP-CPNs) which is a subclass of CPNs equipped with an explicit separation of process control flow, message passing, and access to shared and local data. We show how PP-CPNs caters for a four phase structure-based automatic code generation process directed by the control flow of processes....... The viability of our approach is demonstrated by applying it to automatically generate an Erlang implementation of the Dynamic MANET On-demand (DYMO) routing protocol specified by the Internet Engineering Task Force (IETF)....

  13. Clinical coding of prospectively identified paediatric adverse drug reactions--a retrospective review of patient records.

    Science.gov (United States)

    Bellis, Jennifer R; Kirkham, Jamie J; Nunn, Anthony J; Pirmohamed, Munir

    2014-12-17

    National Health Service (NHS) hospitals in the UK use a system of coding for patient episodes. The coding system used is the International Classification of Disease (ICD-10). There are ICD-10 codes that may be associated with adverse drug reactions (ADRs), and there is a possibility of using these codes for ADR surveillance. This study aimed to determine whether ADRs prospectively identified in children admitted to a paediatric hospital were coded appropriately using ICD-10. The electronic admission abstract for each patient with at least one ADR was reviewed, and a record was made of whether the ADR(s) had been coded using ICD-10. Of 241 ADRs, 76 (31.5%) were coded using at least one ICD-10 ADR code. Of the oncology ADRs, 70/115 (61%) were coded using an ICD-10 ADR code, compared with 6/126 (4.8%) non-oncology ADRs (difference in proportions 56%, 95% CI 46.2% to 65.8%). ADRs therefore cannot be identified reliably from ICD-10 codes as a single means of detection: data derived from administrative healthcare databases are not reliable for identifying ADRs by themselves, but may complement other methods of detection.
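    The reported difference in proportions (56%, 95% CI 46.2% to 65.8%) is consistent with a standard Wald interval for the difference of two independent proportions. The abstract does not state which method was used, so the following is only a plausibility check under that assumption:

    ```python
    # Wald 95% CI for a difference of two proportions, applied to the
    # oncology (70/115) vs non-oncology (6/126) coding rates above.
    from math import sqrt

    def diff_ci(x1, n1, x2, n2, z=1.96):
        p1, p2 = x1 / n1, x2 / n2
        diff = p1 - p2
        se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
        return diff, diff - z * se, diff + z * se

    diff, lower, upper = diff_ci(70, 115, 6, 126)
    # diff ~ 0.561; CI ~ (0.464, 0.658), close to the reported 46.2%-65.8%
    ```

    The small residual discrepancy from the published interval may reflect rounding or a continuity correction in the original analysis.
    
    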

  14. Code Cactus; Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient behavior; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flowrate in parallel channels, coupled or not by conduction across plates, is computed for imposed pressure drops or flowrates, variable or not with respect to time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code, and has as its complement FLID, a one-channel, two-dimensional code. (authors)

  15. Imaginal, semantic, and surface-level processing of concrete and abstract words: an electrophysiological investigation.

    Science.gov (United States)

    West, W C; Holcomb, P J

    2000-11-01

    Words representing concrete concepts are processed more quickly and efficiently than words representing abstract concepts. Concreteness effects have also been observed in studies using event-related brain potentials (ERPs). The aim of this study was to examine concrete and abstract words using both reaction time (RT) and ERP measurements to determine (1) at what point in the stream of cognitive processing concreteness effects emerge and (2) how different types of cognitive operations influence these concreteness effects. Three groups of subjects performed a sentence verification task in which the final word of each sentence was concrete or abstract. For each group the truthfulness judgment required either (1) image generation, (2) semantic decision, or (3) evaluation of surface characteristics. Concrete and abstract words produced similar RTs and ERPs in the surface task, suggesting that postlexical semantic processing is necessary to elicit concreteness effects. In both the semantic and imagery tasks, RTs were shorter for concrete than for abstract words. This difference was greatest in the imagery task. Also, in both of these tasks concrete words elicited more negative ERPs than abstract words between 300 and 550 msec (N400). This effect was widespread across the scalp and may reflect activation in a linguistic semantic system common to both concrete and abstract words. ERPs were also more negative for concrete than abstract words between 550 and 800 msec. This effect was more frontally distributed and was most evident in the imagery task. We propose that this later anterior effect represents a distinct ERP component (N700) that is sensitive to the use of mental imagery. The N700 may reflect the access of specific characteristics of the imaged item or activation in a working memory system specific to mental imagery. These results also support the extended dual-coding hypothesis that superior associative connections and the use of mental imagery both contribute

  16. Check Sample Abstracts.

    Science.gov (United States)

    Alter, David; Grenache, David G; Bosler, David S; Karcher, Raymond E; Nichols, James; Rajadhyaksha, Aparna; Camelo-Piragua, Sandra; Rauch, Carol; Huddleston, Brent J; Frank, Elizabeth L; Sluss, Patrick M; Lewandrowski, Kent; Eichhorn, John H; Hall, Janet E; Rahman, Saud S; McPherson, Richard A; Kiechle, Frederick L; Hammett-Stabler, Catherine; Pierce, Kristin A; Kloehn, Erica A; Thomas, Patricia A; Walts, Ann E; Madan, Rashna; Schlesinger, Kathie; Nawgiri, Ranjana; Bhutani, Manoop; Kanber, Yonca; Abati, Andrea; Atkins, Kristen A; Farrar, Robert; Gopez, Evelyn Valencerina; Jhala, Darshana; Griffin, Sonya; Jhala, Khushboo; Jhala, Nirag; Bentz, Joel S; Emerson, Lyska; Chadwick, Barbara E; Barroeta, Julieta E; Baloch, Zubair W; Collins, Brian T; Middleton, Owen L; Davis, Gregory G; Haden-Pinneri, Kathryn; Chu, Albert Y; Keylock, Joren B; Ramoso, Robert; Thoene, Cynthia A; Stewart, Donna; Pierce, Arand; Barry, Michelle; Aljinovic, Nika; Gardner, David L; Barry, Michelle; Shields, Lisa B E; Arnold, Jack; Stewart, Donna; Martin, Erica L; Rakow, Rex J; Paddock, Christopher; Zaki, Sherif R; Prahlow, Joseph A; Stewart, Donna; Shields, Lisa B E; Rolf, Cristin M; Falzon, Andrew L; Hudacki, Rachel; Mazzella, Fermina M; Bethel, Melissa; Zarrin-Khameh, Neda; Gresik, M Vicky; Gill, Ryan; Karlon, William; Etzell, Joan; Deftos, Michael; Karlon, William J; Etzell, Joan E; Wang, Endi; Lu, Chuanyi M; Manion, Elizabeth; Rosenthal, Nancy; Wang, Endi; Lu, Chuanyi M; Tang, Patrick; Petric, Martin; Schade, Andrew E; Hall, Geraldine S; Oethinger, Margret; Hall, Geraldine; Picton, Avis R; Hoang, Linda; Imperial, Miguel Ranoa; Kibsey, Pamela; Waites, Ken; Duffy, Lynn; Hall, Geraldine S; Salangsang, Jo-Anne M; Bravo, Lulette Tricia C; Oethinger, Margaret D; Veras, Emanuela; Silva, Elvia; Vicens, Jimena; Silva, Elvio; Keylock, Joren; Hempel, James; Rushing, Elizabeth; Posligua, Lorena E; Deavers, Michael T; Nash, Jason W; Basturk, Olca; Perle, Mary Ann; Greco, Alba; Lee, Peng; Maru, Dipen; 
Weydert, Jamie Allen; Stevens, Todd M; Brownlee, Noel A; Kemper, April E; Williams, H James; Oliverio, Brock J; Al-Agha, Osama M; Eskue, Kyle L; Newlands, Shawn D; Eltorky, Mahmoud A; Puri, Puja K; Royer, Michael C; Rush, Walter L; Tavora, Fabio; Galvin, Jeffrey R; Franks, Teri J; Carter, James Elliot; Kahn, Andrea Graciela; Lozada Muñoz, Luis R; Houghton, Dan; Land, Kevin J; Nester, Theresa; Gildea, Jacob; Lefkowitz, Jerry; Lacount, Rachel A; Thompson, Hannis W; Refaai, Majed A; Quillen, Karen; Lopez, Ana Ortega; Goldfinger, Dennis; Muram, Talia; Thompson, Hannis

    2009-02-01

    The following abstracts are compiled from Check Sample exercises published in 2008. These peer-reviewed case studies assist laboratory professionals with continuing medical education and are developed in the areas of clinical chemistry, cytopathology, forensic pathology, hematology, microbiology, surgical pathology, and transfusion medicine. Abstracts for all exercises published in the program will appear annually in AJCP.

  17. Development of Monte Carlo-based pebble bed reactor fuel management code

    International Nuclear Information System (INIS)

    Setiadipura, Topan; Obara, Toru

    2014-01-01

    Highlights: • A new Monte Carlo-based fuel management code for OTTO cycle pebble bed reactor was developed. • The double-heterogeneity was modeled using statistical method in MVP-BURN code. • The code can perform analysis of equilibrium and non-equilibrium phase. • Code-to-code comparisons for Once-Through-Then-Out case were investigated. • Ability of the code to accommodate the void cavity was confirmed. - Abstract: A fuel management code for pebble bed reactors (PBRs) based on the Monte Carlo method has been developed in this study. The code, named Monte Carlo burnup analysis code for PBR (MCPBR), enables a simulation of the Once-Through-Then-Out (OTTO) cycle of a PBR from the running-in phase to the equilibrium condition. In MCPBR, a burnup calculation based on a continuous-energy Monte Carlo code, MVP-BURN, is coupled with an additional utility code to be able to simulate the OTTO cycle of a PBR. MCPBR has several advantages in modeling PBRs, namely its Monte Carlo neutron transport modeling, its capability of explicitly modeling the double heterogeneity of the PBR core, and its ability to model different axial fuel speeds in the PBR core. Analysis at the equilibrium condition of the simplified PBR was used as the validation test of MCPBR. The calculation results of the code were compared with the results of diffusion-based fuel management PBR codes, namely the VSOP and PEBBED codes. Using the JENDL-4.0 nuclide library, MCPBR gave a 4.15% and 3.32% lower k-eff value compared to VSOP and PEBBED, respectively, while using JENDL-3.3, MCPBR gave a 2.22% and 3.11% higher k-eff value compared to VSOP and PEBBED, respectively. The ability of MCPBR to analyze neutron transport in the top void of the PBR core and its effects was also confirmed.

  18. Building a dynamic code to simulate new reactor concepts

    International Nuclear Information System (INIS)

    Catsaros, N.; Gaveau, B.; Jaekel, M.-T.; Maillard, J.; Maurel, G.; Savva, P.; Silva, J.; Varvayanni, M.

    2012-01-01

    Highlights: ► We develop a stochastic neutronic code based on an existing High Energy Physics code. ► The code simulates innovative reactor designs including Accelerator Driven Systems. ► Core materials evolution will be dynamically simulated, including fuel burnup. ► Continuous feedback between the main inter-related parameters will be established. ► A description of the current research development and achievements is also given. - Abstract: Innovative nuclear reactor designs have been proposed, such as the Accelerator Driven Systems (ADSs), the “candle” reactors, etc. These reactor designs introduce computational nuclear technology problems the solution of which necessitates a new, global and dynamic computational approach of the system. A continuous feedback procedure must be established between the main inter-related parameters of the system such as the chemical, physical and isotopic composition of the core, the neutron flux distribution and the temperature field. Furthermore, as far as ADSs are concerned, the ability of the computational tool to simulate the nuclear cascade created from the interaction of accelerated protons with the spallation target as well as the produced neutrons, is also required. The new Monte Carlo code ANET (Advanced Neutronics with Evolution and Thermal hydraulic feedback) is being developed based on the GEANT3 High Energy Physics code, aiming to progressively satisfy all the above requirements. A description of the capabilities and methodologies implemented in the present version of ANET is given here, together with some illustrative applications of the code.

  19. Does a code make a difference – assessing the English code of practice on international recruitment

    Directory of Open Access Journals (Sweden)

    Mensah Kwadwo

    2009-04-01

    Full Text Available Abstract Background This paper draws from research completed in 2007 to assess the effect of the Department of Health, England, Code of Practice for the international recruitment of health professionals. The Department of Health in England introduced a Code of Practice for international recruitment for National Health Service employers in 2001. The Code required National Health Service employers not to actively recruit from low-income countries, unless there was government-to-government agreement. The Code was updated in 2004. Methods The paper examines trends in inflow of health professionals to the United Kingdom from other countries, using professional registration data and data on applications for work permits. The paper also provides more detailed information from two country case studies in Ghana and Kenya. Results Available data show a considerable reduction in inflow of health professionals, from the peak years up to 2002 (for nurses and 2004 (for doctors. There are multiple causes for this decline, including declining demand in the United Kingdom. In Ghana and Kenya it was found that active recruitment was perceived to have reduced significantly from the United Kingdom, but it is not clear the extent to which the Code was influential in this, or whether other factors such as a lack of vacancies in the United Kingdom explains it. Conclusion Active international recruitment of health professionals was an explicit policy intervention by the Department of Health in England, as one key element in achieving rapid staffing growth, particularly in the period 2000 to 2005, but the level of international recruitment has dropped significantly since early 2006. Regulatory and education changes in the United Kingdom in recent years have also made international entry more difficult. The potential to assess the effect of the Code in England is constrained by the limitations in available databases. This is a crucial lesson for those considering a

  20. Abstract Datatypes in PVS

    Science.gov (United States)

    Owre, Sam; Shankar, Natarajan

    1997-01-01

    PVS (Prototype Verification System) is a general-purpose environment for developing specifications and proofs. This document deals primarily with the abstract datatype mechanism in PVS which generates theories containing axioms and definitions for a class of recursive datatypes. The concepts underlying the abstract datatype mechanism are illustrated using ordered binary trees as an example. Binary trees are described by a PVS abstract datatype that is parametric in its value type. The type of ordered binary trees is then presented as a subtype of binary trees where the ordering relation is also taken as a parameter. We define the operations of inserting an element into, and searching for an element in an ordered binary tree; the bulk of the report is devoted to PVS proofs of some useful properties of these operations. These proofs illustrate various approaches to proving properties of abstract datatype operations. They also describe the built-in capabilities of the PVS proof checker for simplifying abstract datatype expressions.
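    The insert and search operations that the report verifies can be sketched in ordinary code. The following is a Python analogue for concreteness, not the PVS specification itself: the real development is a parametric PVS abstract datatype whose ordering relation is a theory parameter, whereas this sketch fixes integer ordering.

    ```python
    # Ordered binary tree with insert/search, mirroring the operations
    # the PVS development proves correct (Python analogue, not PVS).
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        value: int
        left: Optional["Node"] = None
        right: Optional["Node"] = None

    def insert(t: Optional[Node], x: int) -> Node:
        """Insert x, preserving the ordering invariant."""
        if t is None:
            return Node(x)
        if x < t.value:
            t.left = insert(t.left, x)
        elif x > t.value:
            t.right = insert(t.right, x)
        return t            # duplicates are ignored

    def search(t: Optional[Node], x: int) -> bool:
        """Membership test guided by the ordering invariant."""
        while t is not None:
            if x == t.value:
                return True
            t = t.left if x < t.value else t.right
        return False
    ```

    A property such as "search finds exactly the inserted elements" is the kind of theorem the report establishes with the PVS proof checker, using the axioms generated by the datatype mechanism.
    
    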

  1. POLA PENGELOLAAN SANITASI DI PERKAMPUNGAN BANTARAN SUNGAI CODE, YOGYAKARTA (Pattern of Sanitation Management in Code Riverside Settlements, Yogyakarta

    Directory of Open Access Journals (Sweden)

    Atyanto Dharoko

    2005-11-01

    Full Text Available ABSTRACT The Code riverside is part of the central business district of Yogyakarta and is composed of densely populated kampungs. Community life in the kampungs has become integrated with the social and economic life of the urban community of Yogyakarta. The crucial problem faced by the community is the lack of infrastructure, especially sanitation facilities; this is closely related to the community's socio-economic constraints and to the steep topography as a physical constraint. As a result, sanitation waste has to be discharged into the Code River without prior treatment. The study concludes that a communal sanitation system is the most acceptable system for the community, given the socio-economic and topographical constraints. In the future, the communal sanitation system may serve as a basic technical consideration for developing sanitation systems in riverside settlements and for achieving high sustainability.

  2. International pressure vessels and piping codes and standards. Volume 2: Current perspectives; PVP-Volume 313-2

    International Nuclear Information System (INIS)

    Rao, K.R.; Asada, Yasuhide; Adams, T.M.

    1995-01-01

    The topics in this volume include: (1) Recent or imminent changes to Section 3 design sections; (2) Select perspectives of ASME Codes -- Section 3; (3) Select perspectives of Boiler and Pressure Vessel Codes -- an international outlook; (4) Select perspectives of Boiler and Pressure Vessel Codes -- ASME Code Sections 3, 8 and 11; (5) Codes and Standards perspectives for analysis; (6) Selected design perspectives on flow-accelerated corrosion and pressure vessel design and qualification; (7) Select Codes and Standards perspectives for design and operability; (8) Codes and Standards perspectives for operability; (9) What's new in the ASME Boiler and Pressure Vessel Code?; (10) A look at ongoing activities of ASME Sections 2 and 3; (11) A look at current activities of ASME Section 11; (12) A look at current activities of ASME Codes and Standards; (13) Simplified design methodology and design allowable stresses -- 1 and 2; and (14) Introduction to Power Boilers, Section 1 of the ASME Code -- Parts 1 and 2. Separate abstracts were prepared for most of the individual papers.

  3. Simulation of hydrogen deflagration experiment – Benchmark exercise with lumped-parameter codes

    Energy Technology Data Exchange (ETDEWEB)

    Kljenak, Ivo, E-mail: ivo.kljenak@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Kuznetsov, Mikhail, E-mail: mike.kuznetsov@kit.edu [Karlsruhe Institute of Technology, Kaiserstraße 12, 76131 Karlsruhe (Germany); Kostka, Pal, E-mail: kostka@nubiki.hu [NUBIKI Nuclear Safety Research Institute, Konkoly-Thege Miklós út 29-33, 1121 Budapest (Hungary); Kubišova, Lubica, E-mail: lubica.kubisova@ujd.gov.sk [Nuclear Regulatory Authority of the Slovak Republic, Bajkalská 27, 82007 Bratislava (Slovakia); Maltsev, Mikhail, E-mail: maltsev_MB@aep.ru [JSC Atomenergoproekt, 1, st. Podolskykh Kursantov, Moscow (Russian Federation); Manzini, Giovanni, E-mail: giovanni.manzini@rse-web.it [Ricerca sul Sistema Energetico, Via Rubattino 54, 20134 Milano (Italy); Povilaitis, Mantas, E-mail: mantas.p@mail.lei.lt [Lithuania Energy Institute, Breslaujos g.3, 44403 Kaunas (Lithuania)

    2015-03-15

    Highlights: • Blind and open simulations of hydrogen combustion experiment in large-scale containment-like facility with different lumped-parameter codes. • Simulation of axial as well as radial flame propagation. • Confirmation of adequacy of lumped-parameter codes for safety analyses of actual nuclear power plants. - Abstract: An experiment on hydrogen deflagration (Upward Flame Propagation Experiment – UFPE) was proposed by the Jozef Stefan Institute (Slovenia) and performed in the HYKA A2 facility at the Karlsruhe Institute of Technology (Germany). The experimental results were used to organize a benchmark exercise for lumped-parameter codes. Six organizations (JSI, AEP, LEI, NUBIKI, RSE and UJD SR) participated in the benchmark exercise, using altogether four different computer codes: ANGAR, ASTEC, COCOSYS and ECART. Both blind and open simulations were performed. In general, all the codes provided satisfactory results of the pressure increase, whereas the results of the temperature show a wider dispersal. Concerning the flame axial and radial velocities, the results may be considered satisfactory, given the inherent simplification of the lumped-parameter description compared to the local instantaneous description.

  4. Simulation of hydrogen deflagration experiment – Benchmark exercise with lumped-parameter codes

    International Nuclear Information System (INIS)

    Kljenak, Ivo; Kuznetsov, Mikhail; Kostka, Pal; Kubišova, Lubica; Maltsev, Mikhail; Manzini, Giovanni; Povilaitis, Mantas

    2015-01-01

    Highlights: • Blind and open simulations of hydrogen combustion experiment in large-scale containment-like facility with different lumped-parameter codes. • Simulation of axial as well as radial flame propagation. • Confirmation of adequacy of lumped-parameter codes for safety analyses of actual nuclear power plants. - Abstract: An experiment on hydrogen deflagration (Upward Flame Propagation Experiment – UFPE) was proposed by the Jozef Stefan Institute (Slovenia) and performed in the HYKA A2 facility at the Karlsruhe Institute of Technology (Germany). The experimental results were used to organize a benchmark exercise for lumped-parameter codes. Six organizations (JSI, AEP, LEI, NUBIKI, RSE and UJD SR) participated in the benchmark exercise, using altogether four different computer codes: ANGAR, ASTEC, COCOSYS and ECART. Both blind and open simulations were performed. In general, all the codes provided satisfactory results of the pressure increase, whereas the results of the temperature show a wider dispersal. Concerning the flame axial and radial velocities, the results may be considered satisfactory, given the inherent simplification of the lumped-parameter description compared to the local instantaneous description.

  5. Introduction into scientific work methods-a necessity when performance-based codes are introduced

    DEFF Research Database (Denmark)

    Dederichs, Anne; Sørensen, Lars Schiøtt

    The introduction of performance-based codes in Denmark in 2004 requires new competences from people working with different aspects of fire safety in the industry and the public sector. This abstract presents an attempt at reducing problems with handling and analysing the mathematical methods...... and CFD models when applying performance-based codes. This is done within the educational program "Master of Fire Safety Engineering" at the department of Civil Engineering at the Technical University of Denmark. It was found that the students had general problems with academic methods. Therefore, a new...

  6. Development of three dimensional transient analysis code STTA for SCWR core

    International Nuclear Information System (INIS)

    Wang, Lianjie; Zhao, Wenbo; Chen, Bingde; Yao, Dong; Yang, Ping

    2015-01-01

    Highlights: • A coupled three dimensional neutronics/thermal-hydraulics code STTA is developed for SCWR core transient analysis. • The Dynamic Link Libraries method is adopted for coupling computation for SCWR multi-flow core transient analysis. • The NEACRP-L-335 PWR benchmark problems are studied to verify STTA. • The SCWR rod ejection problems are studied to verify STTA. • STTA meets what is expected from a code for preliminary 3-D transient analysis of an SCWR core. - Abstract: A coupled three dimensional neutronics/thermal-hydraulics code STTA (SCWR Three dimensional Transient Analysis code) is developed for SCWR core transient analysis. The Nodal Green’s Function Method based on the second boundary condition (NGFMN-K) is used for solving the transient neutron diffusion equation. The SCWR sub-channel code ATHAS is integrated into NGFMN-K through the serial integration coupling approach. The NEACRP-L-335 PWR benchmark problem and SCWR rod ejection problems are studied to verify STTA. Numerical results show that the PWR solution of STTA agrees well with reference solutions and that the SCWR solution is reasonable. The coupled code can thus be applied to core transient and accident analysis with a 3-D core model under both subcritical-pressure and supercritical-pressure operation.
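    The serial integration coupling approach mentioned in the abstract can be illustrated generically: within a time step, the neutronics and thermal-hydraulics solvers are called in sequence, exchanging power and fuel temperature until the feedback converges. Everything below (the stand-in solvers, coefficients and units) is invented for illustration; only the serial (Picard) iteration pattern reflects the description.

```python
# Generic sketch (not STTA's actual implementation) of a serial
# neutronics/thermal-hydraulics coupling step: the two solvers are called
# in sequence, exchanging power and fuel temperature until convergence.

ALPHA_T = -2e-5   # hypothetical fuel-temperature reactivity coefficient [1/K]

def solve_neutronics(t_fuel):
    """Stand-in neutronics solve: power responds to temperature feedback."""
    rho = ALPHA_T * (t_fuel - 900.0)      # reactivity relative to a 900 K reference
    return 1000.0 * (1.0 + 50.0 * rho)    # power [MW], linearized response

def solve_thermal(power):
    """Stand-in thermal-hydraulics solve: fuel temperature follows power."""
    return 600.0 + 0.3 * power            # fuel temperature [K]

def coupled_step(t_fuel, tol=1e-8, max_iter=100):
    """Serial (Picard) iteration between the two solvers until convergence."""
    for _ in range(max_iter):
        power = solve_neutronics(t_fuel)
        t_new = solve_thermal(power)
        if abs(t_new - t_fuel) < tol:
            return power, t_new
        t_fuel = t_new
    raise RuntimeError("coupling iteration did not converge")

# Starting away from the fixed point, the feedback loop contracts to it.
power, t_fuel = coupled_step(t_fuel=950.0)
```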

  7. Evaluation Codes from an Affine Variety Code Perspective

    DEFF Research Database (Denmark)

    Geil, Hans Olav

    2008-01-01

    Evaluation codes (also called order domain codes) are traditionally introduced as generalized one-point geometric Goppa codes. In the present paper we will give a new point of view on evaluation codes by introducing them instead as particularly nice examples of affine variety codes. Our study...... includes a reformulation of the usual methods to estimate the minimum distances of evaluation codes into the setting of affine variety codes. Finally we describe the connection to the theory of one-point geometric Goppa codes. Contents: 4.1 Introduction...... 4.9 Codes from order domains...... 4.10 One-point geometric Goppa codes...... 4.11 Bibliographical Notes...... References...
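    A toy instance of an evaluation code, added here for illustration (the field size and parameters are not from the paper): a Reed-Solomon code is obtained by evaluating the polynomials of degree < k at the points of the affine line, the simplest affine variety.

```python
# Toy illustration (not from the paper): a Reed-Solomon code seen as an
# evaluation code. Codewords are evaluations of polynomials of degree < k
# at the n points of the affine line over GF(p). Any nonzero polynomial of
# degree < k has at most k - 1 roots, giving minimum distance d >= n - k + 1.

P = 7  # small prime, so the field is GF(7); chosen only for illustration

def evaluate(poly, x):
    """Evaluate a polynomial (coefficients, lowest degree first) at x over GF(P)."""
    return sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P

def rs_encode(msg, points):
    """Encode k message symbols (polynomial coefficients) by evaluation."""
    return [evaluate(msg, x) for x in points]

points = list(range(P))   # the whole affine line over GF(7), so n = 7
msg = [3, 1, 4]           # k = 3, i.e. the polynomial 3 + x + 4x^2
codeword = rs_encode(msg, points)

weight = sum(1 for c in codeword if c != 0)
assert weight >= len(points) - (len(msg) - 1)   # d >= n - k + 1 = 5
```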

  8. SCANAIR: A transient fuel performance code

    International Nuclear Information System (INIS)

    Moal, Alain; Georgenthum, Vincent; Marchand, Olivier

    2014-01-01

    Highlights: • Since the early 1990s, the code SCANAIR has been developed at IRSN. • The software focuses on studying fast transients such as RIA in light water reactors. • The fuel rod modelling is based on a 1.5D approach. • Thermal and thermal-hydraulics, mechanical and gas behaviour resolutions are coupled. • The code is used for safety assessment and integral tests analysis. - Abstract: Since the early 1990s, the French “Institut de Radioprotection et de Sûreté Nucléaire” (IRSN) has developed the SCANAIR computer code with a view to analysing pressurised water reactor (PWR) safety. This software specifically focuses on studying fast transients such as reactivity-initiated accidents (RIA) caused by possible ejection of control rods. The code aims at improving the global understanding of the physical mechanisms governing the thermal-mechanical behaviour of a single rod. It is currently used to analyse integral tests performed in the CABRI and NSRR experimental reactors. The resulting validated code is used to carry out studies required to evaluate margins in relation to criteria for different types of fuel rods used in nuclear power plants. Because phenomena occurring during fast power transients are complex, the simulation in SCANAIR is based on a close coupling between several modules aimed at modelling thermal, thermal-hydraulics, mechanical and gas behaviour. During the first stage of fast power transients, clad deformation is mainly governed by the pellet–clad mechanical interaction (PCMI). At the later stage, heat transfers from pellet to clad bring the cladding material to such high temperatures that the boiling crisis might occur. The significant over-pressurisation of the rod and the maintenance of the cladding material at elevated temperatures over a fairly long period can lead to ballooning and possible clad failure. A brief introduction describes the context, the historical background and recalls the main phenomena involved under

  9. SCANAIR: A transient fuel performance code

    Energy Technology Data Exchange (ETDEWEB)

    Moal, Alain, E-mail: alain.moal@irsn.fr; Georgenthum, Vincent; Marchand, Olivier

    2014-12-15

    Highlights: • Since the early 1990s, the code SCANAIR has been developed at IRSN. • The software focuses on studying fast transients such as RIA in light water reactors. • The fuel rod modelling is based on a 1.5D approach. • Thermal and thermal-hydraulics, mechanical and gas behaviour resolutions are coupled. • The code is used for safety assessment and integral tests analysis. - Abstract: Since the early 1990s, the French “Institut de Radioprotection et de Sûreté Nucléaire” (IRSN) has developed the SCANAIR computer code with a view to analysing pressurised water reactor (PWR) safety. This software specifically focuses on studying fast transients such as reactivity-initiated accidents (RIA) caused by possible ejection of control rods. The code aims at improving the global understanding of the physical mechanisms governing the thermal-mechanical behaviour of a single rod. It is currently used to analyse integral tests performed in the CABRI and NSRR experimental reactors. The resulting validated code is used to carry out studies required to evaluate margins in relation to criteria for different types of fuel rods used in nuclear power plants. Because phenomena occurring during fast power transients are complex, the simulation in SCANAIR is based on a close coupling between several modules aimed at modelling thermal, thermal-hydraulics, mechanical and gas behaviour. During the first stage of fast power transients, clad deformation is mainly governed by the pellet–clad mechanical interaction (PCMI). At the later stage, heat transfers from pellet to clad bring the cladding material to such high temperatures that the boiling crisis might occur. The significant over-pressurisation of the rod and the maintenance of the cladding material at elevated temperatures over a fairly long period can lead to ballooning and possible clad failure. A brief introduction describes the context, the historical background and recalls the main phenomena involved under

  10. An Optimal Linear Coding for Index Coding Problem

    OpenAIRE

    Pezeshkpour, Pouya

    2015-01-01

    An optimal linear coding solution for the index coding problem is established. Instead of the network coding approach, which focuses on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can easily be utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding ...
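    The setting is easiest to see in the smallest nontrivial instance, the classic two-receiver XOR example. The sketch below is a generic illustration added here, not taken from the paper.

```python
# Toy instance of index coding (not from the paper): a server holds messages
# m1 and m2; receiver 1 wants m1 and already knows m2, receiver 2 wants m2
# and knows m1. One broadcast symbol, the XOR m1 ^ m2, satisfies both --
# a linear index code of length 1 instead of the 2 transmissions that
# naive unicasting would need.

def broadcast(m1: int, m2: int) -> int:
    """The single coded transmission: bitwise XOR of the two messages."""
    return m1 ^ m2

def decode(coded: int, side_information: int) -> int:
    """Each receiver XORs out its side information to recover its message."""
    return coded ^ side_information

m1, m2 = 0b1011, 0b0110
coded = broadcast(m1, m2)
assert decode(coded, m2) == m1   # receiver 1 uses m2 as side information
assert decode(coded, m1) == m2   # receiver 2 uses m1
```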

  11. Construction and Iterative Decoding of LDPC Codes Over Rings for Phase-Noisy Channels

    Directory of Open Access Journals (Sweden)

    Karuppasami Sridhar

    2008-01-01

    Abstract: This paper presents the construction and iterative decoding of low-density parity-check (LDPC) codes for channels affected by phase noise. The LDPC code is based on integer rings and designed to converge under phase-noisy channels. We assume that phase variations are small over short blocks of adjacent symbols. A part of the constructed code is inherently built with this knowledge and is hence able to withstand a phase rotation of 2π/M radians, where "M" is the number of phase symmetries in the signal set, that may occur at different observation intervals. Another part of the code estimates the phase ambiguity present in every observation interval. The code makes use of simple blind or turbo phase estimators to provide phase estimates over every observation interval. We propose an iterative decoding schedule to apply the sum-product algorithm (SPA) on the factor graph of the code for its convergence. To illustrate the new method, we present the performance results of an LDPC code constructed over an integer ring with quadrature phase shift keying (QPSK) modulated signals transmitted over a static channel, but affected by phase noise, which is modeled by the Wiener (random-walk) process. The results show that the code can withstand phase noise of standard deviation per symbol with small loss.
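    The Wiener (random-walk) phase-noise model named at the end of the abstract is straightforward to simulate. In this sketch the standard deviation and block length are arbitrary illustration choices, not the paper's values.

```python
# Minimal sketch of the channel model named in the abstract: QPSK symbols
# whose carrier phase performs a Wiener (random-walk) process,
# theta[k] = theta[k-1] + w[k] with w[k] ~ N(0, sigma^2). The sigma value
# and block length here are illustration choices only.
import cmath
import math
import random

def qpsk_symbol(bits):
    """Map a pair of bits to a Gray-coded QPSK point on the unit circle."""
    mapping = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}
    return mapping[bits] / math.sqrt(2)

def apply_wiener_phase_noise(symbols, sigma, rng):
    """Rotate each symbol by an accumulating Gaussian phase increment."""
    theta = 0.0
    out = []
    for s in symbols:
        theta += rng.gauss(0.0, sigma)
        out.append(s * cmath.exp(1j * theta))
    return out

rng = random.Random(0)
tx = [qpsk_symbol((0, 0))] * 100           # 100 identical pilot symbols
rx = apply_wiener_phase_noise(tx, sigma=math.radians(2), rng=rng)
# Pure phase noise preserves the magnitude; only the angle drifts.
assert all(abs(abs(r) - 1.0) < 1e-9 for r in rx)
```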

  12. Web interface for plasma analysis codes

    Energy Technology Data Exchange (ETDEWEB)

    Emoto, M. [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan)], E-mail: emo@nifs.ac.jp; Murakami, S. [Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501 (Japan); Yoshida, M.; Funaba, H.; Nagayama, Y. [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan)

    2008-04-15

    There are many analysis codes that analyze various aspects of plasma physics. However, most of them are FORTRAN programs that are written to be run on supercomputers. On the other hand, many scientists use GUI (graphical user interface)-based operating systems. For those who are not familiar with supercomputers, it is a difficult task to run analysis codes on them, and they often hesitate to use these programs to substantiate their ideas. Furthermore, these analysis codes are written for personal use, and the programmers do not expect these programs to be run by other users. In order to make these programs widely usable, the authors developed user-friendly interfaces using a Web interface. Since the Web browser is one of the most common applications, it is useful for both the users and the developers. In order to realize an interactive Web interface, the AJAX technique is widely used, and the authors also adopted AJAX. To build such an AJAX-based Web system, Ruby on Rails plays an important role in this system. Since this application framework, which is written in Ruby, abstracts the Web interfaces necessary to implement AJAX and database functions, it enables the programmers to efficiently develop the Web-based application. In this paper, the authors introduce the system and demonstrate the usefulness of this approach.

  13. Web interface for plasma analysis codes

    International Nuclear Information System (INIS)

    Emoto, M.; Murakami, S.; Yoshida, M.; Funaba, H.; Nagayama, Y.

    2008-01-01

    There are many analysis codes that analyze various aspects of plasma physics. However, most of them are FORTRAN programs that are written to be run on supercomputers. On the other hand, many scientists use GUI (graphical user interface)-based operating systems. For those who are not familiar with supercomputers, it is a difficult task to run analysis codes on them, and they often hesitate to use these programs to substantiate their ideas. Furthermore, these analysis codes are written for personal use, and the programmers do not expect these programs to be run by other users. In order to make these programs widely usable, the authors developed user-friendly interfaces using a Web interface. Since the Web browser is one of the most common applications, it is useful for both the users and the developers. In order to realize an interactive Web interface, the AJAX technique is widely used, and the authors also adopted AJAX. To build such an AJAX-based Web system, Ruby on Rails plays an important role in this system. Since this application framework, which is written in Ruby, abstracts the Web interfaces necessary to implement AJAX and database functions, it enables the programmers to efficiently develop the Web-based application. In this paper, the authors introduce the system and demonstrate the usefulness of this approach.

  14. Development and application of a system analysis code for liquid fueled molten salt reactors based on RELAP5 code

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Chengbin [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Cheng, Maosong, E-mail: mscheng@sinap.ac.cn [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 (China); Liu, Guimin [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 (China)

    2016-08-15

    Highlights: • New point kinetics and thermo-hydraulics models as well as a numerical method are added into the RELAP5 code to make it suitable for liquid fueled molten salt reactors. • The extended RELAP5 code is verified against the experimental benchmarks of the MSRE. • Different transient scenarios of the MSBR are simulated to evaluate performance during the transients. - Abstract: The molten salt reactor (MSR) is one of the six advanced reactor concepts declared by the Generation IV International Forum (GIF), which can be characterized by attractive attributes such as inherent safety, economical efficiency, natural resource protection, sustainable development and nuclear non-proliferation. It is important to perform system safety analysis for MSR nuclear power plants. In this paper, in order to develop a system analysis code suitable for liquid fueled molten salt reactors, the point kinetics and thermo-hydraulic models as well as the numerical method in the thermal–hydraulic transient code Reactor Excursion and Leak Analysis Program (RELAP5), developed at the Idaho National Engineering Laboratory (INEL) for the U.S. Nuclear Regulatory Commission (NRC), are extended and verified against Molten Salt Reactor Experiment (MSRE) experimental benchmarks. Then, four transient scenarios including the load demand change, the primary flow transient, the secondary flow transient and the reactivity transient of the Molten Salt Breeder Reactor (MSBR) are modeled and simulated so as to evaluate the performance of the reactor during the anticipated transient events using the extended RELAP5 code. The results indicate that the extended RELAP5 code is effective and well suited to liquid fueled molten salt reactors, and that the MSBR has strong inherent safety characteristics because of its large negative reactivity coefficient. In the future, the extended RELAP5 code will be used to perform transient safety analysis for a liquid fueled thorium molten salt reactor named TMSR-LF developed by the Center

  15. Development and application of a system analysis code for liquid fueled molten salt reactors based on RELAP5 code

    International Nuclear Information System (INIS)

    Shi, Chengbin; Cheng, Maosong; Liu, Guimin

    2016-01-01

    Highlights: • New point kinetics and thermo-hydraulics models as well as a numerical method are added into the RELAP5 code to make it suitable for liquid fueled molten salt reactors. • The extended RELAP5 code is verified against the experimental benchmarks of the MSRE. • Different transient scenarios of the MSBR are simulated to evaluate performance during the transients. - Abstract: The molten salt reactor (MSR) is one of the six advanced reactor concepts declared by the Generation IV International Forum (GIF), which can be characterized by attractive attributes such as inherent safety, economical efficiency, natural resource protection, sustainable development and nuclear non-proliferation. It is important to perform system safety analysis for MSR nuclear power plants. In this paper, in order to develop a system analysis code suitable for liquid fueled molten salt reactors, the point kinetics and thermo-hydraulic models as well as the numerical method in the thermal–hydraulic transient code Reactor Excursion and Leak Analysis Program (RELAP5), developed at the Idaho National Engineering Laboratory (INEL) for the U.S. Nuclear Regulatory Commission (NRC), are extended and verified against Molten Salt Reactor Experiment (MSRE) experimental benchmarks. Then, four transient scenarios including the load demand change, the primary flow transient, the secondary flow transient and the reactivity transient of the Molten Salt Breeder Reactor (MSBR) are modeled and simulated so as to evaluate the performance of the reactor during the anticipated transient events using the extended RELAP5 code. The results indicate that the extended RELAP5 code is effective and well suited to liquid fueled molten salt reactors, and that the MSBR has strong inherent safety characteristics because of its large negative reactivity coefficient. In the future, the extended RELAP5 code will be used to perform transient safety analysis for a liquid fueled thorium molten salt reactor named TMSR-LF developed by the Center
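    The point kinetics model named in the highlights can be illustrated in its simplest form: one delayed-neutron group with textbook constants. This is not the extended RELAP5 implementation, which uses more groups and thermal feedback.

```python
# Illustrative sketch only: one-delayed-group point kinetics, the class of
# lumped neutronics model the abstract says was added into RELAP5. Constants
# are typical textbook values, not MSRE/MSBR data.
#   dn/dt = ((rho - beta) / Lambda) * n + lam * c
#   dc/dt = (beta / Lambda) * n - lam * c

BETA = 0.0065       # delayed neutron fraction
LAMBDA_GEN = 1e-4   # neutron generation time [s]
LAM = 0.08          # precursor decay constant [1/s]

def step(n, c, rho, dt):
    """One explicit Euler step of the point kinetics equations."""
    dn = ((rho - BETA) / LAMBDA_GEN) * n + LAM * c
    dc = (BETA / LAMBDA_GEN) * n - LAM * c
    return n + dt * dn, c + dt * dc

# Equilibrium at rho = 0: dc/dt = 0 gives c = beta * n / (Lambda * lam).
n, c = 1.0, BETA / (LAMBDA_GEN * LAM)
for _ in range(10000):
    n, c = step(n, c, rho=0.0, dt=1e-5)   # power stays constant at criticality

# A small positive reactivity insertion (10 cents here) makes power rise.
n2, c2 = n, c
for _ in range(10000):
    n2, c2 = step(n2, c2, rho=0.1 * BETA, dt=1e-5)
```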

  16. An XML Approach of Coding a Morphological Database for Arabic Language

    Directory of Open Access Journals (Sweden)

    Mourad Gridach

    2011-01-01

    We present an XML approach for the production of an Arabic morphological database that will be used in morphological analysis for Modern Standard Arabic (MSA). Optimizing the production, maintenance, and extension of a morphological database is one of the crucial aspects impacting natural language processing (NLP). For Arabic, producing a morphological database is not an easy task, because the language has particularities such as the phenomenon of agglutination and a high degree of morphological ambiguity. The method presented can be exploited by NLP applications such as syntactic analysis, semantic analysis, information retrieval, and orthographic correction.
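    What one XML-coded morphological entry might look like can be sketched with the standard-library ElementTree API. The element and attribute names below are hypothetical, not the paper's actual schema.

```python
# Hypothetical illustration (the tag and attribute names are invented, not
# the paper's schema): coding one Arabic morphological database entry as XML
# with the standard-library ElementTree API.
import xml.etree.ElementTree as ET

entry = ET.Element("entry", lemma="كتب", pos="verb")
ET.SubElement(entry, "root").text = "ك-ت-ب"
ET.SubElement(entry, "pattern").text = "فعل"
form = ET.SubElement(entry, "form", tense="perfective", person="3", gender="m")
form.text = "كتب"

# Serialize, then parse back, as an NLP tool consuming the database would.
xml_string = ET.tostring(entry, encoding="unicode")
parsed = ET.fromstring(xml_string)
assert parsed.get("pos") == "verb"
assert parsed.find("root").text == "ك-ت-ب"
```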

  17. The abstract geometry modeling language (AgML): experience and road map toward eRHIC

    International Nuclear Information System (INIS)

    Webb, Jason; Lauret, Jerome; Perevoztchikov, Victor

    2014-01-01

    The STAR experiment has adopted an Abstract Geometry Modeling Language (AgML) as the primary description of our geometry model. AgML establishes a level of abstraction, decoupling the definition of the detector from the software libraries used to create the concrete geometry model. Thus, AgML allows us to support both our legacy GEANT 3 simulation application and our ROOT/TGeo based reconstruction software from a single source, which is demonstrably self-consistent. While AgML was developed primarily as a tool to migrate away from our legacy FORTRAN-era geometry codes, it also provides a rich syntax geared towards the rapid development of detector models. AgML has been successfully employed by users to quickly develop and integrate the descriptions of several new detectors in the RHIC/STAR experiment including the Forward GEM Tracker (FGT) and Heavy Flavor Tracker (HFT) upgrades installed in STAR for the 2012 and 2013 runs. AgML has furthermore been heavily utilized to study future upgrades to the STAR detector as it prepares for the eRHIC era. With its track record of practical use in a live experiment in mind, we present the status, lessons learned and future of the AgML language as well as our experience in bringing the code into our production and development environments. We will discuss the path toward eRHIC and pushing the current model to accommodate for detector misalignment and high-precision physics.

  18. ORTHOGRAPHIC PREVENTION IN PRIMARY SCHOOL: LACKS AND PERSPECTIVES / LA PREVENCIÓN ORTOGRÁFICA EN LA EDUCACIÓN PRIMARIA: CARENCIAS Y PERSPECTIVAS

    Directory of Open Access Journals (Sweden)

    Francisca Rosa Cedeño Gamboa

    2013-12-01

    In the article, a variant is presented for approaching orthographic prevention as a necessity in developing abilities. The proposal favours the rational use of written language as it becomes an instrument of mental analysis, a support and a guide for thought. Orthographic prevention makes it possible to externalize and delimit graphic and auditory representations correctly, after a meticulous selection of meanings.

  19. POURQUOI L’ORTHOGRAPHE FRANÇAISE MET-ELLE À RUDE ÉPREUVE LES APPRENANTS ALGÉRIENS ? / WHY DOES FRENCH ORTOGRAPHY RAISE SO MANY DIFFICULTIES TO ALGERIAN PUPILS? / DE CE ORTOGRAFIA FRANCEZĂ PUNE LA GREA ÎNCERCARE ELEVII ALGERIENI ?

    Directory of Open Access Journals (Sweden)

    Rima Redouane

    2016-11-01

    Educators, didactics specialists and linguists agree that Algerian learners experience great difficulties with French orthography, which is why we set out to identify the sources of these difficulties. Our ambition was to improve these learners' orthographic competences and, thereby, the quality of their writing.

  20. Abstraction and art.

    Science.gov (United States)

    Gortais, Bernard

    2003-07-29

    In a given social context, artistic creation comprises a set of processes, which relate to the activity of the artist and the activity of the spectator. Through these processes we see and understand that the world is vaster than it is said to be. Artistic processes are mediated experiences that open up the world. A successful work of art expresses a reality beyond actual reality: it suggests an unknown world using the means and the signs of the known world. Artistic practices incorporate the means of creation developed by science and technology and change forms as they change. Artists and the public follow different processes of abstraction at different levels, in the definition of the means of creation, of representation and of perception of a work of art. This paper examines how the processes of abstraction are used within the framework of the visual arts and abstract painting, which appeared during a period of growing importance for the processes of abstraction in science and technology, at the beginning of the twentieth century. The development of digital platforms and new man-machine interfaces allow multimedia creations. This is performed under the constraint of phases of multidisciplinary conceptualization using generic representation languages, which tend to abolish traditional frontiers between the arts: visual arts, drama, dance and music.

  1. Completeness of Lyapunov Abstraction

    DEFF Research Database (Denmark)

    Wisniewski, Rafal; Sloth, Christoffer

    2013-01-01

    This paper addresses the generation of complete abstractions of polynomial dynamical systems by timed automata. For the proposed abstraction, the state space is divided into cells by sublevel sets of functions. We identify a relation between these functions and their directional derivatives along the vector field, which allows the generation of a complete abstraction. To compute the functions that define the subdivision of the state space in an algorithm, we formulate a sum of squares optimization problem. This optimization problem finds the best subdivisioning functions, with respect to the ability...

  2. Metaphor: Bridging embodiment to abstraction.

    Science.gov (United States)

    Jamrozik, Anja; McQuire, Marguerite; Cardillo, Eileen R; Chatterjee, Anjan

    2016-08-01

    Embodied cognition accounts posit that concepts are grounded in our sensory and motor systems. An important challenge for these accounts is explaining how abstract concepts, which do not directly call upon sensory or motor information, can be informed by experience. We propose that metaphor is one important vehicle guiding the development and use of abstract concepts. Metaphors allow us to draw on concrete, familiar domains to acquire and reason about abstract concepts. Additionally, repeated metaphoric use drawing on particular aspects of concrete experience can result in the development of new abstract representations. These abstractions, which are derived from embodied experience but lack much of the sensorimotor information associated with it, can then be flexibly applied to understand new situations.

  3. Technical abstracts: Mechanical engineering, 1990

    International Nuclear Information System (INIS)

    Broesius, J.Y.

    1991-01-01

    This document is a compilation of the published, unclassified abstracts produced by mechanical engineers at Lawrence Livermore National Laboratory (LLNL) during the calendar year 1990. Many abstracts summarize work completed and published in report form. These are UCRL-JC series documents, which include the full text of articles to be published in journals and of papers to be presented at meetings, and UCID reports, which are informal documents. Not all UCIDs contain abstracts: short summaries were generated when abstracts were not included. Technical Abstracts also provides descriptions of those documents assigned to the UCRL-MI (miscellaneous) category. These are generally viewgraphs or photographs presented at meetings. An author index is provided at the back of this volume for cross referencing

  4. Impact of the Revised Malaysian Code on Corporate Governance on Audit Committee Attributes and Firm Performance

    OpenAIRE

    KALLAMU, Basiru Salisu

    2016-01-01

    Abstract. Using a sample of 37 finance companies listed under the finance segment of Bursa Malaysia, we examined the impact of the revision to Malaysian code on corporate governance on audit committee attributes and firm performance. Our result suggests that audit committee attributes significantly improved after the Code was revised. In addition, the coefficient for audit committee and risk committee interlock has a significant negative relationship with Tobin’s Q in the period before the re...

  5. The triconnected abstraction of process models

    OpenAIRE

    Polyvyanyy, Artem; Smirnov, Sergey; Weske, Mathias

    2009-01-01

    Contents: Artem Polyvyanyy, Sergey Smirnov, and Mathias Weske The Triconnected Abstraction of Process Models 1 Introduction 2 Business Process Model Abstraction 3 Preliminaries 4 Triconnected Decomposition 4.1 Basic Approach for Process Component Discovery 4.2 SPQR-Tree Decomposition 4.3 SPQR-Tree Fragments in the Context of Process Models 5 Triconnected Abstraction 5.1 Abstraction Rules 5.2 Abstraction Algorithm 6 Related Work and Conclusions

  6. A Modal-Logic Based Graph Abstraction

    NARCIS (Netherlands)

    Bauer, J.; Boneva, I.B.; Kurban, M.E.; Rensink, Arend; Ehrig, H; Heckel, R.; Rozenberg, G.; Taentzer, G.

    2008-01-01

    Infinite or very large state spaces often prohibit the successful verification of graph transformation systems. Abstract graph transformation is an approach that tackles this problem by abstracting graphs to abstract graphs of bounded size and by lifting the application of productions to abstract graphs.

  7. Comparison of elevated temperature design codes of ASME Subsection NH and RCC-MRx

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyeong-Yeon, E-mail: hylee@kaeri.re.kr

    2016-11-15

    Highlights: • Comparison of elevated temperature design (ETD) codes was made. • Material properties and evaluation procedures were compared. • Two heat-resistant materials of Grade 91 steel and austenitic stainless steel 316 are the target materials in the present study. • Application of the ETD codes to Generation IV reactor components and a comparison of the conservatism was conducted. - Abstract: The elevated temperature design (ETD) codes are used for the design evaluation of Generation IV (Gen IV) reactor systems such as sodium-cooled fast reactor (SFR), lead-cooled fast reactor (LFR), and very high temperature reactor (VHTR). In the present study, ETD code comparisons were made in terms of the material properties and design evaluation procedures for the recent versions of the two major ETD codes, ASME Section III Subsection NH and RCC-MRx. Conservatism in the design evaluation procedures was quantified and compared based on the evaluation results for SFR components as per the two ETD codes. The target materials are austenitic stainless steel 316 and Mod.9Cr-1Mo steel, which are the major two materials in a Gen IV SFR. The differences in the design evaluation procedures as well as the material properties in the two ETD codes are highlighted.

  8. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...
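    The distributed-source-coding setup described here can be sketched with the smallest binary BCH code, Hamming(7,4). This is a generic syndrome-coding illustration, not the paper's rate-adaptive scheme.

```python
# Minimal syndrome-coding sketch of distributed source coding (not the
# paper's rate-adaptive scheme): with Hamming(7,4), the smallest binary BCH
# code, the X-encoder sends only a 3-bit syndrome instead of 7 bits, and the
# decoder recovers X from the syndrome plus side information Y, assumed to
# differ from X in at most one bit.

# Parity-check matrix H of Hamming(7,4); column i is the binary expansion of i + 1.
H = [[(i + 1) >> b & 1 for i in range(7)] for b in range(3)]

def syndrome(word):
    """3-bit syndrome s = H * word over GF(2)."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def dsc_decode(s_x, y):
    """Recover x from its syndrome and correlated side information y."""
    s_y = syndrome(y)
    diff = [a ^ b for a, b in zip(s_x, s_y)]   # syndrome of the error x + y
    pos = diff[0] + 2 * diff[1] + 4 * diff[2]  # 1-based column index, 0 = no error
    x_hat = list(y)
    if pos:
        x_hat[pos - 1] ^= 1                    # flip the single differing bit
    return x_hat

x = [1, 0, 1, 1, 0, 0, 1]
y = [1, 0, 1, 0, 0, 0, 1]                      # differs from x in one position
assert dsc_decode(syndrome(x), y) == x         # 3 transmitted bits instead of 7
```

A rate-adaptive scheme would send further parity bits over the feedback channel when decoding fails; that refinement is beyond this sketch.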

  9. Seismic Consequence Abstraction

    International Nuclear Information System (INIS)

    Gross, M.

    2004-01-01

    The primary purpose of this model report is to develop abstractions for the response of engineered barrier system (EBS) components to seismic hazards at a geologic repository at Yucca Mountain, Nevada, and to define the methodology for using these abstractions in a seismic scenario class for the Total System Performance Assessment - License Application (TSPA-LA). A secondary purpose of this model report is to provide information for criticality studies related to seismic hazards. The seismic hazards addressed herein are vibratory ground motion, fault displacement, and rockfall due to ground motion. The EBS components are the drip shield, the waste package, and the fuel cladding. The requirements for development of the abstractions and the associated algorithms for the seismic scenario class are defined in ''Technical Work Plan For: Regulatory Integration Modeling of Drift Degradation, Waste Package and Drip Shield Vibratory Motion and Seismic Consequences'' (BSC 2004 [DIRS 171520]). The development of these abstractions will provide a more complete representation of flow into and transport from the EBS under disruptive events. The results from this development will also address portions of integrated subissue ENG2, Mechanical Disruption of Engineered Barriers, including the acceptance criteria for this subissue defined in Section 2.2.1.3.2.3 of the ''Yucca Mountain Review Plan, Final Report'' (NRC 2003 [DIRS 163274])

  11. Self-complementary circular codes in coding theory.

    Science.gov (United States)

    Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz

    2018-04-01

    Self-complementary circular codes are involved in pairing genetic processes. A maximal [Formula: see text] self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16 2017, J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph [Formula: see text] associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size ([Formula: see text] trinucleotides), in particular for maximal circular codes ([Formula: see text] trinucleotides). For codes of small size ([Formula: see text] trinucleotides), some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the reading (correct) frame, an important criterion for analyzing the X motifs in genes in the future.
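
    The two properties named above can be sketched for trinucleotide codes. The self-complementarity check is direct; the circularity check relies on the published criterion of Fimmel et al. (2016) that a code is circular iff its associated directed graph is acyclic. Helper names are ours:

```python
COMP = {"A": "T", "T": "A", "C": "G", "G": "C"}

def self_complementary(code):
    # X is self-complementary if the reversed complement of every
    # trinucleotide in X also belongs to X.
    return all("".join(COMP[b] for b in reversed(t)) in code for t in code)

def is_circular(code):
    # Graph of Fimmel et al. (2016): each trinucleotide b1b2b3 contributes
    # edges b1 -> b2b3 and b1b2 -> b3; X is circular iff the graph is acyclic.
    edges = {}
    for t in code:
        edges.setdefault(t[0], set()).add(t[1:])
        edges.setdefault(t[:2], set()).add(t[2])
    def cyclic(v, stack):
        if v in stack:
            return True
        stack.add(v)
        found = any(cyclic(w, stack) for w in edges.get(v, ()))
        stack.discard(v)
        return found
    return not any(cyclic(v, set()) for v in edges)

assert self_complementary({"AAT", "ATT"})   # ATT is the rev-comp of AAT
assert not self_complementary({"AAT"})
assert is_circular({"AAT"})
assert not is_circular({"AAA"})             # AAA repeated has no unique frame
```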

  12. Modal abstractions of concurrent behavior

    DEFF Research Database (Denmark)

    Nielson, Flemming; Nanz, Sebastian; Nielson, Hanne Riis

    2011-01-01

    We present an effective algorithm for the automatic construction of finite modal transition systems as abstractions of potentially infinite concurrent processes. Modal transition systems are recognized as valuable abstractions for model checking because they allow for the validation as well as refutation of safety and liveness properties. However, the algorithmic construction of finite abstractions from potentially infinite concurrent processes is a missing link that prevents their more widespread usage for model checking of concurrent systems. Our algorithm is a worklist algorithm using concepts from abstract interpretation and operating upon mappings from sets to intervals in order to express simultaneous over- and underapproximations of the multisets of process actions available in a particular state. We obtain a finite abstraction that is 3-valued in both states and transitions...

  13. Diagonal Eigenvalue Unity (DEU) code for spectral amplitude coding-optical code division multiple access

    Science.gov (United States)

    Ahmed, Hassan Yousif; Nisar, K. S.

    2013-08-01

    Codes with ideal in-phase cross correlation (CC) and practical code length to support a high number of users are required in spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. SAC systems are becoming more attractive in the field of OCDMA because of their ability to eliminate the influence of multiple access interference (MAI) and also suppress the effect of phase induced intensity noise (PIIN). In this paper, we propose new Diagonal Eigenvalue Unity (DEU) code families with ideal in-phase CC based on the Jordan block matrix, constructed by simple algebraic methods. Four sets of DEU code families based on the code weight W and number of users N for the combinations (even, even), (even, odd), (odd, odd) and (odd, even) are constructed. This combination gives the DEU code more flexibility in the selection of code weight and number of users. These features make this code a compelling candidate for future optical communication systems. Numerical results show that the proposed DEU system outperforms reported codes. In addition, simulation results taken from a commercial optical systems simulator, Virtual Photonic Instrument (VPI™), show that, using point to multipoint transmission in a passive optical network (PON), DEU has better performance and could support long spans with high data rates.

  14. List Decoding of Matrix-Product Codes from nested codes: an application to Quasi-Cyclic codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Høholdt, Tom; Ruano, Diego

    2012-01-01

    A list decoding algorithm for matrix-product codes is provided when $C_1,..., C_s$ are nested linear codes and $A$ is a non-singular by columns matrix. We estimate the probability of getting more than one codeword as output when the constituent codes are Reed-Solomon codes. We extend this list decoding algorithm for matrix-product codes with polynomial units, which are quasi-cyclic codes. Furthermore, it allows us to consider unique decoding for matrix-product codes with polynomial units.
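
    As a rough illustration of the matrix-product construction itself (not the paper's list decoder), the codewords $[c_1, c_2]A$ can be enumerated for two small nested binary codes and the classic $(u, u+v)$ matrix; all concrete codes below are our own toy choices:

```python
from itertools import product

def add(u, v):
    return tuple((a + b) % 2 for a, b in zip(u, v))

def matrix_product_code(C1, C2, A):
    # Codewords of [C1 C2]*A: column j of A says which c_i are XORed
    # into block j. A = [[1, 1], [0, 1]] is the (u, u+v) construction.
    words = set()
    for c1, c2 in product(C1, C2):
        blocks = []
        for col in range(len(A[0])):
            block = tuple([0] * len(c1))
            for row, c in enumerate((c1, c2)):
                if A[row][col]:
                    block = add(block, c)
            blocks.append(block)
        words.add(tuple(b for blk in blocks for b in blk))
    return words

C1 = {(0, 0), (0, 1), (1, 0), (1, 1)}   # [2, 2] code
C2 = {(0, 0), (1, 1)}                   # [2, 1] repetition code, C2 inside C1
A = [[1, 1], [0, 1]]                    # non-singular by columns
C = matrix_product_code(C1, C2, A)
assert len(C) == 8                      # |C1| * |C2| distinct codewords
assert min(sum(w) for w in C if any(w)) == 2   # min(2*d(C1), d(C2))
```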

  15. The deleuzian abstract machines

    DEFF Research Database (Denmark)

    Werner Petersen, Erik

    2005-01-01

    To most people the concept of abstract machines is connected to the name of Alan Turing and the development of the modern computer. The Turing machine is universal, axiomatic and symbolic (e.g., operating on symbols). Inspired by Foucault, Deleuze and Guattari extended the concept of abstract...

  16. Typesafe Abstractions for Tensor Operations

    OpenAIRE

    Chen, Tongfei

    2017-01-01

    We propose a typesafe abstraction to tensors (i.e. multidimensional arrays) exploiting the type-level programming capabilities of Scala through heterogeneous lists (HList), and showcase typesafe abstractions of common tensor operations and various neural layers such as convolution or recurrent neural networks. This abstraction could lay the foundation of future typesafe deep learning frameworks that run on Scala/JVM.
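
    A rough Python stand-in for the idea: the paper encodes axis information at the Scala type level, whereas the sketch below checks axis names at run time. All class and method names here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Tensor:
    axes: tuple   # axis names, e.g. ("batch", "feature")
    data: list    # flat 1-D payload, enough for this sketch

    def contract(self, other, axis):
        # Sum-product contraction over a shared named axis; mismatched
        # axis names are rejected, mimicking what a type-level encoding
        # (HLists in the paper) would reject at compile time instead.
        if axis not in self.axes or axis not in other.axes:
            raise TypeError(f"both tensors need axis {axis!r}")
        return sum(a * b for a, b in zip(self.data, other.data))

v = Tensor(axes=("feature",), data=[1.0, 2.0, 3.0])
w = Tensor(axes=("feature",), data=[0.5, 0.5, 0.5])
assert v.contract(w, "feature") == 3.0
```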

  17. Deep generative learning of location-invariant visual word recognition

    Science.gov (United States)

    Di Bono, Maria Grazia; Zorzi, Marco

    2013-01-01

    It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words, which was the model's learning objective, is largely based on letter-level information.
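
    The open-bigram scheme mentioned above (one of the letter-position proposals the model is compared against) is easy to sketch; the gap size and the Jaccard similarity measure below are illustrative choices, not the paper's:

```python
def open_bigrams(word, max_gap=2):
    # Open-bigram code: the set of ordered letter pairs whose members are
    # at most max_gap intervening letters apart in the word.
    return {word[i] + word[j]
            for i in range(len(word))
            for j in range(i + 1, min(i + max_gap + 2, len(word)))}

def similarity(a, b):
    # Jaccard overlap between the two bigram sets.
    ga, gb = open_bigrams(a), open_bigrams(b)
    return len(ga & gb) / len(ga | gb)

# A transposed-letter prime ("jugde") shares far more open bigrams with
# "judge" than a substitution prime ("junpe") does, mirroring the
# relative-position and transposition priming effects cited above.
assert similarity("jugde", "judge") > similarity("junpe", "judge")
```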

  20. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover the ``unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to introduce the canonical decomposition of a code in at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover we conjecture that the canonical partition satisfies such a hypothesis. Finally we consider also some relationships between coding partitions and varieties of codes.
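
    Unique decipherability itself, the baseline condition the paper weakens, can be tested with the classical Sardinas-Patterson procedure. A compact sketch (not from the paper):

```python
def is_uniquely_decipherable(code):
    # Sardinas-Patterson test: build successive sets of "dangling suffixes";
    # the code is UD iff no codeword ever appears among them.
    def residuals(A, B):
        out = set()
        for a in A:
            for b in B:
                if a != b or A is not B:
                    if b.startswith(a):
                        out.add(b[len(a):])
                    if a.startswith(b):
                        out.add(a[len(b):])
        return out - {""}
    S = residuals(code, code)
    seen = set()
    while S and not (S & code):
        if frozenset(S) in seen:
            return True        # sets repeat without hitting a codeword
        seen.add(frozenset(S))
        S = residuals(S, code)
    return not (S & code)

assert is_uniquely_decipherable({"0", "10", "110"})     # prefix-free, hence UD
assert not is_uniquely_decipherable({"a", "ab", "ba"})  # "aba" parses two ways
```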

  1. Inventory Abstraction

    International Nuclear Information System (INIS)

    Leigh, C.

    2000-01-01

    The purpose of the inventory abstraction as directed by the development plan (CRWMS M and O 1999b) is to: (1) Interpret the results of a series of relative dose calculations (CRWMS M and O 1999c, 1999d). (2) Recommend, including a basis thereof, a set of radionuclides that should be modeled in the Total System Performance Assessment in Support of the Site Recommendation (TSPA-SR) and the Total System Performance Assessment in Support of the Final Environmental Impact Statement (TSPA-FEIS). (3) Provide initial radionuclide inventories for the TSPA-SR and TSPA-FEIS models. (4) Answer the U.S. Nuclear Regulatory Commission (NRC)'s Issue Resolution Status Report ''Key Technical Issue: Container Life and Source Term'' (CLST IRSR) (NRC 1999) key technical issue (KTI): ''The rate at which radionuclides in SNF [Spent Nuclear Fuel] are released from the EBS [Engineered Barrier System] through the oxidation and dissolution of spent fuel'' (Subissue 3). The scope of the radionuclide screening analysis encompasses the period from 100 years to 10,000 years after the potential repository at Yucca Mountain is sealed for scenarios involving the breach of a waste package and subsequent degradation of the waste form as required for the TSPA-SR calculations. By extending the time period considered to one million years after repository closure, recommendations are made for the TSPA-FEIS. The waste forms included in the inventory abstraction are Commercial Spent Nuclear Fuel (CSNF), DOE Spent Nuclear Fuel (DSNF), High-Level Waste (HLW), naval Spent Nuclear Fuel (SNF), and U.S. Department of Energy (DOE) plutonium waste. The intended use of this analysis is in TSPA-SR and TSPA-FEIS. Based on the recommendations made here, models for release, transport, and possibly exposure will be developed for the isotopes that would be the highest contributors to the dose given a release to the accessible environment. The inventory abstraction is important in assessing system performance because

  2. Safety analysis code SCTRAN development for SCWR and its application to CGNPC SCWR

    International Nuclear Information System (INIS)

    Wu, Pan; Gou, Junli; Shan, Jianqiang; Jiang, Yang; Yang, Jue; Zhang, Bo

    2013-01-01

    Highlights: ► A new safety analysis code named SCTRAN is developed for SCWRs. ► Capability of SCTRAN is verified by comparing with code APROS and RELAP5-3D. ► A new passive safety system is proposed for CGNPC SCWR and analyzed with SCTRAN. ► CGNPC SCWR is able to cope with two critical accidents for SCWRs, LOFA and LOCA. - Abstract: Design analysis is one of the main difficulties during the research and design of SCWRs. Currently, the development of safety analysis code for SCWR is still in its infancy all around the world, and very few computer codes could carry out the trans-critical calculations where significant changes in water properties would take place. In this paper, a safety analysis code SCTRAN for SCWRs has been developed based on code RETRAN-02, the best estimate code used for safety analysis of light water reactors. The ability of SCTRAN code to simulate transients where both supercritical and subcritical regimes are encountered has been verified by comparing with APROS and RELAP5-3D codes. Furthermore, the LOFA and LOCA transients for the CGNPC SCWR design were analyzed with SCTRAN code. The characteristics and performance of the passive safety systems applied to CGNPC SCWR were evaluated. The results show that: (1) The SCTRAN computer code developed in this study is capable to perform design analysis for SCWRs; (2) During LOFA and LOCA accidents in a CGNPC SCWR, the passive safety systems would significantly mitigate the consequences of these transients and enhance the inherent safety

  3. Combinatorial neural codes from a mathematical coding theory perspective.

    Science.gov (United States)

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
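
    The stimulus-distance property credited to receptive field codes above can be sketched with a toy 1-D example; the fields and the encoding rule are our own illustrative choices:

```python
# Toy 1-D receptive field (RF) code: each neuron fires when the stimulus
# falls inside its interval, so a stimulus maps to a binary codeword.
fields = [(0, 4), (2, 6), (4, 8), (6, 10)]

def encode(stimulus):
    return tuple(int(lo <= stimulus < hi) for lo, hi in fields)

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

# Overlapping fields make codeword distance track stimulus distance, the
# property the paper credits RF codes with preserving.
assert hamming(encode(1), encode(3)) < hamming(encode(1), encode(7))
```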

  4. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    Full Text Available We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
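
    The systematic LDGM idea can be sketched in a few lines: each parity bit is the XOR of a small, fixed subset of message bits, so the generator matrix is sparse. The particular connection pattern below is an arbitrary toy example:

```python
# Sparse generator sketch: parity bit j is the XOR of a few message bits,
# so encoding cost scales with the number of ones in the generator.
P = {0: [0, 2], 1: [1, 2], 2: [0, 1], 3: [2, 3]}

def ldgm_encode(msg):
    parity = [sum(msg[i] for i in rows) % 2 for _, rows in sorted(P.items())]
    return msg + parity        # systematic: message bits pass through

word = ldgm_encode([1, 0, 1, 1])
assert word == [1, 0, 1, 1, 0, 1, 1, 0]
```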

  5. Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications

    Science.gov (United States)

    OKeefe, Matthew (Editor); Kerr, Christopher L. (Editor)

    1998-01-01

    This report contains the abstracts and technical papers from the Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications, held June 15-18, 1998, in Scottsdale, Arizona. The purpose of the workshop is to bring together software developers in meteorology and oceanography to discuss software engineering and code design issues for parallel architectures, including Massively Parallel Processors (MPP's), Parallel Vector Processors (PVP's), Symmetric Multi-Processors (SMP's), Distributed Shared Memory (DSM) multi-processors, and clusters. Issues to be discussed include: (1) code architectures for current parallel models, including basic data structures, storage allocation, variable naming conventions, coding rules and styles, i/o and pre/post-processing of data; (2) designing modular code; (3) load balancing and domain decomposition; (4) techniques that exploit parallelism efficiently yet hide the machine-related details from the programmer; (5) tools for making the programmer more productive; and (6) the proliferation of programming models (F--, OpenMP, MPI, and HPF).

  6. Characterisation of metal combustion with DUST code

    Energy Technology Data Exchange (ETDEWEB)

    García-Cascales, José R., E-mail: jr.garcia@upct.es [DITF, ETSII, Universidad Politécnica de Cartagena, Dr Fleming s/n, 30202 Murcia (Spain); Velasco, F.J.S. [Centro Universitario de la Defensa de San Javier, MDE-UPCT, C/Coronel Lopez Peña s/n, 30730 Murcia (Spain); Otón-Martínez, Ramón A.; Espín-Tolosa, S. [DITF, ETSII, Universidad Politécnica de Cartagena, Dr Fleming s/n, 30202 Murcia (Spain); Bentaib, Ahmed; Meynet, Nicolas; Bleyer, Alexandre [Institut de Radioprotection et Sûreté Nucléaire, BP 17, 92260 Fontenay-aux-Roses (France)

    2015-10-15

    Highlights: • This paper is part of the work carried out by researchers of the Technical University of Cartagena, Spain and the Institute of Radioprotection and Nuclear Security of France. • We have developed a code, called DUST, for the study of mobilisation and combustion, using CAST3M, multipurpose software for studying many different problems in Mechanical Engineering. • In this paper, we present the model implemented in the code to characterise metal combustion, which describes the combustion model and the kinetic reaction rates adopted, and includes a first comparison between experimental data and calculated ones. • The results are quite promising, although they suggest that improvements must be made to the kinetics of the reactions taking place. - Abstract: The DUST code is a CFD code developed by the Technical University of Cartagena, Spain and the Institute of Radioprotection and Nuclear Security, France (IRSN) with the objective of assessing the dust explosion hazard in the vacuum vessel of ITER. Thus, the DUST code permits the analysis of dust spatial distribution, remobilisation and entrainment, explosion, and combustion. Some assumptions such as particle incompressibility and negligible effect of pressure on the solid phase make the model quite appealing from the mathematical point of view, as the systems of equations that characterise the behaviour of the solid and gaseous phases are decoupled. The objective of this work is to present the model implemented in the code to characterise metal combustion. In order to evaluate its ability to analyse reactive mixtures of multicomponent gases and multicomponent solids, two combustion problems are studied, namely H{sub 2}/N{sub 2}/O{sub 2}/C and H{sub 2}/N{sub 2}/O{sub 2}/W mixtures. The system of equations considered and the finite volume approach are briefly presented. The closure relationships used are commented on, and special attention is paid to the reaction rate correlations used in the model. The numerical

  7. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  8. Methods for Coding Tobacco-Related Twitter Data: A Systematic Review.

    Science.gov (United States)

    Lienemann, Brianna A; Unger, Jennifer B; Cruz, Tess Boley; Chu, Kar-Hai

    2017-03-31

    As Twitter has grown in popularity to 313 million monthly active users, researchers have increasingly been using it as a data source for tobacco-related research. The objective of this systematic review was to assess the methodological approaches of categorically coded tobacco Twitter data and make recommendations for future studies. Data sources included PsycINFO, Web of Science, PubMed, ABI/INFORM, Communication Source, and Tobacco Regulatory Science. Searches were limited to peer-reviewed journals and conference proceedings in English from January 2006 to July 2016. The initial search identified 274 articles using a Twitter keyword and a tobacco keyword. One coder reviewed all abstracts and identified 27 articles that met the following inclusion criteria: (1) original research, (2) focused on tobacco or a tobacco product, (3) analyzed Twitter data, and (4) coded Twitter data categorically. One coder extracted data collection and coding methods. E-cigarettes were the most common type of Twitter data analyzed, followed by specific tobacco campaigns. The most prevalent data sources were Gnip and Twitter's Streaming application programming interface (API). The primary methods of coding were hand-coding and machine learning. The studies predominantly coded for relevance, sentiment, theme, user or account, and location of user. Standards for data collection and coding should be developed to be able to more easily compare and replicate tobacco-related Twitter results. Additional recommendations include the following: sample Twitter's databases multiple times, make a distinction between message attitude and emotional tone for sentiment, code images and URLs, and analyze user profiles. Being relatively novel and widely used among adolescents and black and Hispanic individuals, Twitter could provide a rich source of tobacco surveillance data among vulnerable populations. ©Brianna A Lienemann, Jennifer B Unger, Tess Boley Cruz, Kar-Hai Chu. Originally published in the
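
    A minimal sketch of categorical coding along two of the dimensions the review found most common (relevance and sentiment), using keyword rules; all keyword lists and labels here are invented for illustration, not drawn from the reviewed studies:

```python
# Hypothetical keyword rules standing in for a hand-coding codebook or a
# trained classifier.
POSITIVE = {"love", "great", "good"}
NEGATIVE = {"quit", "bad", "gross"}
TOPIC = {"ecig", "e-cigarette", "vape", "tobacco"}

def code_tweet(text):
    words = set(text.lower().split())
    relevant = bool(words & TOPIC)
    if words & POSITIVE:
        sentiment = "positive"
    elif words & NEGATIVE:
        sentiment = "negative"
    else:
        sentiment = "neutral"
    return {"relevant": relevant, "sentiment": sentiment}

assert code_tweet("I love my new vape") == {"relevant": True,
                                            "sentiment": "positive"}
```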

  9. Towards Analysis-Driven Scientific Software Architecture: The Case for Abstract Data Type Calculus

    Directory of Open Access Journals (Sweden)

    Damian W.I. Rouson

    2008-01-01

    Full Text Available This article approaches scientific software architecture from three analytical paths. Each path examines discrete time advancement of multiphysics phenomena governed by coupled differential equations. The new object-oriented Fortran 2003 constructs provide a formal syntax for an abstract data type (ADT) calculus. The first analysis uses traditional object-oriented software design metrics to demonstrate the high cohesion and low coupling associated with the calculus. A second analysis from the viewpoint of computational complexity theory demonstrates that a more representative bug search strategy than that considered by Rouson et al. (ACM Trans. Math. Soft. 34(1) (2008)) reduces the number of lines searched in a code with λ total lines from O(λ²) to O(λ log₂ λ), which in turn becomes nearly independent of the overall code size in the context of ADT calculus. The third analysis derives from information theory an argument that ADT calculus simplifies developer communications in part by minimizing the growth in interface information content as developers add new physics to a multiphysics package.
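
    The logarithmic factor in the bug-search bound comes from bisection: each probe halves the suspect range. A generic sketch (the probe function and defect location are hypothetical):

```python
import math

def bisect_bug(is_broken, n):
    # Invariant: line lo is before the bug, line hi is at or past it;
    # each probe halves the window, so about log2(n) probes suffice.
    lo, hi, probes = 0, n, 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        probes += 1
        if is_broken(mid):
            hi = mid
        else:
            lo = mid
    return lo, probes

bug_line = 7321                            # hypothetical defect location
found, probes = bisect_bug(lambda k: k > bug_line, 10_000)
assert found == bug_line
assert probes <= math.ceil(math.log2(10_000))   # 14 probes for 10,000 lines
```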

  10. The Concreteness Effect and the Bilingual Lexicon: The Impact of Visual Stimuli Attachment on Meaning Recall of Abstract L2 Words

    Science.gov (United States)

    Farley, Andrew P.; Ramonda, Kris; Liu, Xun

    2012-01-01

    According to the Dual-Coding Theory (Paivio & Desrochers, 1980), words that are associated with rich visual imagery are more easily learned than abstract words due to what is termed the concreteness effect (Altarriba & Bauer, 2004; de Groot, 1992, de Groot et al., 1994; ter Doest & Semin, 2005). The present study examined the effects of attaching…

  11. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress that the workgroup on Low-Density Parity-Check (LDPC) for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  12. Nuclear medicine. Abstracts; Nuklearmedizin 2000. Abstracts

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    2000-07-01

    This issue of the journal contains the abstracts of the 183 conference papers as well as 266 posters presented at the conference. Subject fields covered are: Neurology, psychology, oncology, pediatrics, radiopharmacy, endocrinology, EDP, measuring equipment and methods, radiological protection, cardiology, and therapy. (orig./CB) [German original] The present journal contains the abstracts of the 183 papers given at the conference as well as of the 226 posters presented, which dealt with the following topics: neurology, psychiatry, oncology, pediatrics, radiopharmacy, endocrinology, data processing, measurement techniques, radiation protection, cardiology, and therapy. (MG)

  13. Mechanical Engineering Department technical abstracts

    International Nuclear Information System (INIS)

    Denney, R.M.

    1982-01-01

    The Mechanical Engineering Department publishes listings of technical abstracts twice a year to inform readers of the broad range of technical activities in the Department, and to promote an exchange of ideas. Details of the work covered by an abstract may be obtained by contacting the author(s). Overall information about current activities of each of the Department's seven divisions precedes the technical abstracts

  14. Code-To-Code Benchmarking Of The Porflow And GoldSim Contaminant Transport Models Using A Simple 1-D Domain - 11191

    International Nuclear Information System (INIS)

    Hiergesell, R.; Taylor, G.

    2010-01-01

    An investigation was conducted to compare and evaluate contaminant transport results of two model codes, GoldSim and Porflow, using a simple 1-D string of elements in each code. Model domains were constructed to be identical with respect to cell numbers and dimensions, matrix material, flow boundary and saturation conditions. One of the codes, GoldSim, does not simulate advective movement of water; therefore the water flux term was specified as a boundary condition. In the other code, Porflow, a steady-state flow field was computed and contaminant transport was simulated within that flow field. The comparisons were made solely in terms of the ability of each code to perform contaminant transport. The purpose of the investigation was to establish a basis for, and to validate, follow-on work in which a 1-D GoldSim model was developed by abstracting information from Porflow 2-D and 3-D unsaturated and saturated zone models and was then benchmarked to produce equivalent contaminant transport results. A handful of contaminants were selected for the code-to-code comparison simulations, including a non-sorbing tracer and several long- and short-lived radionuclides ranging from non-sorbing to strongly-sorbing with respect to the matrix material, several of which required the simulation of in-growth of daughter radionuclides. The same diffusion and partitioning coefficients associated with each contaminant and the half-lives associated with each radionuclide were incorporated into each model. A string of 10 elements, having identical spatial dimensions and properties, was constructed within each code. GoldSim's basic contaminant transport elements, mixing cells, were utilized in this construction. Sand was established as the matrix material and was assigned identical properties (e.g. bulk density, porosity, saturated hydraulic conductivity) in both codes.
Boundary conditions applied included an influx of water at the rate of 40 cm/yr at one
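The mixing-cell setup described above can be sketched in a few lines. The following is an illustrative tanks-in-series transport model, not the actual GoldSim or Porflow configuration; the 10-cell string and 40 cm/yr flux come from the abstract, while the porosity, cell length, half-life, and time step are assumed for the example.

```python
import math

def mixing_cell_transport(n_cells=10, flux=40.0, cell_len=10.0, porosity=0.3,
                          retardation=1.0, half_life=math.inf,
                          inlet_conc=1.0, years=50.0, dt=0.01):
    """Explicit tanks-in-series contaminant transport: each cell is flushed
    at flux/(porosity*cell_len) turnovers per year, slowed by a linear
    retardation factor, with first-order radioactive decay."""
    decay = math.log(2.0) / half_life if math.isfinite(half_life) else 0.0
    rate = flux / (porosity * cell_len * retardation)   # cell turnovers per year
    conc = [0.0] * n_cells
    for _ in range(int(years / dt)):
        upstream = inlet_conc
        new = []
        for c in conc:
            # advective exchange with the upstream cell plus decay loss
            new.append(c + dt * (rate * (upstream - c) - decay * c))
            upstream = c
        conc = new
    return conc

# Non-sorbing tracer vs. a short-lived nuclide (half-life assumed to be 1 yr):
tracer = mixing_cell_transport()
decayed = mixing_cell_transport(half_life=1.0)
```

At steady state the tracer breaks through fully, while each cell attenuates the decaying nuclide by the factor rate/(rate + λ), which is the kind of analytic check a code-to-code benchmark relies on.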

  15. Nuclear science references coding manual

    International Nuclear Information System (INIS)

    Ramavataram, S.; Dunford, C.L.

    1996-08-01

    This manual is intended as a guide to Nuclear Science References (NSR) compilers. The basic conventions followed at the National Nuclear Data Center (NNDC), which are compatible with the maintenance and updating of and retrieval from the Nuclear Science References (NSR) file, are outlined. In Section II, the structure of the NSR file, such as the valid record identifiers, record contents, and text fields, as well as the major TOPICS for which keyword abstracts are prepared, are enumerated. Relevant comments regarding a new entry into the NSR file, assignment of , generation of and linkage characteristics are also given in Section II. In Section III, a brief definition of the Keyword abstract is given, followed by specific examples; for each TOPIC, the criteria for inclusion of an article as an entry into the NSR file as well as coding procedures are described. Authors preparing Keyword abstracts either to be published in a journal (e.g., Nucl. Phys. A) or to be sent directly to NNDC (e.g., Phys. Rev. C) should follow the illustrations in Section III. The scope of the literature covered at the NNDC, the categorization into Primary and Secondary sources, etc., is discussed in Section IV. Useful information regarding permitted character sets, recommended abbreviations, etc., is given in Section V as Appendices.

  16. Assessment of subchannel code ASSERT-PV for flow-distribution predictions

    International Nuclear Information System (INIS)

    Nava-Dominguez, A.; Rao, Y.F.; Waddington, G.M.

    2014-01-01

    Highlights: • Assessment of the subchannel code ASSERT-PV 3.2 for the prediction of flow distribution. • Open literature and in-house experimental data to quantify ASSERT-PV predictions. • Model changes assessed against vertical and horizontal flow experiments. • Improvement of flow-distribution predictions under CANDU-relevant conditions. - Abstract: This paper reports an assessment of the recently released subchannel code ASSERT-PV 3.2 for the prediction of flow-distribution in fuel bundles, including subchannel void fraction, quality and mass fluxes. Experimental data from open literature and from in-house tests are used to assess the flow-distribution models in ASSERT-PV 3.2. The prediction statistics using the recommended model set of ASSERT-PV 3.2 are compared to those from previous code versions. Separate-effects sensitivity studies are performed to quantify the contribution of each flow-distribution model change or enhancement to the improvement in flow-distribution prediction. The assessment demonstrates significant improvement in the prediction of flow-distribution in horizontal fuel channels containing CANDU bundles

  17. Letter position coding across modalities: the case of Braille readers.

    Science.gov (United States)

    Perea, Manuel; García-Chamorro, Cristina; Martín-Suesta, Miguel; Gómez, Pablo

    2012-01-01

    The question of how the brain encodes letter position in written words has attracted increasing attention in recent years. A number of models have recently been proposed to accommodate the fact that transposed-letter stimuli like jugde or caniso are perceptually very close to their base words. Here we examined how letter position coding is attained in the tactile modality via Braille reading. The idea is that Braille word recognition may involve more serial processing than reading in the visual modality, and this may produce differences in the input coding schemes employed to encode letters in written words. To that end, we conducted a lexical decision experiment with adult Braille readers in which the pseudowords were created by transposing/replacing two letters. We found a word-frequency effect for words. In addition, unlike parallel experiments in the visual modality, we failed to find any clear signs of transposed-letter confusability effects. This dissociation highlights the differences between modalities. The present data argue against models of letter position coding that assume that transposed-letter effects (in the visual modality) occur at a relatively late, abstract locus.
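The stimulus manipulation described above, pseudowords created by transposing or replacing two adjacent letters, can be sketched as follows. This is an illustrative generator, not the authors' actual materials.

```python
import random

def transposed_pseudoword(word: str, i: int) -> str:
    """Swap the letters at positions i and i+1 (e.g. 'judge' -> 'jugde')."""
    s = list(word)
    s[i], s[i + 1] = s[i + 1], s[i]
    return "".join(s)

def replaced_pseudoword(word: str, i: int,
                        alphabet="abcdefghijklmnopqrstuvwxyz") -> str:
    """Replace the letters at positions i and i+1 with different letters:
    the control condition matched to the transposition."""
    s = list(word)
    for j in (i, i + 1):
        s[j] = random.choice([c for c in alphabet if c != s[j]])
    return "".join(s)
```

Transposed items share all of the base word's letters, whereas replacement items do not, which is what lets a lexical decision task isolate position coding from identity coding.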

  18. Assessment of subchannel code ASSERT-PV for flow-distribution predictions

    Energy Technology Data Exchange (ETDEWEB)

    Nava-Dominguez, A., E-mail: navadoma@aecl.ca; Rao, Y.F., E-mail: raoy@aecl.ca; Waddington, G.M., E-mail: waddingg@aecl.ca

    2014-08-15

    Highlights: • Assessment of the subchannel code ASSERT-PV 3.2 for the prediction of flow distribution. • Open literature and in-house experimental data to quantify ASSERT-PV predictions. • Model changes assessed against vertical and horizontal flow experiments. • Improvement of flow-distribution predictions under CANDU-relevant conditions. - Abstract: This paper reports an assessment of the recently released subchannel code ASSERT-PV 3.2 for the prediction of flow-distribution in fuel bundles, including subchannel void fraction, quality and mass fluxes. Experimental data from open literature and from in-house tests are used to assess the flow-distribution models in ASSERT-PV 3.2. The prediction statistics using the recommended model set of ASSERT-PV 3.2 are compared to those from previous code versions. Separate-effects sensitivity studies are performed to quantify the contribution of each flow-distribution model change or enhancement to the improvement in flow-distribution prediction. The assessment demonstrates significant improvement in the prediction of flow-distribution in horizontal fuel channels containing CANDU bundles.

  19. Constraint-Based Abstract Semantics for Temporal Logic

    DEFF Research Database (Denmark)

    Banda, Gourinath; Gallagher, John Patrick

    2010-01-01

    Abstract interpretation provides a practical approach to verifying properties of infinite-state systems. We apply the framework of abstract interpretation to derive an abstract semantic function for the modal mu-calculus, which is the basis for abstract model checking. The abstract semantic funct...

  20. EBS Radionuclide Transport Abstraction

    International Nuclear Information System (INIS)

    Schreiner, R.

    2001-01-01

    The purpose of this work is to develop the Engineered Barrier System (EBS) radionuclide transport abstraction model, as directed by a written development plan (CRWMS M and O 1999a). This abstraction is the conceptual model that will be used to determine the rate of release of radionuclides from the EBS to the unsaturated zone (UZ) in the total system performance assessment-license application (TSPA-LA). In particular, this model will be used to quantify the time-dependent radionuclide releases from a failed waste package (WP) and their subsequent transport through the EBS to the emplacement drift wall/UZ interface. The development of this conceptual model will allow Performance Assessment Operations (PAO) and its Engineered Barrier Performance Department to provide a more detailed and complete EBS flow and transport abstraction. The results from this conceptual model will allow PAO to address portions of the key technical issues (KTIs) presented in three NRC Issue Resolution Status Reports (IRSRs): (1) the Evolution of the Near-Field Environment (ENFE), Revision 2 (NRC 1999a), (2) the Container Life and Source Term (CLST), Revision 2 (NRC 1999b), and (3) the Thermal Effects on Flow (TEF), Revision 1 (NRC 1998). The conceptual model for flow and transport in the EBS will be referred to as the ''EBS RT Abstraction'' in this analysis/modeling report (AMR). The scope of this abstraction and report is limited to flow and transport processes. More specifically, this AMR does not discuss elements of the TSPA-SR and TSPA-LA that relate to the EBS but are discussed in other AMRs. These elements include corrosion processes, radionuclide solubility limits, waste form dissolution rates and concentrations of colloidal particles that are generally represented as boundary conditions or input parameters for the EBS RT Abstraction.
In effect, this AMR provides the algorithms for transporting radionuclides using the flow geometry and radionuclide concentrations determined by other

  1. New quantum codes constructed from quaternary BCH codes

    Science.gov (United States)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Secondly, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  2. Article Abstract

    African Journals Online (AJOL)

    Abstract. Simple learning tools to improve clinical laboratory practical skills training. B Taye, BSc, MPH. Addis Ababa University, College of Health Sciences, Addis Ababa, ... concerns about the competence of medical laboratory science graduates. ... standardised practical learning guides and assessment checklists would.

  3. Studies on spelling in the context of dyslexia: a literature review

    Directory of Open Access Journals (Sweden)

    Luciana Cidrim

    Full Text Available ABSTRACT This paper aimed at reviewing the national and international literature on spelling difficulties of dyslexics and at identifying the intervention approaches applied to this topic. An integrative review of the literature was carried out in order to answer the question: considering that mastering orthography is one of the challenges frequently faced by dyslexics, how are studies on the relationship between dyslexia and spelling characterized? The search was carried out on the PubMed platform, the Scopus database and the Portal de Periódicos CAPES/MEC. To search the articles, the following descriptors were used: "dislexia" or "dyslexia" with the free terms "ortografia" or "spelling". One aspect should be highlighted: some works indicate that difficulties in the spelling performance of dyslexics are not exclusively due to phonological processing failures - they are also secondary to alterations in orthographic processing. A challenge faced by dyslexics is to retain phonological information for use in writing new orthographic forms. Researchers suggest that intervention strategies include phonological, orthographic and lexical activities. Few studies have analyzed the difficulties that dyslexics face when dealing with new words, or when writing frequently used words of their own language correctly.

  4. Avaliação de escrita na dislexia do desenvolvimento: tipos de erros ortográficos em prova de nomeação de figuras por escrita Assessment of writing in developmental dyslexia: types of orthographic errors in the written version of a picture naming test

    Directory of Open Access Journals (Sweden)

    Maria José Cicero Oger Affonso

    2011-08-01

    Full Text Available PURPOSE: to evaluate the response pattern of dyslexic subjects in the written version of a picture naming task, by analyzing the types of orthographic errors made. METHODS: the performance of a group of 15 dyslexics was compared to that of two control groups, matched by age and by reading level. RESULTS: the dyslexic and the reading-level control groups did not differ in the number of errors made, but both made more errors than the age-matched control group. Regarding the types of orthographic errors, the most frequent errors among the dyslexic individuals were: errors of univocal grapheme-phoneme correspondence, omission of segments, and phoneme-grapheme correspondence independent of any rule. CONCLUSION: analyzing orthographic errors is useful for understanding the strategies used and the linguistic processes underlying the writing difficulties of dyslexic subjects.

  5. Water Pollution Abstracts. Volume 43, Number 4, Abstracts 645-849.

    Science.gov (United States)

    WATER POLLUTION, *ABSTRACTS, PURIFICATION, WASTES(INDUSTRIAL), CONTROL, SEWAGE, WATER SUPPLIES, PUBLIC HEALTH, PETROLEUM PRODUCTS, DEGRADATION, DAMS, ESTUARIES, PLANKTON, PHOTOSYNTHESIS, VIRUSES, SEA WATER, MICROBIOLOGY, UNITED KINGDOM.

  6. Entanglement-assisted quantum MDS codes from negacyclic codes

    Science.gov (United States)

    Lu, Liangdong; Li, Ruihu; Guo, Luobin; Ma, Yuena; Liu, Yang

    2018-03-01

    The entanglement-assisted formalism generalizes the standard stabilizer formalism: it can transform arbitrary classical linear codes into entanglement-assisted quantum error-correcting codes (EAQECCs) by using pre-shared entanglement between the sender and the receiver. In this work, we construct six classes of q-ary entanglement-assisted quantum MDS (EAQMDS) codes based on classical negacyclic MDS codes by exploiting two or more pre-shared maximally entangled states. We show that two of these six classes of q-ary EAQMDS codes have minimum distance larger than q+1. Most of these q-ary EAQMDS codes are new in the sense that their parameters are not covered by the codes available in the literature.

  7. Visualizing code and coverage changes for code review

    NARCIS (Netherlands)

    Oosterwaal, Sebastiaan; van Deursen, A.; De Souza Coelho, R.; Sawant, A.A.; Bacchelli, A.

    2016-01-01

    One of the tasks of reviewers is to verify that code modifications are well tested. However, current tools offer little support in understanding precisely how changes to the code relate to changes to the tests. In particular, it is hard to see whether (modified) test code covers the changed code.

  8. Homological stabilizer codes

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. - Highlights: • We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. • We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. • We find and classify all 2D homological stabilizer codes. • We find optimal codes among the homological stabilizer codes.
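The defining property of such stabilizer codes, that every X-type and Z-type generator commutes because their supports overlap on an even number of qubits, can be checked directly for a small toric code. The edge labeling below is one conventional layout for qubits on the edges of a periodic square lattice, assumed for this sketch.

```python
from itertools import product

L = 3  # 3x3 torus; the qubits live on the 2*L*L edges

def h_edge(r, c):
    """Horizontal edge leaving vertex (r, c) to the right (periodic)."""
    return ("h", r % L, c % L)

def v_edge(r, c):
    """Vertical edge leaving vertex (r, c) downward (periodic)."""
    return ("v", r % L, c % L)

def vertex_star(r, c):
    """Support of the vertex X stabilizer: the four edges meeting (r, c)."""
    return {h_edge(r, c), h_edge(r, c - 1), v_edge(r, c), v_edge(r - 1, c)}

def plaquette(r, c):
    """Support of the face Z stabilizer whose top-left vertex is (r, c)."""
    return {h_edge(r, c), h_edge(r + 1, c), v_edge(r, c), v_edge(r, c + 1)}

# An X and a Z stabilizer commute iff their supports overlap on an even
# number of qubits -- checked here for every vertex/plaquette pair.
commute = all(len(vertex_star(r, c) & plaquette(rr, cc)) % 2 == 0
              for r, c, rr, cc in product(range(L), repeat=4))
```

Adjacent star/plaquette pairs share exactly two edges and distant pairs share none, so all overlaps are even, which is why the toric code's generators form a valid stabilizer group.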

  9. An approach to verification and validation of MHD codes for fusion applications

    Energy Technology Data Exchange (ETDEWEB)

    Smolentsev, S., E-mail: sergey@fusion.ucla.edu [University of California, Los Angeles (United States); Badia, S. [Centre Internacional de Mètodes Numèrics en Enginyeria, Barcelona (Spain); Universitat Politècnica de Catalunya – Barcelona Tech (Spain); Bhattacharyay, R. [Institute for Plasma Research, Gandhinagar, Gujarat (India); Bühler, L. [Karlsruhe Institute of Technology (Germany); Chen, L. [University of Chinese Academy of Sciences, Beijing (China); Huang, Q. [Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui (China); Jin, H.-G. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Krasnov, D. [Technische Universität Ilmenau (Germany); Lee, D.-W. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Mas de les Valls, E. [Centre Internacional de Mètodes Numèrics en Enginyeria, Barcelona (Spain); Universitat Politècnica de Catalunya – Barcelona Tech (Spain); Mistrangelo, C. [Karlsruhe Institute of Technology (Germany); Munipalli, R. [HyPerComp, Westlake Village (United States); Ni, M.-J. [University of Chinese Academy of Sciences, Beijing (China); Pashkevich, D. [St. Petersburg State Polytechnical University (Russian Federation); Patel, A. [Universitat Politècnica de Catalunya – Barcelona Tech (Spain); Pulugundla, G. [University of California, Los Angeles (United States); Satyamurthy, P. [Bhabha Atomic Research Center (India); Snegirev, A. [St. Petersburg State Polytechnical University (Russian Federation); Sviridov, V. [Moscow Power Engineering Institute (Russian Federation); Swain, P. [Bhabha Atomic Research Center (India); and others

    2015-11-15

    Highlights: • Review of status of MHD codes for fusion applications. • Selection of five benchmark problems. • Guidance for verification and validation of MHD codes for fusion applications. - Abstract: We propose a new activity on verification and validation (V&V) of MHD codes presently employed by the fusion community as a predictive capability tool for liquid metal cooling applications, such as liquid metal blankets. The important steps in the development of MHD codes starting from the 1970s are outlined first and then basic MHD codes, which are currently in use by designers of liquid breeder blankets, are reviewed. A benchmark database of five problems has been proposed to cover a wide range of MHD flows from laminar fully developed to turbulent flows, which are of interest for fusion applications: (A) 2D fully developed laminar steady MHD flow, (B) 3D laminar, steady developing MHD flow in a non-uniform magnetic field, (C) quasi-two-dimensional MHD turbulent flow, (D) 3D turbulent MHD flow, and (E) MHD flow with heat transfer (buoyant convection). Finally, we introduce important details of the proposed activities, such as basic V&V rules and schedule. The main goal of the present paper is to help in establishing an efficient V&V framework and to initiate benchmarking among interested parties. The comparison results computed by the codes against analytical solutions and trusted experimental and numerical data as well as code-to-code comparisons will be presented and analyzed in companion paper/papers.
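Benchmark problem (A), 2D fully developed laminar steady MHD flow, admits the classical Hartmann analytic solution, which is the kind of trusted reference such a V&V exercise compares numerical results against. The normalization below (insulating walls at y = ±1, unit centerline velocity) is assumed for this sketch.

```python
import math

def hartmann_profile(y: float, Ha: float) -> float:
    """Velocity of fully developed laminar MHD channel flow between
    insulating walls at y = +/-1, for Hartmann number Ha, normalized
    to 1 at the centerline."""
    if Ha == 0.0:                  # hydrodynamic limit: Poiseuille parabola
        return 1.0 - y * y
    return (math.cosh(Ha) - math.cosh(Ha * y)) / (math.cosh(Ha) - 1.0)
```

As Ha grows the profile flattens in the core with thin Hartmann layers of thickness ~1/Ha at the walls, the behavior an MHD solver must reproduce to pass this benchmark.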

  10. INVENTORY ABSTRACTION

    International Nuclear Information System (INIS)

    Ragan, G.

    2001-01-01

    The purpose of the inventory abstraction, which has been prepared in accordance with a technical work plan (CRWMS M and O 2000e for ICN 02 of the present analysis, and BSC 2001e for ICN 03 of the present analysis), is to: (1) Interpret the results of a series of relative dose calculations (CRWMS M and O 2000c, 2000f). (2) Recommend, including a basis thereof, a set of radionuclides that should be modeled in the Total System Performance Assessment in Support of the Site Recommendation (TSPA-SR) and the Total System Performance Assessment in Support of the Final Environmental Impact Statement (TSPA-FEIS). (3) Provide initial radionuclide inventories for the TSPA-SR and TSPA-FEIS models. (4) Answer the U.S. Nuclear Regulatory Commission (NRC)'s Issue Resolution Status Report ''Key Technical Issue: Container Life and Source Term'' (CLST IRSR) key technical issue (KTI): ''The rate at which radionuclides in SNF [spent nuclear fuel] are released from the EBS [engineered barrier system] through the oxidation and dissolution of spent fuel'' (NRC 1999, Subissue 3). The scope of the radionuclide screening analysis encompasses the period from 100 years to 10,000 years after the potential repository at Yucca Mountain is sealed for scenarios involving the breach of a waste package and subsequent degradation of the waste form as required for the TSPA-SR calculations. By extending the time period considered to one million years after repository closure, recommendations are made for the TSPA-FEIS. The waste forms included in the inventory abstraction are Commercial Spent Nuclear Fuel (CSNF), DOE Spent Nuclear Fuel (DSNF), High-Level Waste (HLW), naval Spent Nuclear Fuel (SNF), and U.S. Department of Energy (DOE) plutonium waste. The intended use of this analysis is in TSPA-SR and TSPA-FEIS.
Based on the recommendations made here, models for release, transport, and possibly exposure will be developed for the isotopes that would be the highest contributors to the dose given a release

  11. SPECTRAL AMPLITUDE CODING OCDMA SYSTEMS USING ENHANCED DOUBLE WEIGHT CODE

    Directory of Open Access Journals (Sweden)

    F.N. HASOON

    2006-12-01

    Full Text Available A new code structure for spectral amplitude coding optical code division multiple access systems based on double weight (DW) code families is proposed. The DW code has a fixed weight of two. The enhanced double-weight (EDW) code is another variation of the DW code family that can have a variable weight greater than one. The EDW code possesses ideal cross-correlation properties and exists for every natural number n. A much better performance can be provided by using the EDW code compared to existing codes such as the Hadamard and Modified Frequency-Hopping (MFH) codes. Theoretical analysis and simulation show that the EDW code yields much better performance than the Hadamard and MFH codes.
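The cross-correlation property mentioned above can be verified numerically for any candidate code set. The weight-2 sequences below are illustrative DW-style codewords chosen for the check only, not the published EDW construction.

```python
def cross_correlation(a, b):
    """In-phase cross-correlation of two 0/1 spectral code sequences:
    the number of chip positions where both codes carry a pulse."""
    return sum(x & y for x, y in zip(a, b))

# Illustrative weight-2 codewords in the DW style (hypothetical set):
codes = [
    (1, 1, 0, 0),
    (0, 1, 1, 0),
    (0, 0, 1, 1),
]

# Ideal cross-correlation for such families means every pair of distinct
# codewords overlaps in at most one chip position.
max_xc = max(cross_correlation(a, b)
             for i, a in enumerate(codes) for b in codes[i + 1:])
```

Bounded pairwise overlap is what limits multiple-access interference when many users share the same spectral chips.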

  12. Sub-channel/system coupled code development and its application to SCWR-FQT loop

    International Nuclear Information System (INIS)

    Liu, X.J.; Cheng, X.

    2015-01-01

    Highlights: • A coupled code is developed for SCWR accident simulation. • The feasibility of the code is shown by application to the SCWR-FQT loop. • Some measures are selected by sensitivity analysis. • The peak cladding temperature can be reduced effectively by the proposed measures. - Abstract: In the frame of the Super-Critical Reactor In Pipe Test Preparation (SCRIPT) project in China, one of the challenging tasks is to predict the transient performance of the SuperCritical Water Reactor-Fuel Qualification Test (SCWR-FQT) loop under some accident conditions. Several thermal–hydraulic codes (a system code and a sub-channel code) are selected to perform the safety analysis. However, the system code cannot simulate the local behavior of the test bundle, and the sub-channel code is incapable of calculating the whole-system behavior of the test loop. Therefore, to combine the merits of both codes and minimize their shortcomings, a coupled sub-channel and system code system is developed in this paper. Both the sub-channel code COBRA-SC and the system code ATHLET-SC are adapted to transient analysis of the SCWR. The two codes are coupled by data transfer and data adaptation at the interface. In the newly developed coupled code, the whole-system behavior, including the safety system characteristics, is analyzed by the system code ATHLET-SC, whereas the local thermal–hydraulic parameters are predicted by the sub-channel code COBRA-SC. The codes are utilized to get the local thermal–hydraulic parameters in the SCWR-FQT fuel bundle under an accident case (e.g. a flow blockage during a LOCA). Some measures to mitigate the accident consequences are proposed from the sensitivity study and trialed to demonstrate their effectiveness in the coupled simulation. The results indicate that the newly developed code is well suited to transient analysis of the supercritical water-cooled test, and the peak cladding temperature caused by blockage in the fuel bundle can be reduced effectively by the safety measures.
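The interface data exchange described above can be sketched as a fixed-point (Picard) iteration on the coupled variables. The closures below, a quadratic pump curve on the "system" side and a quadratic loss on the "sub-channel" side, are toy stand-ins; the real ATHLET-SC/COBRA-SC exchange transfers full field solutions, and all parameter names here are hypothetical.

```python
import math

def coupled_flow(pump_head=3.0, pump_coeff=2.0, k_loss=1.0,
                 relax=0.7, tol=1e-10, max_iter=200):
    """Picard iteration on the interface flow of a split solver: the
    'system' side imposes a pump curve head = pump_head - pump_coeff*q**2,
    the 'sub-channel' side returns a bundle pressure drop k_loss*q**2,
    and the interface flow q is under-relaxed until both sides agree."""
    q = 0.5                                   # initial guess for interface flow
    for _ in range(max_iter):
        dp = k_loss * q * q                   # sub-channel feedback to system
        q_sys = math.sqrt(max(pump_head - dp, 0.0) / pump_coeff)  # system update
        if abs(q_sys - q) < tol:
            return q_sys
        q = relax * q_sys + (1 - relax) * q   # under-relaxed data exchange
    raise RuntimeError("interface iteration did not converge")
```

At convergence the two codes see a consistent interface state, here q² (pump_coeff + k_loss) = pump_head; under-relaxation is the standard way to keep such explicit exchanges stable.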

  13. Sub-channel/system coupled code development and its application to SCWR-FQT loop

    Energy Technology Data Exchange (ETDEWEB)

    Liu, X.J., E-mail: xiaojingliu@sjtu.edu.cn [School of Nuclear Science and Engineering, Shanghai Jiao Tong University, 800 Dong Chuan Road, Shanghai 200240 (China); Cheng, X. [Institute of Fusion and Reactor Technology, Karlsruhe Institute of Technology, Vincenz-Prießnitz-Str. 3, 76131 Karlsruhe (Germany)

    2015-04-15

    Highlights: • A coupled code is developed for SCWR accident simulation. • The feasibility of the code is shown by application to the SCWR-FQT loop. • Some measures are selected by sensitivity analysis. • The peak cladding temperature can be reduced effectively by the proposed measures. - Abstract: In the frame of the Super-Critical Reactor In Pipe Test Preparation (SCRIPT) project in China, one of the challenging tasks is to predict the transient performance of the SuperCritical Water Reactor-Fuel Qualification Test (SCWR-FQT) loop under some accident conditions. Several thermal–hydraulic codes (a system code and a sub-channel code) are selected to perform the safety analysis. However, the system code cannot simulate the local behavior of the test bundle, and the sub-channel code is incapable of calculating the whole-system behavior of the test loop. Therefore, to combine the merits of both codes and minimize their shortcomings, a coupled sub-channel and system code system is developed in this paper. Both the sub-channel code COBRA-SC and the system code ATHLET-SC are adapted to transient analysis of the SCWR. The two codes are coupled by data transfer and data adaptation at the interface. In the newly developed coupled code, the whole-system behavior, including the safety system characteristics, is analyzed by the system code ATHLET-SC, whereas the local thermal–hydraulic parameters are predicted by the sub-channel code COBRA-SC. The codes are utilized to get the local thermal–hydraulic parameters in the SCWR-FQT fuel bundle under an accident case (e.g. a flow blockage during a LOCA). Some measures to mitigate the accident consequences are proposed from the sensitivity study and trialed to demonstrate their effectiveness in the coupled simulation. The results indicate that the newly developed code is well suited to transient analysis of the supercritical water-cooled test, and the peak cladding temperature caused by blockage in the fuel bundle can be reduced effectively by the safety measures.

  14. Source Code Verification for Embedded Systems using Prolog

    Directory of Open Access Journals (Sweden)

    Frank Flederer

    2017-01-01

    Full Text Available System-relevant embedded software needs to be reliable and, therefore, well tested, especially for aerospace systems. A common technique to verify programs is the analysis of their abstract syntax tree (AST). Tree structures can be elegantly analyzed with the logic programming language Prolog. Moreover, Prolog offers further advantages for a thorough analysis: On the one hand, it natively provides versatile options to efficiently process tree or graph data structures. On the other hand, Prolog's non-determinism and backtracking ease testing of different variations of the program flow without much effort. A rule-based approach with Prolog allows the verification goals to be characterized in a concise and declarative way. In this paper, we describe our approach to verify the source code of a flash file system with the help of Prolog. The flash file system is written in C++ and has been developed particularly for use in satellites. We transform a given abstract syntax tree of C++ source code into Prolog facts and derive the call graph and the execution sequence (tree), which are then further tested against verification goals. The different program-flow branches due to control structures are derived by backtracking as subtrees of the full execution sequence. Finally, these subtrees are verified in Prolog. We illustrate our approach with a case study, where we search for incorrect applications of semaphores in embedded software using the real-time operating system RODOS. We rely on computation tree logic (CTL) and have designed an embedded domain-specific language (DSL) in Prolog to express the verification goals.
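The AST-to-facts idea generalizes beyond Prolog. A minimal analogue in Python, using its own ast module on Python source, extracts (caller, callee) facts and checks a semaphore-balance rule. This is a deliberately state-insensitive approximation of the paper's CTL-based goals, and the function names acquire/release are hypothetical.

```python
import ast

def call_facts(source: str):
    """Walk the AST and emit (caller, callee) facts -- the Python analogue
    of translating a syntax tree into Prolog facts for a call graph."""
    facts = set()
    for fn in ast.walk(ast.parse(source)):
        if isinstance(fn, ast.FunctionDef):
            for node in ast.walk(fn):
                if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                    facts.add((fn.name, node.func.id))
    return facts

def unbalanced_semaphore(facts):
    """Verification goal in the spirit of the case study: flag functions
    that acquire a semaphore but never release it."""
    return {caller for caller, callee in facts
            if callee == "acquire" and (caller, "release") not in facts}

src = """
def good():
    acquire()
    work()
    release()

def bad():
    acquire()
    work()
"""
```

A rule like unbalanced_semaphore corresponds to one declarative Prolog clause over the fact base; the paper's approach additionally tracks execution order, which a pure call-graph check cannot.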

  15. Knowledge acquisition for temporal abstraction.

    Science.gov (United States)

    Stein, A; Musen, M A; Shahar, Y

    1996-01-01

    Temporal abstraction is the task of detecting relevant patterns in data over time. The knowledge-based temporal-abstraction method uses knowledge about a clinical domain's contexts, external events, and parameters to create meaningful interval-based abstractions from raw time-stamped clinical data. In this paper, we describe the acquisition and maintenance of domain-specific temporal-abstraction knowledge. Using the PROTEGE-II framework, we have designed a graphical tool for acquiring temporal knowledge directly from expert physicians, maintaining the knowledge in a sharable form, and converting the knowledge into a suitable format for use by an appropriate problem-solving method. In initial tests, the tool offered significant gains in our ability to rapidly acquire temporal knowledge and to use that knowledge to perform automated temporal reasoning.
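The core step of knowledge-based temporal abstraction — turning raw time-stamped data into meaningful interval-based abstractions — can be sketched as follows. This is an illustrative toy, not the PROTEGE-II tool; the glucose readings, thresholds, and state labels are invented.

```python
def abstract_intervals(samples, classify):
    """samples: list of (timestamp, value); classify: value -> state label.
    Returns a list of (start, end, state) intervals, coalescing consecutive
    samples that map to the same abstract state."""
    intervals = []
    for t, v in samples:
        state = classify(v)
        if intervals and intervals[-1][2] == state:
            intervals[-1] = (intervals[-1][0], t, state)  # extend current interval
        else:
            intervals.append((t, t, state))
    return intervals

# Hypothetical glucose readings classified into LOW/NORMAL/HIGH states.
readings = [(0, 82), (1, 90), (2, 145), (3, 150), (4, 88)]
label = lambda g: "LOW" if g < 70 else "HIGH" if g > 120 else "NORMAL"
print(abstract_intervals(readings, label))
# → [(0, 1, 'NORMAL'), (2, 3, 'HIGH'), (4, 4, 'NORMAL')]
```

The domain knowledge the paper acquires from physicians would live in `classify` and in rules for merging intervals across gaps and contexts, which this sketch omits.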

  16. Easy web interfaces to IDL code for NSTX Data Analysis

    International Nuclear Information System (INIS)

    Davis, W.M.

    2012-01-01

    Highlights: ► Web interfaces to IDL code can be developed quickly. ► Dozens of Web Tools are used effectively on NSTX for Data Analysis. ► Web interfaces are easier to use than X-window applications. - Abstract: Reusing code is a well-known Software Engineering practice to substantially increase the efficiency of code production, as well as to reduce errors and debugging time. A variety of “Web Tools” for the analysis and display of raw and analyzed physics data are in use on NSTX [1], and new ones can be produced quickly from existing IDL [2] code. A Web Tool with only a few inputs, and which calls an IDL routine written in the proper style, can be created in less than an hour; more typical Web Tools with dozens of inputs, and the need for some adaptation of existing IDL code, can be working in a day or so. Efficiency is also increased for users of Web Tools because of the familiar web-browser interface, and because X-windows, accounts, and passwords are not needed when used within our firewall. Web Tools were adapted for use by PPPL physicists accessing EAST data stored in MDSplus with only a few man-weeks of effort; adapting to additional sites should now be even easier. An overview of Web Tools in use on NSTX, and a list of the most useful features, is also presented.

  17. Imagining the truth and the moon: an electrophysiological study of abstract and concrete word processing.

    Science.gov (United States)

    Gullick, Margaret M; Mitra, Priya; Coch, Donna

    2013-05-01

    Previous event-related potential studies have indicated that both a widespread N400 and an anterior N700 index differential processing of concrete and abstract words, but the nature of these components in relation to concreteness and imagery has been unclear. Here, we separated the effects of word concreteness and task demands on the N400 and N700 in a single word processing paradigm with a within-subjects, between-tasks design and carefully controlled word stimuli. The N400 was larger to concrete words than to abstract words, and larger in the visualization task condition than in the surface task condition, with no interaction. A marked anterior N700 was elicited only by concrete words in the visualization task condition, suggesting that this component indexes imagery. These findings are consistent with a revised or extended dual coding theory according to which concrete words benefit from greater activation in both verbal and imagistic systems. Copyright © 2013 Society for Psychophysiological Research.

  18. An approach for coupled-code multiphysics core simulations from a common input

    International Nuclear Information System (INIS)

    Schmidt, Rodney; Belcourt, Kenneth; Hooper, Russell; Pawlowski, Roger; Clarno, Kevin; Simunovic, Srdjan; Slattery, Stuart; Turner, John; Palmtag, Scott

    2015-01-01

    Highlights: • We describe an approach for coupled-code multiphysics reactor core simulations. • The approach can enable tight coupling of distinct physics codes with a common input. • Multi-code multiphysics coupling and parallel data transfer issues are explained. • The common input approach and how the information is processed is described. • Capabilities are demonstrated on an eigenvalue and power distribution calculation. - Abstract: This paper describes an approach for coupled-code multiphysics reactor core simulations that is being developed by the Virtual Environment for Reactor Applications (VERA) project in the Consortium for Advanced Simulation of Light-Water Reactors (CASL). In this approach a user creates a single problem description, called the “VERAIn” common input file, to define and setup the desired coupled-code reactor core simulation. A preprocessing step accepts the VERAIn file and generates a set of fully consistent input files for the different physics codes being coupled. The problem is then solved using a single-executable coupled-code simulation tool applicable to the problem, which is built using VERA infrastructure software tools and the set of physics codes required for the problem of interest. The approach is demonstrated by performing an eigenvalue and power distribution calculation of a typical three-dimensional 17 × 17 assembly with thermal–hydraulic and fuel temperature feedback. All neutronics aspects of the problem (cross-section calculation, neutron transport, power release) are solved using the Insilico code suite and are fully coupled to a thermal–hydraulic analysis calculated by the Cobra-TF (CTF) code. The single-executable coupled-code (Insilico-CTF) simulation tool is created using several VERA tools, including LIME (Lightweight Integrating Multiphysics Environment for coupling codes), DTK (Data Transfer Kit), Trilinos, and TriBITS. Parallel calculations are performed on the Titan supercomputer at Oak
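The common-input idea can be illustrated schematically: one problem description is preprocessed into fully consistent per-code input files, so the coupled codes can never disagree about shared parameters. The dictionary keys, file names, and output formats below are hypothetical placeholders; VERAIn's actual format is not reproduced here.

```python
# A single, hypothetical "common input" problem description.
COMMON = {"power_MW": 17.7, "inlet_temp_K": 565.0, "lattice": "17x17"}

def neutronics_input(c):
    # Writer for the (hypothetical) neutronics-code input format.
    return f"lattice {c['lattice']}\npower {c['power_MW']} MW\n"

def thermal_hydraulics_input(c):
    # Writer for the (hypothetical) thermal-hydraulics input format.
    return f"inlet_temperature {c['inlet_temp_K']} K\npower {c['power_MW']} MW\n"

# The preprocessing step: one description, one consistent file per physics code.
inputs = {"insilico.inp": neutronics_input(COMMON),
          "ctf.inp": thermal_hydraulics_input(COMMON)}
```

Because both writers read the same `COMMON` dictionary, a shared quantity such as the core power appears identically in every generated file by construction.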

  19. Abstract Interpretation and Attribute Grammars

    DEFF Research Database (Denmark)

    Rosendahl, Mads

    The objective of this thesis is to explore the connections between abstract interpretation and attribute grammars as frameworks in program analysis. Abstract interpretation is a semantics-based program analysis method. A large class of data flow analysis problems can be expressed as non-standard ...... is presented in the thesis. Methods from abstract interpretation can also be used in correctness proofs of attribute grammars. This proof technique introduces a new class of attribute grammars based on domain theory. This method is illustrated with examples....

  20. Is the Abstract a Mere Teaser? Evaluating Generosity of Article Abstracts in the Environmental Sciences

    Directory of Open Access Journals (Sweden)

    Liana Ermakova

    2018-05-01

    Full Text Available An abstract is not only a mirror of the full article; it also aims to draw attention to the most important information of the document it summarizes. Many studies have compared abstracts with full texts for their informativeness. In contrast to previous studies, we propose to investigate this relation based not only on the amount of information given by the abstract but also on its importance. The main objective of this paper is to introduce a new metric called GEM to measure the “generosity” or representativeness of an abstract. Schematically speaking, a generous abstract should have the best possible score of similarity for the sections important to the reader. Based on a questionnaire gathering information from 630 researchers, we were able to weight sections according to their importance. In our approach, seven sections were first automatically detected in the full text. The accuracy of this classification into sections was above 80% compared with a dataset of documents where sentences were assigned to sections by experts. Second, each section was weighted according to the questionnaire results. The GEM score was then calculated as the sum of weights of sections in the full text corresponding to sentences in the abstract, normalized by the total sum of weights of sections in the full text. The correlation between the GEM score and the mean of the scores assigned by annotators was higher than the correlation between scores from different experts. As a case study, the GEM score was calculated for 36,237 articles in environmental sciences (1930–2013) retrieved from the French ISTEX database. The main result was that the GEM score has increased over time. Moreover, this trend depends on subject area and publisher. No correlation was found between GEM score and citation rate or open access status of articles. We conclude that abstracts are more generous in recent publications and cannot be considered as mere teasers. This research should be pursued
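As described, the GEM score is a weighted coverage ratio: the summed weights of the full-text sections whose content the abstract's sentences cover, normalized by the total section weights in the full text. A minimal sketch of that formula, with invented section names and questionnaire weights:

```python
def gem_score(section_weights, covered_sections):
    """section_weights: {section: weight} for sections detected in the full text.
    covered_sections: sections whose sentences have a counterpart in the abstract."""
    total = sum(section_weights.values())
    covered = sum(w for s, w in section_weights.items() if s in covered_sections)
    return covered / total if total else 0.0

# Hypothetical weights (the paper derives real ones from a 630-researcher survey).
weights = {"introduction": 1, "methods": 2, "results": 3, "conclusion": 2}
print(gem_score(weights, {"results", "conclusion"}))  # → 0.625
```

The paper's pipeline additionally classifies sentences into sections automatically and matches abstract sentences against the full text; only the final normalization step is shown here.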

  1. The Nuremberg Code subverts human health and safety by requiring animal modeling

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2012-07-01

    Full Text Available Background: The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion: We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. Summary: We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented.

  2. DLLExternalCode

    Energy Technology Data Exchange (ETDEWEB)

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
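The linking pattern the DLL implements — write the inputs to a file, run the external application, read its outputs back — looks roughly like this in Python. The file formats and the executable name are placeholders, not GoldSim's or DLLExternalCode's actual conventions.

```python
import os
import subprocess
import tempfile

def write_input(path, inputs):
    """Write one input value per line (a hypothetical input format)."""
    with open(path, "w") as f:
        f.write("\n".join(str(x) for x in inputs))

def read_output(path):
    """Read one float per line back from the external code's output file."""
    with open(path) as f:
        return [float(line) for line in f if line.strip()]

def run_external(exe, inputs):
    """Write inputs, run the external code, and return its outputs."""
    with tempfile.TemporaryDirectory() as d:
        inp, out = os.path.join(d, "in.txt"), os.path.join(d, "out.txt")
        write_input(inp, inputs)
        subprocess.run([exe, inp, out], check=True)  # external app writes out.txt
        return read_output(out)
```

The instructions file the abstract mentions would parameterize `write_input`, the command line, and `read_output`, so the same wrapper can drive different external codes.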

  3. High efficiency video coding coding tools and specification

    CERN Document Server

    Wien, Mathias

    2015-01-01

    The video coding standard High Efficiency Video Coding (HEVC) targets improved compression performance for video resolutions of HD and beyond, providing Ultra HD video at similar compressed bit rates as HD video encoded with the well-established video coding standard H.264 | AVC. Based on known concepts, new coding structures and improved coding tools have been developed and specified in HEVC. The standard is expected to be taken up easily by established industry as well as new endeavors, answering the needs of today's connected and ever-evolving online world. This book presents the High Efficiency Video Coding standard and explains it in a clear and coherent language. It provides a comprehensive and consistently written description, all of a piece. The book targets both newcomers to video coding and experts in the field. While providing sections with introductory text for the beginner, it suits as a well-arranged reference book for the expert. The book provides a comprehensive reference for th...

  4. Efficient abstraction selection in reinforcement learning

    NARCIS (Netherlands)

    Seijen, H. van; Whiteson, S.; Kester, L.

    2013-01-01

    This paper introduces a novel approach for abstraction selection in reinforcement learning problems modelled as factored Markov decision processes (MDPs), for which a state is described via a set of state components. In abstraction selection, an agent must choose an abstraction from a set of

  5. Indico CONFERENCE: Define the Call for Abstracts

    CERN Multimedia

    CERN. Geneva; Ferreira, Pedro

    2017-01-01

    In this tutorial, you will learn how to define and open a call for abstracts. When defining a call for abstracts, you will be able to define settings related to the type of questions asked during a review of an abstract, select the users who will review the abstracts, decide when to open the call for abstracts, and more.

  6. Converter of a continuous code into the Grey code

    International Nuclear Information System (INIS)

    Gonchar, A.I.; Trubnikov, V.R.

    1979-01-01

    Described is a converter of a continuous code into the Grey code, used in a 12-bit precision amplitude-to-digital converter to decrease the digital component of spectrometer differential nonlinearity to +0.7% over 98% of the measured range. The conversion of the continuous code corresponding to the input signal amplitude into the Grey code exploits the regularity with which ones and zeroes recycle in each digit of the Grey code as the number of pulses of the continuous code changes continuously. The converter is built from 155-series elements; the pulse rate of the continuous code at the converter input is 25 MHz
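The hardware scheme exploits the defining property of the Grey (Gray) code: successive code words differ in exactly one bit. In software the same mapping is the standard `n ^ (n >> 1)` construction, sketched here for comparison (this is the general binary-to-Gray conversion, not a model of the 155-series circuit above):

```python
def to_gray(n):
    """Binary to Gray: each Gray bit is the XOR of adjacent binary bits."""
    return n ^ (n >> 1)

def from_gray(g):
    """Gray back to binary by cumulative XOR from the top bit down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

print([to_gray(i) for i in range(8)])  # → [0, 1, 3, 2, 6, 7, 5, 4]
```

Because only one bit changes per count, a transient mis-read during a transition is off by at most one count, which is why Gray coding suppresses the differential nonlinearity contributed by the digital stage.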

  7. Applications of Derandomization Theory in Coding

    Science.gov (United States)

    Cheraghchi, Mahdi

    2011-07-01

    Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.]

  8. Introduction to abstract algebra, solutions manual

    CERN Document Server

    Nicholson, W Keith

    2012-01-01

    Praise for the Third Edition ". . . an expository masterpiece of the highest didactic value that has gained additional attractivity through the various improvements . . ."-Zentralblatt MATH The Fourth Edition of Introduction to Abstract Algebra continues to provide an accessible approach to the basic structures of abstract algebra: groups, rings, and fields. The book's unique presentation helps readers advance to abstract theory by presenting concrete examples of induction, number theory, integers modulo n, and permutations before the abstract structures are defined. Readers can immediately be

  9. Development of burnup methods and capabilities in Monte Carlo code RMC

    International Nuclear Information System (INIS)

    She, Ding; Liu, Yuxuan; Wang, Kan; Yu, Ganglin; Forget, Benoit; Romano, Paul K.; Smith, Kord

    2013-01-01

    Highlights: ► The RMC code has been developed aiming at large-scale burnup calculations. ► Matrix exponential methods are employed to solve the depletion equations. ► The Energy-Bin method reduces the time expense of treating ACE libraries. ► The Cell-Mapping method is efficient to handle massive amounts of tally cells. ► Parallelized depletion is necessary for massive amounts of burnup regions. -- Abstract: The Monte Carlo burnup calculation has always been a challenging problem because of its large time consumption when applied to full-scale assembly or core calculations, and thus its application in routine analysis is limited. Most existing MC burnup codes are usually external wrappers between a MC code, e.g. MCNP, and a depletion code, e.g. ORIGEN. The code RMC is a newly developed MC code with an embedded depletion module aimed at performing burnup calculations of large-scale problems with high efficiency. Several measures have been taken to strengthen the burnup capabilities of RMC. Firstly, an accurate and efficient depletion module called DEPTH has been developed and built in, which employs the rational approximation and polynomial approximation methods. Secondly, the Energy-Bin method and the Cell-Mapping method are implemented to speed up the transport calculations with large numbers of nuclides and tally cells. Thirdly, the batch tally method and the parallelized depletion module have been utilized to better handle cases with massive amounts of burnup regions in parallel calculations. Burnup cases including a PWR pin and a 5 × 5 assembly group are calculated, thereby demonstrating the burnup capabilities of the RMC code. In addition, the computational time and memory requirements of RMC are compared with other MC burnup codes.
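The matrix-exponential approach named in the highlights can be illustrated on a toy two-nuclide decay chain (this is not RMC's DEPTH module; the decay constants are invented). The depletion equations dN/dt = M·N are solved as N(t) = exp(Mt)·N(0), here via a plain Taylor series, which suffices for this small, well-conditioned example; DEPTH's rational and polynomial approximations exist precisely because real burnup matrices are large and stiff.

```python
def expm(M, t, terms=30):
    """exp(M*t) for a small square matrix M, by truncated Taylor series."""
    n = len(M)
    A = [[M[i][j] * t for j in range(n)] for i in range(n)]
    R = [[float(i == j) for j in range(n)] for i in range(n)]  # running sum, starts at I
    T = [[float(i == j) for j in range(n)] for i in range(n)]  # current term A^k/k!
    for k in range(1, terms):
        T = [[sum(T[i][m] * A[m][j] for m in range(n)) / k for j in range(n)]
             for i in range(n)]
        R = [[R[i][j] + T[i][j] for j in range(n)] for i in range(n)]
    return R

la, lb = 0.5, 0.1              # hypothetical decay constants for chain A -> B
M = [[-la, 0.0], [la, -lb]]    # Bateman matrix: A decays, feeding B
N0 = [1.0, 0.0]                # start with pure nuclide A
R = expm(M, 2.0)
Nt = [sum(R[i][j] * N0[j] for j in range(2)) for i in range(2)]
```

For this chain the result can be checked against the analytic Bateman solution, N_A(t) = e^(-la·t) and N_B(t) = la/(lb-la)·(e^(-la·t) - e^(-lb·t)).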

  10. Analysis of a small PWR core with the PARCS/Helios and PARCS/Serpent code systems

    International Nuclear Information System (INIS)

    Baiocco, G.; Petruzzi, A.; Bznuni, S.; Kozlowski, T.

    2017-01-01

    Highlights: • The consistency between Helios and Serpent few-group cross sections is shown. • The PARCS model is validated against a Monte Carlo 3D model. • The fission and capture rates are compared. • The influence of the spacer grids on the axial power distribution is shown. - Abstract: Lattice physics codes are primarily used to generate cross-section data for nodal codes. In this work the methodology of homogenized constant generation was applied to a small Pressurized Water Reactor (PWR) core, using the deterministic code Helios and the Monte Carlo code Serpent. Subsequently, a 3D analysis of the PWR core was performed with the nodal diffusion code PARCS using the two-group cross section data sets generated by Helios and Serpent. Moreover, a full 3D model of the PWR core was developed using Serpent in order to obtain a reference solution. Several parameters, such as k-eff, axial and radial power, and fission and capture rates, were compared and found to be in good agreement.

  11. Abstracts of contributed papers

    Energy Technology Data Exchange (ETDEWEB)

    1994-08-01

    This volume contains 571 abstracts of contributed papers to be presented during the Twelfth US National Congress of Applied Mechanics. Abstracts are arranged in the order in which they fall in the program -- the main sessions are listed chronologically in the Table of Contents. The Author Index is in alphabetical order and lists each paper number (matching the schedule in the Final Program) with its corresponding page number in the book.

  12. Feasibility analysis of the modified ATHLET code for supercritical water cooled systems

    Energy Technology Data Exchange (ETDEWEB)

    Zhou Chong, E-mail: ch.zhou@sjtu.edu.cn [School of Nuclear Science and Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240 (China); Institute of Fusion and Reactor Technology, Karlsruhe Institute of Technology, Vincenz-Priessnitz-Str. 3, 76131 Karlsruhe (Germany); Yang Yanhua [School of Nuclear Science and Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240 (China); Cheng Xu [Institute of Fusion and Reactor Technology, Karlsruhe Institute of Technology, Vincenz-Priessnitz-Str. 3, 76131 Karlsruhe (Germany)

    2012-09-15

    Highlights: ► Modification of system code ATHLET for supercritical water application. ► Development and assessment of a heat transfer package for supercritical water. ► Validation of the modified code at supercritical pressures with the theoretical point-hydraulics model and the SASC code. ► Application of the modified code to LOCA analysis of a supercritical water cooled in-pile fuel qualification test loop. - Abstract: Since the existing thermal-hydraulic computer codes for light water reactors are not applicable to supercritical water cooled reactors (SCWRs) owing to the limitation of physical models and numerical treatments, the development of a reliable thermal-hydraulic computer code is very important to design analysis and safety assessment of SCWRs. Based on earlier modification of ATHLET for SCWR, a general interface is implemented in the code, which serves as the platform for information exchange between ATHLET and the external independent physical modules. A heat transfer package containing five correlations for supercritical water is connected to the ATHLET code through the interface. The correlations are assessed with experimental data. To verify the modified ATHLET code, the Edwards-O'Brien blowdown test is simulated. As a first validation at supercritical pressures, a simplified supercritical water cooled loop is modeled and its stability behavior is analyzed. Results are compared with those of the theoretical model and the SASC code in the reference and show good agreement. To evaluate its feasibility, the modified ATHLET code is applied to a supercritical water cooled in-pile fuel qualification test loop. Loss of coolant accidents (LOCAs) due to break of coolant supply lines are calculated for the loop. Sensitivity analysis of some safety system parameters is performed to get further knowledge about their influence on the function of the

  13. Advance Organizers: Concrete Versus Abstract.

    Science.gov (United States)

    Corkill, Alice J.; And Others

    1988-01-01

    Two experiments examined the relative effects of concrete and abstract advance organizers on students' memory for subsequent prose. Results of the experiments are discussed in terms of the memorability, familiarity, and visualizability of concrete and abstract verbal materials. (JD)

  14. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  15. Entanglement-assisted quantum MDS codes constructed from negacyclic codes

    Science.gov (United States)

    Chen, Jianzhang; Huang, Yuanyuan; Feng, Chunhui; Chen, Riqing

    2017-12-01

    Recently, entanglement-assisted quantum codes have been constructed from cyclic codes by some scholars. However, how to determine the number of shared pairs required to construct entanglement-assisted quantum codes is not an easy work. In this paper, we propose a decomposition of the defining set of negacyclic codes. Based on this method, four families of entanglement-assisted quantum codes constructed in this paper satisfy the entanglement-assisted quantum Singleton bound, where the minimum distance satisfies q+1 ≤ d ≤ (n+2)/2. Furthermore, we construct two families of entanglement-assisted quantum codes with maximal entanglement.

  16. Computer code ENDSAM for random sampling and validation of the resonance parameters covariance matrices of some major nuclear data libraries

    International Nuclear Information System (INIS)

    Plevnik, Lucijan; Žerovnik, Gašper

    2016-01-01

    Highlights: • Methods for random sampling of correlated parameters. • Link to open-source code for sampling of resonance parameters in ENDF-6 format. • Validation of the code on realistic and artificial data. • Validation of covariances in three major contemporary nuclear data libraries. - Abstract: Methods for random sampling of correlated parameters are presented. The methods are implemented for sampling of resonance parameters in ENDF-6 format and a link to the open-source code ENDSAM is given. The code has been validated on realistic data. Additionally, consistency of covariances of resonance parameters of three major contemporary nuclear data libraries (JEFF-3.2, ENDF/B-VII.1 and JENDL-4.0u2) has been checked.
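Whatever the ENDF-6 bookkeeping around it, the core of correlated random sampling is to draw independent standard normals and correlate them through the Cholesky factor of the covariance matrix, x = μ + L·z with C = L·Lᵀ. A minimal sketch with an invented 2×2 covariance (ENDSAM's actual sampling options are not reproduced here):

```python
import math
import random

def cholesky(C):
    """Lower-triangular L with L @ L.T == C (Cholesky-Banachiewicz)."""
    n = len(C)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(C[i][i] - s) if i == j else (C[i][j] - s) / L[j][j]
    return L

def sample(mu, C, rng=random):
    """One correlated sample x = mu + L z, z ~ independent N(0, 1)."""
    L = cholesky(C)
    z = [rng.gauss(0, 1) for _ in mu]
    return [mu[i] + sum(L[i][k] * z[k] for k in range(len(mu))) for i in range(len(mu))]
```

Averaging the outer products of many such samples recovers C, which is essentially the validation check the abstract describes applying to the library covariances.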

  17. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
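The structure described — an inner code concatenated behind an interleaved outer code — can be sketched with toy components: a block interleaver plus a majority-vote repetition code standing in for the modulation block codes and the RS(255,223) outer code under study (neither real code is implemented here). The interleaver spreads a burst of channel errors across inner-code words so that each word sees at most one flip:

```python
def interleave(data, depth):
    """Block interleaver: write row-wise into len(data)//depth rows, read column-wise."""
    rows = len(data) // depth
    return [data[r * depth + c] for c in range(depth) for r in range(rows)]

def deinterleave(data, depth):
    rows = len(data) // depth
    return [data[c * rows + r] for r in range(rows) for c in range(depth)]

def inner_encode(bits, k=3):
    """Toy inner code: k-fold repetition."""
    return [b for b in bits for _ in range(k)]

def inner_decode(bits, k=3):
    """Majority vote over each k-bit word."""
    return [int(sum(bits[i:i + k]) > k // 2) for i in range(0, len(bits), k)]

msg = [1, 0, 1, 1, 0, 0]
tx = interleave(inner_encode(msg), depth=6)
tx[3:6] = [1 - b for b in tx[3:6]]          # 3-bit burst error on the channel
rx = inner_decode(deinterleave(tx, depth=6))  # rx == msg: burst fully corrected
```

After de-interleaving, the three-bit burst lands as one flipped bit in each of three different repetition words, so the majority vote recovers every message bit; the same spreading effect is what lets the interleaved RS outer code mop up inner-decoder error bursts in the concatenated system.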

  18. Verification of spectral burn-up codes on 2D fuel assemblies of the GFR demonstrator ALLEGRO reactor

    International Nuclear Information System (INIS)

    Čerba, Štefan; Vrban, Branislav; Lüley, Jakub; Dařílek, Petr; Zajac, Radoslav; Nečas, Vladimír; Haščik, Ján

    2014-01-01

    Highlights: • Verification of the MCNPX, HELIOS and SCALE codes. • MOX and ceramic fuel assembly. • Gas-cooled fast reactor. • Burnup calculation. - Abstract: The gas-cooled fast reactor, which is one of the six GEN IV reactor concepts, is characterized by high operational temperatures and a hard neutron spectrum. The utilization of commonly used spectral codes, developed mainly for LWR reactors operated in the thermal/epithermal neutron spectrum, may be connected with systematic deviations since the main development effort of these codes has been focused on the thermal part of the neutron spectrum. To be able to carry out proper calculations for fast systems the used codes have to account for neutron resonances including the self-shielding effect. The presented study aims at verifying the spectral HELIOS, MCNPX and SCALE codes on the basis of depletion calculations of 2D MOX and ceramic fuel assemblies of the ALLEGRO gas-cooled fast reactor demonstrator in infinite lattice

  19. Turbo-Gallager Codes: The Emergence of an Intelligent Coding ...

    African Journals Online (AJOL)

    Today, both turbo codes and low-density parity-check codes are largely superior to other code families and are being used in an increasing number of modern communication systems including 3G standards, satellite and deep space communications. However, the two codes have certain distinctive characteristics that ...

  20. Self-shielding models of MICROX-2 code: Review and updates

    International Nuclear Information System (INIS)

    Hou, J.; Choi, H.; Ivanov, K.N.

    2014-01-01

    Highlights: • The MICROX-2 code has been improved to expand its application to advanced reactors. • New fine-group cross section libraries based on ENDF/B-VII have been generated. • Resonance self-shielding and spatial self-shielding models have been improved. • The improvements were assessed by a series of benchmark calculations against MCNPX. - Abstract: The MICROX-2 is a transport theory code that solves for the neutron slowing-down and thermalization equations of a two-region lattice cell. The MICROX-2 code has been updated to expand its application to advanced reactor concepts and fuel cycle simulations, including generation of new fine-group cross section libraries based on ENDF/B-VII. In continuation of previous work, the MICROX-2 methods are reviewed and updated in this study, focusing on its resonance self-shielding and spatial self-shielding models for neutron spectrum calculations. The improvement of self-shielding method was assessed by a series of benchmark calculations against the Monte Carlo code, using homogeneous and heterogeneous pin cell models. The results have shown that the implementation of the updated self-shielding models is correct and the accuracy of physics calculation is improved. Compared to the existing models, the updates reduced the prediction error of the infinite multiplication factor by ∼0.1% and ∼0.2% for the homogeneous and heterogeneous pin cell models, respectively, considered in this study

  1. Dual coding: a cognitive model for psychoanalytic research.

    Science.gov (United States)

    Bucci, W

    1985-01-01

    Four theories of mental representation derived from current experimental work in cognitive psychology have been discussed in relation to psychoanalytic theory. These are: verbal mediation theory, in which language determines or mediates thought; perceptual dominance theory, in which imagistic structures are dominant; common code or propositional models, in which all information, perceptual or linguistic, is represented in an abstract, amodal code; and dual coding, in which nonverbal and verbal information are each encoded, in symbolic form, in separate systems specialized for such representation, and connected by a complex system of referential relations. The weight of current empirical evidence supports the dual code theory. However, psychoanalysis has implicitly accepted a mixed model-perceptual dominance theory applying to unconscious representation, and verbal mediation characterizing mature conscious waking thought. The characterization of psychoanalysis, by Schafer, Spence, and others, as a domain in which reality is constructed rather than discovered, reflects the application of this incomplete mixed model. The representations of experience in the patient's mind are seen as without structure of their own, needing to be organized by words, thus vulnerable to distortion or dissolution by the language of the analyst or the patient himself. In these terms, hypothesis testing becomes a meaningless pursuit; the propositions of the theory are no longer falsifiable; the analyst is always more or less "right." This paper suggests that the integrated dual code formulation provides a more coherent theoretical framework for psychoanalysis than the mixed model, with important implications for theory and technique. In terms of dual coding, the problem is not that the nonverbal representations are vulnerable to distortion by words, but that the words that pass back and forth between analyst and patient will not affect the nonverbal schemata at all. Using the dual code

  2. TASS code topical report. V.1 TASS code technical manual

    International Nuclear Information System (INIS)

    Sim, Suk K.; Chang, W. P.; Kim, K. D.; Kim, H. C.; Yoon, H. Y.

    1997-02-01

    TASS 1.0 code has been developed at KAERI for the initial and reload non-LOCA safety analysis for the operating PWRs as well as the PWRs under construction in Korea. TASS code will replace various vendors' non-LOCA safety analysis codes currently used for the Westinghouse and ABB-CE type PWRs in Korea. This can be achieved through TASS code input modifications specific to each reactor type. The TASS code can be run interactively through the keyboard operation. A semi-modular configuration used in developing the TASS code enables the user to easily implement new models. TASS code has been programmed in FORTRAN77, which makes it easy to install and port for different computer environments. The TASS code can be utilized for the steady state simulation as well as the non-LOCA transient simulations such as power excursions, reactor coolant pump trips, load rejections, loss of feedwater, steam line breaks, steam generator tube ruptures, rod withdrawal and drop, and anticipated transients without scram (ATWS). The malfunctions of the control systems, components, operator actions and the transients caused by the malfunctions can be easily simulated using the TASS code. This technical report describes the TASS 1.0 code models including the reactor thermal hydraulic, reactor core and control models. This TASS code technical manual has been prepared as a part of the TASS code manual, which includes the TASS code user's manual and TASS code validation report, and will be submitted to the regulatory body as a TASS code topical report for licensing non-LOCA safety analysis for the Westinghouse and ABB-CE type PWRs operating and under construction in Korea. (author). 42 refs., 29 tabs., 32 figs

  3. Automata Learning through Counterexample Guided Abstraction Refinement

    DEFF Research Database (Denmark)

    Aarts, Fides; Heidarian, Faranak; Kuppens, Harco

    2012-01-01

    Abstraction is the key when learning behavioral models of realistic systems. Hence, in most practical applications where automata learning is used to construct models of software components, researchers manually define abstractions which, depending on the history, map a large set of concrete events to a small set of abstract events that can be handled by automata learning tools. In this article, we show how such abstractions can be constructed fully automatically for a restricted class of extended finite state machines in which one can test for equality of data parameters, but no operations on data are allowed. Our approach uses counterexample-guided abstraction refinement: whenever the current abstraction is too coarse and induces nondeterministic behavior, the abstraction is refined automatically. Using Tomte, a prototype tool implementing our algorithm, we have succeeded to learn – fully...

  4. Analysis of complex networks using aggressive abstraction.

    Energy Technology Data Exchange (ETDEWEB)

    Colbaugh, Richard; Glass, Kristin.; Willard, Gerald

    2008-10-01

    This paper presents a new methodology for analyzing complex networks in which the network of interest is first abstracted to a much simpler (but equivalent) representation, the required analysis is performed using the abstraction, and analytic conclusions are then mapped back to the original network and interpreted there. We begin by identifying a broad and important class of complex networks which admit abstractions that are simultaneously dramatically simplifying and property preserving -- we call these aggressive abstractions -- and which can therefore be analyzed using the proposed approach. We then introduce and develop two forms of aggressive abstraction: 1.) finite state abstraction, in which dynamical networks with uncountable state spaces are modeled using finite state systems, and 2.) one-dimensional abstraction, whereby high dimensional network dynamics are captured in a meaningful way using a single scalar variable. In each case, the property preserving nature of the abstraction process is rigorously established and efficient algorithms are presented for computing the abstraction. The considerable potential of the proposed approach to complex networks analysis is illustrated through case studies involving vulnerability analysis of technological networks and predictive analysis for social processes.

  5. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Thommesen, Christian

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.
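The interleaving idea behind such schemes can be sketched in a few lines. This is a toy illustration only, with plain integers standing in for symbols of an (N, K) Reed-Solomon codeword and no actual RS decoding performed: transmitting the rows column by column means a channel burst corrupts the same positions of every codeword, which is the structure a collaborative decoder for interleaved codes exploits.

```python
# Toy block interleaving of L codewords (not the Bleichenbacher decoder itself).
# In an actual interleaved RS scheme each row would be an (N, K) codeword
# over GF(2^m); here the symbols are plain integers.

def interleave(codewords):
    """Transmit column by column so a burst hits the same position of every row."""
    return [row[i] for i in range(len(codewords[0])) for row in codewords]

def deinterleave(stream, num_rows):
    n = len(stream) // num_rows
    return [[stream[i * num_rows + r] for i in range(n)] for r in range(num_rows)]

cws = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
sent = interleave(cws)
assert deinterleave(sent, 3) == cws
```

A burst of three consecutive symbol errors in `sent` thus damages one position in each of the three codewords rather than three positions in one.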

  6. Letter position coding across modalities: the case of Braille readers.

    Directory of Open Access Journals (Sweden)

    Manuel Perea

    Full Text Available The question of how the brain encodes letter position in written words has attracted increasing attention in recent years. A number of models have recently been proposed to accommodate the fact that transposed-letter stimuli like jugde or caniso are perceptually very close to their base words. Here we examined how letter position coding is attained in the tactile modality via Braille reading. The idea is that Braille word recognition may involve more serial processing than the visual modality, and this may produce differences in the input coding schemes employed to encode letters in written words. To that end, we conducted a lexical decision experiment with adult Braille readers in which the pseudowords were created by transposing/replacing two letters. We found a word-frequency effect for words. In addition, unlike parallel experiments in the visual modality, we failed to find any clear signs of transposed-letter confusability effects. This dissociation highlights the differences between modalities. The present data argue against models of letter position coding that assume that transposed-letter effects (in the visual modality) occur at a relatively late, abstract locus.

  7. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    OpenAIRE

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree brings extremely high computational complexity. Innovative work on improving the coding tree to further reduce encoding time is presented in this paper. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content ...
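As a rough sketch of what a fast CU decision does, the toy below quadtree-splits a block while a cheap content measure exceeds a threshold. Sample variance is an illustrative stand-in (real HEVC encoders minimize rate-distortion cost), and the sizes and threshold are assumptions, not values from the paper.

```python
# Toy stand-in for a fast CU-splitting heuristic: recursively quadtree-split a
# block while its sample variance exceeds a threshold, stopping at a minimum
# CU size. Threshold and sizes are illustrative; HEVC's real decision is RD cost.

def split_cu(block, x, y, size, min_size=8, thresh=100.0):
    vals = [block[y + j][x + i] for j in range(size) for i in range(size)]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    if size <= min_size or var <= thresh:
        return [(x, y, size)]                    # leaf CU
    h = size // 2
    return [cu for (dx, dy) in ((0, 0), (h, 0), (0, h), (h, h))
            for cu in split_cu(block, x + dx, y + dy, h, min_size, thresh)]

flat = [[10] * 16 for _ in range(16)]            # uniform content: no split
assert split_cu(flat, 0, 0, 16) == [(0, 0, 16)]
```

A textured block, by contrast, keeps splitting until the minimum CU size, which is exactly the behavior fast-CU mechanisms try to predict early.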

  8. Abstraction by Set-Membership

    DEFF Research Database (Denmark)

    Mödersheim, Sebastian Alexander

    2010-01-01

    The abstraction and over-approximation of protocols and web services by a set of Horn clauses is a very successful method in practice. It has however limitations for protocols and web services that are based on databases of keys, contracts, or even access rights, where revocation is possible, so that the set of true facts does not monotonically grow with the transitions. We extend the scope of these over-approximation methods by defining a new way of abstraction that can handle such databases, and we formally prove that the abstraction is sound. We realize a translator from a convenient specification language to standard Horn clauses and use the verifier ProVerif and the theorem prover SPASS to solve them. We show by a number of examples that this approach is practically feasible for a wide variety of verification problems of security protocols and web services.

  9. Abstract Objects of Verbs

    DEFF Research Database (Denmark)

    Robering, Klaus

    2014-01-01

    Verbs do often take arguments of quite different types. In an orthodox type-theoretic framework this results in an extreme polysemy of many verbs. In this article, it is shown that this unwanted consequence can be avoided when a theory of "abstract objects" is adopted according to which these objects represent non-objectual entities in contexts from which they are excluded by type restrictions. Thus these objects are "abstract" in a functional rather than in an ontological sense: they function as representatives of other entities but they are otherwise quite normal objects. Three examples...

  10. Codes Over Hyperfields

    Directory of Open Access Journals (Sweden)

    Atamewoue Surdive

    2017-12-01

    Full Text Available In this paper, we define linear codes and cyclic codes over a finite Krasner hyperfield and we characterize these codes by their generator matrices and parity check matrices. We also demonstrate that codes over finite Krasner hyperfields are more interesting for code theory than codes over classical finite fields.

  11. Amino acid codes in mitochondria as possible clues to primitive codes

    Science.gov (United States)

    Jukes, T. H.

    1981-01-01

    Differences between mitochondrial codes and the universal code indicate that an evolutionary simplification has taken place, rather than a return to a more primitive code. However, these differences make it evident that the universal code is not the only code possible, and therefore earlier codes may have differed markedly from the present code. The present universal code is probably a 'frozen accident.' The change in CUN codons from leucine to threonine (Neurospora vs. yeast mitochondria) indicates that neutral or near-neutral changes occurred in the corresponding proteins when this code change took place, caused presumably by a mutation in a tRNA gene.
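A couple of the documented differences can be made concrete. The entries below follow standard genetic-code tables (only five codons are shown, not the full 64-entry tables): the CUN family encodes leucine in the universal code but threonine in yeast mitochondria, and UGA, a stop codon universally, is read as tryptophan in many mitochondrial codes.

```python
# Minimal comparison of a few codon assignments (full tables have 64 entries).
# Universal code: CUN codons = leucine, UGA = stop.
# Yeast mitochondrial code: CUN codons = threonine, UGA = tryptophan.

UNIVERSAL = {"CUU": "Leu", "CUC": "Leu", "CUA": "Leu", "CUG": "Leu",
             "UGA": "Stop"}
YEAST_MITO = {"CUU": "Thr", "CUC": "Thr", "CUA": "Thr", "CUG": "Thr",
              "UGA": "Trp"}

changed = {c for c in UNIVERSAL if UNIVERSAL[c] != YEAST_MITO[c]}
print(sorted(changed))  # all four CUN codons plus UGA differ
```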

  12. LAVENDER: A steady-state core analysis code for design studies of accelerator driven subcritical reactors

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Shengcheng; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi, E-mail: yqzheng@mail.xjtu.edu.cn; Huang, Kai; He, Mingtao; Li, Xunzhao

    2014-10-15

    Highlights: • A new code system for design studies of accelerator driven subcritical reactors (ADSRs) is developed. • S{sub N} transport solver in triangular-z meshes, fine depletion analysis and multi-channel thermal-hydraulics analysis are coupled in the code. • Numerical results indicate that the code is reliable and efficient for design studies of ADSRs. - Abstract: Accelerator driven subcritical reactors (ADSRs) have been proposed and widely investigated for the transmutation of transuranics (TRUs). ADSRs have several special characteristics, such as the subcritical core driven by spallation neutrons, anisotropic neutron flux distribution and complex geometry, etc. These bring up requirements for the development or extension of analysis codes to perform design studies. A code system named LAVENDER has been developed in this paper. It couples the modules for spallation target simulation and subcritical core analysis. The neutron transport-depletion calculation scheme is used based on the homogenized cross section from assembly calculations. A three-dimensional S{sub N} nodal transport code based on triangular-z meshes is employed and a multi-channel thermal-hydraulics analysis model is integrated. In the depletion calculation, the evolution of isotopic composition in the core is evaluated using the transmutation trajectory analysis algorithm (TTA) and fine depletion chains. The new code is verified by several benchmarks and code-to-code comparisons. Numerical results indicate that LAVENDER is reliable and efficient for the steady-state analysis and reactor core design of ADSRs.
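Transmutation trajectory analysis of the kind mentioned above resolves depletion into linear chains, each of which has an analytic (Bateman) solution. A toy sketch for a two-member chain follows; the decay constants and time are illustrative numbers, not reactor data.

```python
import math

# Toy two-member linear chain A -> B -> (removed), solved with the analytic
# Bateman solution that trajectory-analysis methods apply chain by chain.
# lam_a, lam_b and t are illustrative, not evaluated nuclear data.

def bateman_two(n_a0, lam_a, lam_b, t):
    n_a = n_a0 * math.exp(-lam_a * t)
    n_b = (n_a0 * lam_a / (lam_b - lam_a)
           * (math.exp(-lam_a * t) - math.exp(-lam_b * t)))
    return n_a, n_b

n_a, n_b = bateman_two(1.0, 0.5, 0.1, 2.0)  # parent decays, daughter builds up
```

A full TTA implementation sums such closed-form terms over every trajectory through the depletion chain instead of integrating the coupled ODEs numerically.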

  13. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    Science.gov (United States)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g., factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present

  14. Abstract concepts in grounded cognition

    NARCIS (Netherlands)

    Lakens, D.

    2010-01-01

    When people think about highly abstract concepts, they draw upon concrete experiences to structure their thoughts. For example, black knights in fairytales are evil, and knights in shining armor are good. The sensory experiences black and white are used to represent the abstract concepts of good and

  15. Code development for eigenvalue total sensitivity analysis and total uncertainty analysis

    International Nuclear Information System (INIS)

    Wan, Chenghui; Cao, Liangzhi; Wu, Hongchun; Zu, Tiejun; Shen, Wei

    2015-01-01

    Highlights: • We develop a new code for total sensitivity and uncertainty analysis. • The implicit effects of cross sections can be considered. • The results of our code agree well with TSUNAMI-1D. • Detailed analysis for origins of implicit effects is performed. - Abstract: The uncertainties of multigroup cross sections notably impact the eigenvalue of the neutron-transport equation. We report on a total sensitivity analysis and total uncertainty analysis code named UNICORN that has been developed by applying the direct numerical perturbation method and the statistical sampling method. In order to consider the contributions of various basic cross sections and the implicit effects which are indirect results of multigroup cross sections through the resonance self-shielding calculation, an improved multigroup cross-section perturbation model is developed. The DRAGON 4.0 code, with application of the WIMSD-4 format library, is used by UNICORN to carry out the resonance self-shielding and neutron-transport calculations. In addition, the bootstrap technique has been applied to the statistical sampling method in UNICORN to obtain much steadier and more reliable uncertainty results. The UNICORN code has been verified against TSUNAMI-1D by analyzing the case of a TMI-1 pin-cell. The numerical results show that the total uncertainty of the eigenvalue caused by cross sections can reach up to about 0.72%. Therefore the contributions of the basic cross sections and their implicit effects are not negligible
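The statistical-sampling-plus-bootstrap idea can be sketched with a toy response standing in for the transport calculation. The model, the 2% input uncertainty, and the sample counts below are all illustrative assumptions, not values from UNICORN.

```python
import random
import statistics

# Toy sketch of sampling-based uncertainty propagation with a bootstrap:
# perturb an input parameter (standing in for a multigroup cross section),
# push each sample through a cheap response model, then bootstrap the sampled
# outputs to stabilize the standard-deviation estimate.

random.seed(0)
model = lambda sigma: 1.0 / (1.0 + sigma)   # stand-in for an eigenvalue response
samples = [model(random.gauss(1.0, 0.02)) for _ in range(1000)]

boot = [statistics.stdev(random.choices(samples, k=len(samples)))
        for _ in range(200)]
rel_unc = statistics.mean(boot) / statistics.mean(samples)
print(f"relative output uncertainty: {rel_unc:.2%}")
```

For this toy response a 2% input uncertainty propagates to roughly a 1% relative output uncertainty, and the bootstrap average is much steadier across reruns than a single stdev estimate.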

  16. Coding for dummies

    CERN Document Server

    Abraham, Nikhil

    2015-01-01

    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skill

  17. VVER 1000 SBO calculations with pressuriser relief valve stuck open with ASTEC computer code

    International Nuclear Information System (INIS)

    Atanasova, B.P.; Stefanova, A.E.; Groudev, P.P.

    2012-01-01

    Highlights: ► We modelled the ASTEC input file for an accident scenario (SBO) and focused analyses on the behaviour of core degradation. ► We assumed opening and stuck-open of the pressurizer relief valve during performance of the SBO scenario. ► ASTEC v1.3.2 has been used as a reference code for the comparison study with the new version of the ASTEC code. - Abstract: The objective of this paper is to present the results obtained from performing the calculations with the ASTEC computer code for the Source Term evaluation for a specific severe accident transient. The calculations have been performed with the new version of ASTEC. The ASTEC V2 code version is released by the French IRSN (Institut de Radioprotection et de Sûreté Nucléaire) and Gesellschaft für Anlagen- und Reaktorsicherheit (GRS), Germany. This investigation has been performed in the framework of the SARNET2 project (under the Euratom 7th framework program) by the Institute for Nuclear Research and Nuclear Energy – Bulgarian Academy of Sciences (INRNE-BAS).

  18. Ghana Science Abstracts

    International Nuclear Information System (INIS)

    Entsua-Mensah, C.

    2004-01-01

    This issue of the Ghana Science Abstracts combines in one publication all the country's bibliographic output in science and technology. The objective is to provide a quick reference source to facilitate the work of information professionals, research scientists, lecturers and policy makers. It is meant to give users an idea of the depth, scope and results of the studies and projects carried out. The scope and coverage comprise research outputs, conference proceedings and periodical articles published in Ghana. It does not capture those that were published outside Ghana. Abstracts reported have been grouped under the following subject areas: agriculture, biochemistry, biodiversity conservation, biological sciences, biotechnology, chemistry, dentistry, engineering, environmental management, forestry, information management, mathematics, medicine, physics, nuclear science, pharmacy, renewable energy and science education

  19. Building Safe Concurrency Abstractions

    DEFF Research Database (Denmark)

    Madsen, Ole Lehrmann

    2014-01-01

    Concurrent object-oriented programming in Beta is based on semaphores and coroutines and the ability to define high-level concurrency abstractions like monitors, and rendezvous-based communication, and their associated schedulers. The coroutine mechanism of SIMULA has been generalized into the no...

  20. Dynamic Shannon Coding

    OpenAIRE

    Gagie, Travis

    2005-01-01

    We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.
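The static Shannon code on which the dynamic algorithm builds assigns a symbol of probability p a codeword of ceil(log2(1/p)) bits taken from the binary expansion of the running cumulative probability. The sketch below implements only that static construction, not the dynamic variant proposed in the paper; the example distribution is illustrative.

```python
import math

def shannon_code(probs):
    """Static Shannon code: sort symbols by decreasing probability; the symbol
    with probability p gets the first ceil(log2(1/p)) bits of the binary
    expansion of the cumulative probability of the symbols before it."""
    items = sorted(probs.items(), key=lambda kv: -kv[1])
    code, cum = {}, 0.0
    for sym, p in items:
        length = math.ceil(-math.log2(p))
        bits, frac = "", cum
        for _ in range(length):
            frac *= 2
            bit, frac = divmod(frac, 1)
            bits += str(int(bit))
        code[sym] = bits
        cum += p
    return code

code = shannon_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
# lengths 1, 2, 3, 3 -- and the code is prefix-free by construction
```

Each codeword length is within one bit of the symbol's self-information, which is what gives Shannon coding its redundancy bound of less than one bit per symbol.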

  1. The correspondence between projective codes and 2-weight codes

    NARCIS (Netherlands)

    Brouwer, A.E.; Eupen, van M.J.M.; Tilborg, van H.C.A.; Willems, F.M.J.

    1994-01-01

    The hyperplanes intersecting a 2-weight code in the same number of points obviously form the point set of a projective code. On the other hand, if we have a projective code C, then we can make a 2-weight code by taking the multiset of points ∈ PC with multiplicity γ(w), where w is the weight of
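The two-weight property itself is easy to check computationally: enumerate the codewords of a small binary linear code from its generator matrix and collect the set of nonzero weights. The [5,2] generator matrix below is an illustrative example, not taken from the paper.

```python
from itertools import product

# Enumerate a small binary linear code and collect its nonzero codeword
# weights; a two-weight code has exactly two distinct nonzero weights.
G = [[1, 1, 1, 0, 0],
     [0, 0, 1, 1, 1]]

def codewords(G):
    n = len(G[0])
    for msg in product([0, 1], repeat=len(G)):
        yield tuple(sum(m * row[j] for m, row in zip(msg, G)) % 2
                    for j in range(n))

weights = {sum(cw) for cw in codewords(G) if any(cw)}
print(weights)  # two distinct nonzero weights: a two-weight code
```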

  2. Impact of Concreteness on Comprehensibility, Interest, and Memory for Text: Implications for Dual Coding Theory and Text Design.

    Science.gov (United States)

    Sadoski, Mark; And Others

    1993-01-01

    The comprehensibility, interestingness, familiarity, and memorability of concrete and abstract instructional texts were studied in 4 experiments involving 221 college students. Results indicate that concreteness (ease of imagery) is the variable overwhelmingly most related to comprehensibility and recall. Dual coding theory and schema theory are…

  3. Memory for pictures and words as a function of level of processing: Depth or dual coding?

    Science.gov (United States)

    D'Agostino, P R; O'Neill, B J; Paivio, A

    1977-03-01

    The experiment was designed to test differential predictions derived from dual-coding and depth-of-processing hypotheses. Subjects under incidental memory instructions free recalled a list of 36 test events, each presented twice. Within the list, an equal number of events were assigned to structural, phonemic, and semantic processing conditions. Separate groups of subjects were tested with a list of pictures, concrete words, or abstract words. Results indicated that retention of concrete words increased as a direct function of the processing-task variable (structural memory performance. These data provided strong support for the dual-coding model.

  4. Quality Improvement of MARS Code and Establishment of Code Coupling

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Jeong, Jae Jun; Kim, Kyung Doo

    2010-04-01

    The improvement of MARS code quality and coupling with the regulatory auditing code have been accomplished for the establishment of a self-reliable technology based regulatory auditing system. The unified auditing system code was also realized by implementing the CANDU specific models and correlations. As a part of the quality assurance activities, various QA reports were published through the code assessments. The code manuals were updated, and a new manual was published which describes the new models and correlations. The code coupling methods were verified through the exercise of plant application. The education-training seminar and technology transfer were performed for the code users. The developed MARS-KS is utilized as a reliable auditing tool for resolving safety issues and other regulatory calculations. The code can be utilized as a base technology for GEN IV reactor applications

  5. Hybrid microscopic depletion model in nodal code DYN3D

    International Nuclear Information System (INIS)

    Bilodid, Y.; Kotlyar, D.; Shwageraus, E.; Fridman, E.; Kliem, S.

    2016-01-01

    Highlights: • A new hybrid method of accounting for spectral history effects is proposed. • Local concentrations of over 1000 nuclides are calculated using micro depletion. • The new method is implemented in the nodal code DYN3D and verified. - Abstract: The paper presents a general hybrid method that combines the micro-depletion technique with correction of micro- and macro-diffusion parameters to account for the spectral history effects. The fuel in a core is subjected to time- and space-dependent operational conditions (e.g. coolant density), which cannot be predicted in advance. However, lattice codes assume some average conditions to generate cross sections (XS) for nodal diffusion codes such as DYN3D. Deviation of the local operational history from the average conditions leads to accumulation of errors in XS, which is referred to as spectral history effects. Various methods to account for the spectral history effects, such as the spectral index, burnup-averaged operational parameters and micro-depletion, were implemented in some nodal codes. Recently, an alternative method, which characterizes the fuel depletion state by burnup and 239Pu concentration (denoted as Pu-correction), was proposed, implemented in the nodal code DYN3D and verified for a wide range of history effects. The method is computationally efficient; however, it has applicability limitations. The current study seeks to improve the accuracy and applicability range of the Pu-correction method. The proposed hybrid method combines the micro-depletion method with a XS characterization technique similar to the Pu-correction method. The method was implemented in DYN3D and verified on multiple test cases. The results obtained with DYN3D were compared to those obtained with the Monte Carlo code Serpent, which was also used to generate the XS. The observed differences are within the statistical uncertainties.

  6. Neural Elements for Predictive Coding

    Directory of Open Access Journals (Sweden)

    Stewart SHIPP

    2016-11-01

    Full Text Available Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backwards in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many ‘illusory’ instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forwards and backwards pathways should be completely separate, given their functional distinction; this aspect of circuitry – that neurons with extrinsically bifurcating axons do not project in both directions – has only recently been confirmed. Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic ‘canonical microcircuit’ and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made

  7. Neural Elements for Predictive Coding.

    Science.gov (United States)

    Shipp, Stewart

    2016-01-01

    Predictive coding theories of sensory brain function interpret the hierarchical construction of the cerebral cortex as a Bayesian, generative model capable of predicting the sensory data consistent with any given percept. Predictions are fed backward in the hierarchy and reciprocated by prediction error in the forward direction, acting to modify the representation of the outside world at increasing levels of abstraction, and so to optimize the nature of perception over a series of iterations. This accounts for many 'illusory' instances of perception where what is seen (heard, etc.) is unduly influenced by what is expected, based on past experience. This simple conception, the hierarchical exchange of prediction and prediction error, confronts a rich cortical microcircuitry that is yet to be fully documented. This article presents the view that, in the current state of theory and practice, it is profitable to begin a two-way exchange: that predictive coding theory can support an understanding of cortical microcircuit function, and prompt particular aspects of future investigation, whilst existing knowledge of microcircuitry can, in return, influence theoretical development. As an example, a neural inference arising from the earliest formulations of predictive coding is that the source populations of forward and backward pathways should be completely separate, given their functional distinction; this aspect of circuitry - that neurons with extrinsically bifurcating axons do not project in both directions - has only recently been confirmed. Here, the computational architecture prescribed by a generalized (free-energy) formulation of predictive coding is combined with the classic 'canonical microcircuit' and the laminar architecture of hierarchical extrinsic connectivity to produce a template schematic, that is further examined in the light of (a) updates in the microcircuitry of primate visual cortex, and (b) rapid technical advances made possible by transgenic neural
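The prediction/error exchange described here can be caricatured with a one-latent-variable sketch. The linear generative model, gain, learning rate, and input are illustrative assumptions, not the article's circuit: a latent estimate generates a prediction of the input, the mismatch is the forward prediction error, and the backward update revises the estimate until the error vanishes.

```python
# Minimal caricature of the hierarchical prediction/error exchange with a
# single latent variable r: the generative model predicts g*r, the forward
# signal is the prediction error, and the backward correction updates r.
# All constants are illustrative.

def infer(x, g=2.0, lr=0.05, steps=200):
    r = 0.0                      # latent representation (perceptual estimate)
    for _ in range(steps):
        err = x - g * r          # forward prediction error
        r += lr * g * err        # backward update drives the error down
    return r

r = infer(x=3.0)
# r converges toward x / g, the value at which prediction error vanishes
```

Iterating the same exchange across several stacked levels, each predicting the level below, is the hierarchical scheme the abstract describes.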

  8. Content Analysis by the Crowd: Assessing the Usability of Crowdsourcing for Coding Latent Constructs

    Science.gov (United States)

    Lind, Fabienne; Gruber, Maria; Boomgaarden, Hajo G.

    2017-01-01

    ABSTRACT Crowdsourcing platforms are commonly used for research in the humanities, social sciences and informatics, including the use of crowdworkers to annotate textual material or visuals. Utilizing two empirical studies, this article systematically assesses the potential of crowdcoding for less manifest contents of news texts, here focusing on political actor evaluations. Specifically, Study 1 compares the reliability and validity of crowdcoded data to that of manual content analyses; Study 2 proceeds to investigate the effects of material presentation, different types of coding instructions and answer option formats on data quality. We find that the performance of the crowd recommends crowdcoded data as a reliable and valid alternative to manually coded data, also for less manifest contents. While scale manipulations affected the results, minor modifications of the coding instructions or material presentation did not significantly influence data quality. In sum, crowdcoding appears a robust instrument to collect quantitative content data. PMID:29118893
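Reliability comparisons of this kind typically use chance-corrected agreement. Below is a minimal sketch of Cohen's kappa between two coders; the label sequences are invented for illustration, and the studies themselves may report other coefficients (e.g. Krippendorff's alpha) for multi-coder data.

```python
from collections import Counter

# Chance-corrected agreement (Cohen's kappa) between two label sequences,
# e.g. crowdcoded vs. manually coded actor evaluations. Labels are made up.

def cohens_kappa(a, b):
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n           # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[l] * cb[l] for l in ca) / (n * n)        # chance agreement
    return (po - pe) / (1 - pe)

crowd  = ["pos", "neg", "neg", "pos", "neu", "neg"]
manual = ["pos", "neg", "pos", "pos", "neu", "neg"]
k = cohens_kappa(crowd, manual)
```

Here five of six labels agree, and correcting for the chance agreement implied by each coder's label distribution yields a kappa of about 0.74.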

  9. Hydrogen abstraction reactions by amide electron adducts

    International Nuclear Information System (INIS)

    Sevilla, M.D.; Sevilla, C.L.; Swarts, S.

    1982-01-01

    Electron reactions with a number of peptide model compounds (amides and N-acetylamino acids) in aqueous glasses at low temperature have been investigated using ESR spectroscopy. The radicals produced by electron attachment to amides, RC(OD)NDR', are found to act as hydrogen abstracting agents. For example, the propionamide electron adduct is found to abstract from its parent propionamide. Electron adducts of the other amides investigated show similar behavior, except for the acetamide electron adduct, which does not abstract from its parent compound but does abstract from other amides. The tendency toward abstraction of the amide electron adducts is compared to that of the electron adducts of several carboxylic acids, ketones, aldehydes and esters. The comparison suggests the hydrogen abstraction tendency of the various deuterated electron adducts (DEAs) to be in the following order: aldehyde DEA > acid DEA ≈ ester DEA > ketone DEA > amide DEA. In basic glasses the hydrogen abstraction ability of the amide electron adducts is maintained until the concentration of base is increased sufficiently to convert the DEA to its anionic form, RC(O-)ND2. In this form the hydrogen abstracting ability of the radical is greatly diminished. Similar results were found for the ester and carboxylic acid DEAs tested. (author)

  10. Minimalism in architecture: Abstract conceptualization of architecture

    Directory of Open Access Journals (Sweden)

    Vasilski Dragana

    2015-01-01

    Minimalism in architecture contains the idea of the minimum as a leading creative principle, to be considered and interpreted through the phenomena of empathy and abstraction. In Western culture, the root of this idea is found in the empathy of Wilhelm Worringer and the abstraction of Kasimir Malevich. In his dissertation 'Abstraction and Empathy', Worringer presented his thesis on the psychology of style, through which he explained two opposing basic forms: abstraction and empathy. His conclusion on empathy as a psychological basis of observational expression is significant due to its verbal congruence with contemporary minimalist expression. His intuition was further reinforced by the figure of Malevich. Abstraction, as an expression of inner unfettered inspiration, played a crucial role in the development of modern art and architecture of the twentieth century. Abstraction, one of the basic methods of learning in psychology (separating relevant from irrelevant features; Carl Jung), is used to discover ideas. Minimalism in architecture emphasizes the level of abstraction to which the individual functions are reduced. Different types of abstraction are present: in the form as well as the function of the basic elements, walls and windows. The case study is the example of Sou Fujimoto, who is unequivocal in his commitment to the autonomy of abstract conceptualization of architecture.

  11. Reading strategies of Chinese students with severe to profound hearing loss.

    Science.gov (United States)

    Cheung, Ka Yan; Leung, Man Tak; McPherson, Bradley

    2013-01-01

    The present study investigated the significance of auditory discrimination and the use of phonological and orthographic codes during the course of reading development in Chinese students who are deaf or hard of hearing (D/HH). In this study, the reading behaviors of D/HH students in two tasks (a task on auditory perception of onset rime and a synonym decision task) were compared with those of their chronological age-matched and reading level (RL)-matched controls. Cross-group comparison of the performances of participants in the task on auditory perception suggests that poor auditory discrimination ability may be a possible cause of reading problems for D/HH students. In addition, results of the synonym decision task reveal that D/HH students with poor reading ability demonstrate a significantly greater preference for orthographic rather than phonological information, when compared with the D/HH students with good reading ability and their RL-matched controls. Implications for future studies and educational planning are discussed.

  12. Comparison of DT neutron production codes MCUNED, ENEA-JSI source subroutine and DDT

    Energy Technology Data Exchange (ETDEWEB)

    Čufar, Aljaž, E-mail: aljaz.cufar@ijs.si [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Lengar, Igor; Kodeli, Ivan [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Milocco, Alberto [Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom); Sauvan, Patrick [Departamento de Ingeniería Energética, E.T.S. Ingenieros Industriales, UNED, C/Juan del Rosal 12, 28040 Madrid (Spain); Conroy, Sean [VR Association, Uppsala University, Department of Physics and Astronomy, PO Box 516, SE-75120 Uppsala (Sweden); Snoj, Luka [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia)

    2016-11-01

    Highlights: • Results of three codes capable of simulating accelerator-based DT neutron generators were compared on a simple model where only a thin target made of a mixture of titanium and tritium is present. Two typical deuteron beam energies, 100 keV and 250 keV, were used in the comparison. • Comparisons of the angular dependence of the total neutron flux and spectrum as well as the neutron spectrum of all the neutrons emitted from the target show general agreement of the results but also some noticeable differences. • A comparison of figures of merit of the calculations using different codes showed that the computational time necessary to achieve the same statistical uncertainty can vary by more than a factor of 30 when different codes for the simulation of the DT neutron generator are used. - Abstract: As the DT fusion reaction produces neutrons with energies significantly higher than in fission reactors, special fusion-relevant benchmark experiments are often performed using DT neutron generators. However, commonly used Monte Carlo particle transport codes such as MCNP or TRIPOLI cannot be directly used to analyze these experiments since they do not have the capabilities to model the production of DT neutrons. Three of the available approaches to model the DT neutron generator source are the MCUNED code, the ENEA-JSI DT source subroutine and the DDT code. The MCUNED code is an extension of the well-established and validated MCNPX Monte Carlo code. The ENEA-JSI source subroutine was originally prepared for the modelling of the FNG experiments using different versions of the MCNP code (−4, −5, −X) and was later extended to allow the modelling of both DT and DD neutron sources. The DDT code prepares the DT source definition file (SDEF card in MCNP) which can then be used in different versions of the MCNP code. In the paper the methods for the simulation of the DT neutron production used in the codes are briefly described and compared for the case of a
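The figure-of-merit comparison in the highlights can be made concrete with the standard Monte Carlo definition FOM = 1/(R²·T), where R is the relative error of a tally and T the computing time; a minimal sketch (the numbers in the example are illustrative, not values from the paper):

```python
def figure_of_merit(rel_err, cpu_time):
    # Standard Monte Carlo figure of merit: FOM = 1 / (R^2 * T).
    # For a well-behaved tally the FOM is roughly constant during a run,
    # so the ratio of two codes' FOMs estimates how many times longer
    # the slower code must run to reach the same statistical uncertainty.
    return 1.0 / (rel_err ** 2 * cpu_time)

# Illustrative comparison: if code A reaches R = 1% in 100 s while code B
# needs 3000 s for the same R, their FOM ratio is 30.
ratio = figure_of_merit(0.01, 100.0) / figure_of_merit(0.01, 3000.0)
```
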

  13. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    Science.gov (United States)

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes such as convolutional and spatially-coupled codes can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.
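To give a concrete feel for the local codes mentioned above, the sketch below builds the parity-check matrix of a binary Hamming code, one of the candidate component-code families; this is the generic textbook construction, not the paper's GLDPC design:

```python
def hamming_parity_check(r):
    # Parity-check matrix H of the [2^r - 1, 2^r - 1 - r] Hamming code:
    # the columns of H are exactly the nonzero binary vectors of length r,
    # which gives the code its single-error-correcting property.
    n = 2 ** r - 1
    return [[(c >> (r - 1 - i)) & 1 for c in range(1, n + 1)]
            for i in range(r)]

# H for the classic [7, 4] Hamming code: a 3 x 7 binary matrix.
H = hamming_parity_check(3)
```

In a GLDPC construction, each check node of the global graph enforces membership in such a local code rather than a single parity equation.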

  14. Improved algorithms for approximate string matching (extended abstract)

    Directory of Open Access Journals (Sweden)

    Papamichail Georgios

    2009-01-01

    Abstract Background The problem of approximate string matching is important in many different areas such as computational biology, text processing and pattern recognition. A great effort has been made to design efficient algorithms addressing several variants of the problem, including comparison of two strings, approximate pattern identification in a string or calculation of the longest common subsequence that two strings share. Results We designed an output-sensitive algorithm solving the edit distance problem between two strings of lengths n and m respectively in time O((s - |n - m|)·min(m, n, s) + m + n) and linear space, where s is the edit distance between the two strings. This worst-case time bound sets the quadratic factor of the algorithm independent of the longest string length and improves existing theoretical bounds for this problem. The implementation of our algorithm also excels in practice, especially in cases where the two strings compared differ significantly in length. Conclusion We have provided the design, analysis and implementation of a new algorithm for calculating the edit distance of two strings with both theoretical and practical implications. Source code of our algorithm is available online.
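The flavour of such output-sensitive bounds can be illustrated with a generic band-doubling (Ukkonen-style) edit-distance sketch, in which the dynamic program is restricted to a diagonal band whose width grows with the distance; this illustrates the general technique, not the authors' exact algorithm:

```python
def edit_distance(a, b):
    # Output-sensitive edit distance: try increasing band widths until
    # the true distance fits inside the band (band-doubling trick).
    if len(a) < len(b):
        a, b = b, a                      # ensure len(a) >= len(b)
    t = max(1, len(a) - len(b))
    while True:
        d = _banded(a, b, t)
        if d is not None and d <= t:
            return d
        t *= 2

def _banded(a, b, t):
    # DP restricted to diagonals within +/- t of the main diagonal;
    # exact whenever the true distance is at most t.
    n, m = len(a), len(b)
    if n - m > t:
        return None
    INF = t + 1
    prev = [j if j <= t else INF for j in range(m + 1)]
    for i in range(1, n + 1):
        cur = [INF] * (m + 1)
        if i <= t:
            cur[0] = i
        for j in range(max(1, i - t), min(m, i + t) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j - 1] + cost,  # substitute / match
                         prev[j] + 1,         # delete from a
                         cur[j - 1] + 1)      # insert into a
        prev = cur
    return prev[m] if prev[m] <= t else None
```

Each banded pass costs O(t·min(n, m)), so the doubling loop stays output-sensitive in the distance s.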

  15. A portable, parallel, object-oriented Monte Carlo neutron transport code in C++

    International Nuclear Information System (INIS)

    Lee, S.R.; Cummings, J.C.; Nolen, S.D.

    1997-01-01

    We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k- and α-eigenvalues and is portable to, and runs in parallel on, a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed.

  16. Control rod drop transient analysis with the coupled parallel code pCTF-PARCSv2.7

    International Nuclear Information System (INIS)

    Ramos, Enrique; Roman, Jose E.; Abarca, Agustín; Miró, Rafael; Bermejo, Juan A.

    2016-01-01

    Highlights: • An MPI parallel version of the thermal–hydraulic subchannel code COBRA-TF has been developed. • The parallel code has been coupled to the 3D neutron diffusion code PARCSv2.7. • The new codes are validated with a control rod drop transient. - Abstract: In order to reduce the response time when simulating large reactors in detail, a parallel version of the thermal–hydraulic subchannel code COBRA-TF (CTF) has been developed using the standard Message Passing Interface (MPI). The parallelization is oriented to reactor cells, so it is best suited for models consisting of many cells. The generation of the Jacobian matrix is parallelized, in such a way that each processor is in charge of generating the data associated with a subset of cells. Also, the solution of the linear system of equations is done in parallel, using the PETSc toolkit. With the goal of creating a powerful tool to simulate the reactor core behavior during asymmetrical transients, the 3D neutron diffusion code PARCSv2.7 (PARCS) has been coupled with the parallel version of CTF (pCTF) using the Parallel Virtual Machine (PVM) technology. In order to validate the correctness of the parallel coupled code, a control rod drop transient has been simulated comparing the results with the real experimental measures acquired during an NPP real test.

  17. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L X L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

  18. Rateless feedback codes

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes, by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback to LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.
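For background, LT codes draw each output symbol's degree from a soliton distribution, and the feedback-aware distributions in the paper modify this idea. A sketch of the classical ideal soliton distribution (the textbook form, not the paper's redesigned distributions):

```python
def ideal_soliton(k):
    # Ideal soliton degree distribution for LT codes over k input symbols:
    # rho(1) = 1/k, rho(d) = 1/(d*(d-1)) for d = 2..k.
    # The telescoping sum 1/(d*(d-1)) = 1/(d-1) - 1/d makes it sum to 1.
    rho = [0.0] * (k + 1)      # rho[d] = probability of degree d
    rho[1] = 1.0 / k
    for d in range(2, k + 1):
        rho[d] = 1.0 / (d * (d - 1))
    return rho
```

Practical LT codes use the robust soliton variant, which adds extra probability mass at low degrees and one high degree to keep the decoder's ripple alive.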

  19. The Complexity of Abstract Machines

    Directory of Open Access Journals (Sweden)

    Beniamino Accattoli

    2017-01-01

    The lambda-calculus is a peculiar computational model whose definition does not come with a notion of machine. Unsurprisingly, implementations of the lambda-calculus have been studied for decades. Abstract machines are implementation schemas for fixed evaluation strategies that are a compromise between theory and practice: they are concrete enough to provide a notion of machine and abstract enough to avoid the many intricacies of actual implementations. There is an extensive literature about abstract machines for the lambda-calculus, and yet—quite mysteriously—the efficiency of these machines with respect to the strategy that they implement has almost never been studied. This paper provides an unusual introduction to abstract machines, based on the complexity of their overhead with respect to the length of the implemented strategies. It is conceived to be a tutorial, focusing on the case study of implementing the weak head (call-by-name) strategy, and yet it is an original re-elaboration of known results. Moreover, some of the observations contained here have never appeared in print before.
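The weak head call-by-name strategy discussed here is classically implemented by the Krivine abstract machine. A minimal sketch using de Bruijn indices (an illustration of the standard machine, not the paper's formulations):

```python
# Terms use de Bruijn indices: ("var", i), ("lam", body), ("app", fun, arg).
def krivine(term):
    # Krivine machine: reduces a closed term to weak head normal form
    # under call-by-name. The environment holds closures (term, env);
    # the stack holds closures of pending, unevaluated arguments.
    env, stack = [], []
    t = term
    while True:
        if t[0] == "app":
            stack.append((t[2], env))   # push argument as a closure
            t = t[1]
        elif t[0] == "lam":
            if not stack:
                return (t, env)         # weak head normal form reached
            env = [stack.pop()] + env   # bind argument to index 0
            t = t[1]
        else:                           # ("var", i): enter its closure
            t, env = env[t[1]]

I = ("lam", ("var", 0))                 # identity: lambda x. x
K = ("lam", ("lam", ("var", 1)))        # lambda x. lambda y. x
```

Because arguments are pushed unevaluated and only forced when a variable is entered, the per-step overhead is constant, which is exactly the kind of cost accounting the paper studies.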

  20. Abstraction of man-made shapes

    KAUST Repository

    Mehra, Ravish; Zhou, Qingnan; Long, Jeremy; Sheffer, Alla; Gooch, Amy Ashurst; Mitra, Niloy J.

    2009-01-01

    Man-made objects are ubiquitous in the real world and in virtual environments. While such objects can be very detailed, capturing every small feature, they are often identified and characterized by a small set of defining curves. Compact, abstracted shape descriptions based on such curves are often visually more appealing than the original models, which can appear to be visually cluttered. We introduce a novel algorithm for abstracting three-dimensional geometric models using characteristic curves or contours as building blocks for the abstraction. Our method robustly handles models with poor connectivity, including the extreme cases of polygon soups, common in models of man-made objects taken from online repositories. In our algorithm, we use a two-step procedure that first approximates the input model using a manifold, closed envelope surface and then extracts from it a hierarchical abstraction curve network along with suitable normal information. The constructed curve networks form a compact, yet powerful, representation for the input shapes, retaining their key shape characteristics while discarding minor details and irregularities. © 2009 ACM.

  1. Scientific meeting abstracts

    International Nuclear Information System (INIS)

    1999-01-01

    The document is a collection of the scientific meeting abstracts in the fields of nuclear physics, medical sciences, chemistry, agriculture, environment, engineering and different aspects of energy, and presents research done in 1999 in these fields

  2. Abstract Objects of Verbs

    DEFF Research Database (Denmark)

    2014-01-01

    Verbs often take arguments of quite different types. In an orthodox type-theoretic framework this results in an extreme polysemy of many verbs. In this article, it is shown that this unwanted consequence can be avoided when a theory of "abstract objects" is adopted, according to which these objects represent non-objectual entities in contexts from which they are excluded by type restrictions. Thus these objects are "abstract" in a functional rather than in an ontological sense: they function as representatives of other entities but are otherwise quite normal objects. Three examples

  3. Studies on the liquid fluoride thorium reactor: Comparative neutronics analysis of MCNP6 code with SRAC95 reactor analysis code based on FUJI-U3-(0)

    Energy Technology Data Exchange (ETDEWEB)

    Jaradat, S.Q., E-mail: sqjxv3@mst.edu; Alajo, A.B., E-mail: alajoa@mst.edu

    2017-04-01

    Highlights: • The verification for FUJI-U3-(0)—a molten salt reactor—was performed. • MCNP6 was used to study the reactor physics characteristics of the FUJI-U3 type. • The results from MCNP6 were comparable with those obtained from the literature. - Abstract: The verification for FUJI-U3-(0)—a molten salt reactor—was performed. The reactor used LiF-BeF2-ThF4-UF4 as the mixed liquid fuel salt, and the core was graphite moderated. The MCNP6 code was used to study the reactor physics characteristics of the FUJI-U3-(0) reactor. Results for the reactor physics characteristics of the FUJI-U3-(0) exist in the literature, and these were used as reference. The reference results were obtained using SRAC95 (a reactor analysis code) coupled with ORIGEN2 (a depletion code). Some modifications were made in the reconstruction of the FUJI-U3-(0) reactor in MCNP due to the unavailability of a more detailed description of the reactor core. The assumptions resulted in two representative models of the reactor. The results from the MCNP6 models were compared with the reference results obtained from the literature. The results were comparable with each other, but with some notable differences. The differences arise from the approximations made in the SRAC95 model of the FUJI-U3 to simplify the simulation. Based on the results, it is concluded that the MCNP6 neutronics simulations agree well with the previous simulation work using the SRAC95 code.

  4. New quantum codes derived from a family of antiprimitive BCH codes

    Science.gov (United States)

    Liu, Yang; Li, Ruihu; Lü, Liangdong; Guo, Luobin

    The Bose-Chaudhuri-Hocquenghem (BCH) codes have been studied for more than 57 years and have found wide application in classical communication systems and quantum information theory. In this paper, we study the construction of quantum codes from a family of q²-ary BCH codes with length n = q^(2m) + 1 (also called antiprimitive BCH codes in the literature), where q ≥ 4 is a power of 2 and m ≥ 2. By a detailed analysis of some useful properties of q²-ary cyclotomic cosets modulo n, Hermitian dual-containing conditions for a family of non-narrow-sense antiprimitive BCH codes are presented, which are similar to those of q²-ary primitive BCH codes. Consequently, via the Hermitian Construction, a family of new quantum codes can be derived from these dual-containing BCH codes. Some of these new antiprimitive quantum BCH codes are comparable with those derived from primitive BCH codes.
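The q²-ary cyclotomic cosets modulo n on which the dual-containing analysis rests are easy to compute directly; a small sketch (shown for q = 4, m = 2, i.e. q² = 16 and n = 4^4 + 1 = 257; illustrative, not the paper's code):

```python
def cyclotomic_cosets(q2, n):
    # Partition Z_n into q2-ary cyclotomic cosets C_x = {x * q2^j mod n}.
    # Requires gcd(q2, n) = 1, so multiplication by q2 permutes Z_n.
    cosets, seen = [], set()
    for x in range(n):
        if x in seen:
            continue
        c, y = [], x
        while y not in c:        # follow the orbit of x under *q2 mod n
            c.append(y)
            y = (y * q2) % n
        c.sort()
        cosets.append(c)
        seen.update(c)
    return cosets

# Cosets modulo n = 257 under multiplication by q^2 = 16.
cosets = cyclotomic_cosets(16, 257)
```

The sizes and unions of these cosets determine the designed distance of each BCH code and whether the Hermitian dual-containing condition holds.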

  5. Nuclear-thermal-coupled optimization code for the fusion breeding blanket conceptual design

    International Nuclear Information System (INIS)

    Li, Jia; Jiang, Kecheng; Zhang, Xiaokang; Nie, Xingchen; Zhu, Qinjun; Liu, Songlin

    2016-01-01

    Highlights: • A nuclear-thermal-coupled predesign code has been developed for optimizing the radial build arrangement of the fusion breeding blanket. • The coupling module aims at speeding up the design progress by coupling the neutronics calculation code with the thermal-hydraulic analysis code. • The radial build optimization algorithm aims at an optimal arrangement of the breeding blanket considering one or multiple specified objectives subject to design criteria such as the material temperature limit and the available TBR. - Abstract: The fusion breeding blanket, as one of the key in-vessel components, performs the functions of breeding tritium, removing the nuclear heat and the heat flux from the plasma chamber, and acting as part of the shielding system. The radial build design, which determines the arrangement of function zones and material properties in the radial direction, is the basis of the detailed design of the fusion breeding blanket. To facilitate the radial build design, this study aims to develop a pre-design code to optimize the radial build of the blanket while considering nuclear and thermal-hydraulic performance simultaneously. Two main features of this code are: (1) coupling of the neutronics analysis with the thermal-hydraulic analysis to speed up the analysis progress; (2) a preliminary optimization algorithm using one or multiple specified objectives subject to the design criteria, in the form of constraints imposed on design variables and performance parameters within possible engineering ranges. This pre-design code has been applied to the conceptual design of the water-cooled ceramic breeding blanket in the China fusion engineering testing reactor (CFETR) project.
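The constrained-optimization loop described above can be caricatured as a search over candidate zone thicknesses; everything in this sketch (function names, callback interfaces, the TBR and temperature limits) is a hypothetical stand-in for the coupled neutronics and thermal-hydraulic evaluations, not the actual pre-design code:

```python
import itertools

def optimize_radial_build(candidates, tbr_of, tmax_of,
                          tbr_min=1.05, t_limit=550.0):
    # Exhaustive search over one thickness option per radial zone.
    # tbr_of(combo) and tmax_of(combo) are user-supplied evaluators
    # standing in for the neutronics (tritium breeding ratio) and
    # thermal-hydraulic (peak material temperature) analyses.
    best = None
    for combo in itertools.product(*candidates):
        if tbr_of(combo) >= tbr_min and tmax_of(combo) <= t_limit:
            if best is None or tbr_of(combo) > tbr_of(best):
                best = combo
    return best
```

A real pre-design code would replace the exhaustive loop with a proper multi-objective algorithm, but the constraint structure (TBR floor, temperature ceiling) is the same.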

  6. Kinetic parameters evaluation of PWRs using static cell and core calculation codes

    International Nuclear Information System (INIS)

    Jahanbin, Ali; Malmir, Hessam

    2012-01-01

    Highlights: ► In this study, we have calculated the effective delayed neutron fraction and prompt neutron lifetime in PWRs. ► New software has been developed to link the WIMS, BORGES and CITATION codes in the Visual C computer programming language. ► This software is used for calculation of the kinetic parameters in a typical VVER-1000 and the NOK Beznau reactor. ► The ratios (β_eff)_i/(β_eff)_core, which are important input data for reactivity accident analysis, are also calculated. - Abstract: In this paper, evaluation of the kinetic parameters (effective delayed neutron fraction and prompt neutron lifetime) in PWRs, using static cell and core calculation codes, is reported. New software has been developed to link the WIMS, BORGES and CITATION codes in the Visual C computer programming language. Using the WIMS cell calculation code, multigroup microscopic cross-sections and number densities of different materials can be generated in a binary file. By use of the BORGES code, these binary-form cross-sections and number densities are converted to a format readable by the CITATION core calculation code, by which the kinetic parameters can finally be obtained. This software is used for calculation of the kinetic parameters in a typical VVER-1000 and the NOK Beznau reactor. The ratios (β_eff)_i/(β_eff)_core, which are important input data for reactivity accident analysis, are also calculated. Benchmarking of the results against the final safety analysis reports (FSAR) of the aforementioned reactors shows very good agreement with these published documents.
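Forming the reported ratios is simple arithmetic once the group-wise effective fractions are known; a trivial sketch, assuming (as is conventional) that the core value is the sum of the delayed-group contributions, with purely illustrative numbers:

```python
def beta_eff_ratios(beta_groups):
    # Ratios (beta_eff)_i / (beta_eff)_core, taking the core value to be
    # the sum of the per-group effective delayed neutron fractions.
    core = sum(beta_groups)
    return [b / core for b in beta_groups]

# Illustrative six-group fractions (hypothetical, not values from the paper).
ratios = beta_eff_ratios([2.0e-4, 1.1e-3, 1.0e-3, 2.2e-3, 7.0e-4, 2.0e-4])
```

By construction the ratios sum to one, which is a convenient sanity check on tabulated accident-analysis inputs.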

  7. Nuclear-thermal-coupled optimization code for the fusion breeding blanket conceptual design

    Energy Technology Data Exchange (ETDEWEB)

    Li, Jia, E-mail: lijia@ustc.edu.cn [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230027, Anhui (China); Jiang, Kecheng; Zhang, Xiaokang [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031, Anhui (China); Nie, Xingchen [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230027, Anhui (China); Zhu, Qinjun; Liu, Songlin [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031, Anhui (China)

    2016-12-15

    Highlights: • A nuclear-thermal-coupled predesign code has been developed for optimizing the radial build arrangement of the fusion breeding blanket. • The coupling module aims at speeding up the design progress by coupling the neutronics calculation code with the thermal-hydraulic analysis code. • The radial build optimization algorithm aims at an optimal arrangement of the breeding blanket considering one or multiple specified objectives subject to design criteria such as the material temperature limit and the available TBR. - Abstract: The fusion breeding blanket, as one of the key in-vessel components, performs the functions of breeding tritium, removing the nuclear heat and the heat flux from the plasma chamber, and acting as part of the shielding system. The radial build design, which determines the arrangement of function zones and material properties in the radial direction, is the basis of the detailed design of the fusion breeding blanket. To facilitate the radial build design, this study aims to develop a pre-design code to optimize the radial build of the blanket while considering nuclear and thermal-hydraulic performance simultaneously. Two main features of this code are: (1) coupling of the neutronics analysis with the thermal-hydraulic analysis to speed up the analysis progress; (2) a preliminary optimization algorithm using one or multiple specified objectives subject to the design criteria, in the form of constraints imposed on design variables and performance parameters within possible engineering ranges. This pre-design code has been applied to the conceptual design of the water-cooled ceramic breeding blanket in the China fusion engineering testing reactor (CFETR) project.

  8. Surface acoustic wave coding for orthogonal frequency coded devices

    Science.gov (United States)

    Malocha, Donald (Inventor); Kozlovski, Nikolai (Inventor)

    2011-01-01

    Methods and systems for coding SAW OFC devices to mitigate code collisions in a wireless multi-tag system. Each device produces plural stepped frequencies as an OFC signal with a chip offset delay to increase code diversity. A method for assigning a different OFC to each device includes using a matrix based on the number of OFCs needed and the number of chips per code, populating each matrix cell with an OFC chip, and assigning the codes from the matrix to the devices. The asynchronous passive multi-tag system includes plural surface acoustic wave devices each producing a different OFC signal having the same number of chips and including a chip offset time delay, an algorithm for assigning OFCs to each device, and a transceiver to transmit an interrogation signal and receive OFC signals in response with minimal code collisions during transmission.
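The matrix idea can be illustrated with a toy cyclic (Latin-square style) assignment in which no two devices occupy the same frequency in the same chip slot; this is a deliberately simplified stand-in, not the patented coding method:

```python
def assign_ofc_codes(n_devices, n_chips):
    # Row d of the matrix is the chip-frequency sequence for device d:
    # frequency (d + s) mod n_chips in chip slot s. Columns then contain
    # distinct frequencies whenever n_devices <= n_chips, so no two
    # devices collide on the same frequency in the same slot.
    return [[(d + s) % n_chips for s in range(n_chips)]
            for d in range(n_devices)]

codes = assign_ofc_codes(3, 4)
```

Each row is a permutation-like sweep of the frequency set, giving every device the same number of chips but a distinct time-frequency pattern.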

  9. Evaluating Data Abstraction Assistant, a novel software application for data abstraction during systematic reviews: protocol for a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Ian J. Saldanha

    2016-11-01

    Abstract Background Data abstraction, a critical systematic review step, is time-consuming and prone to errors. Current standards for approaches to data abstraction rest on a weak evidence base. We developed the Data Abstraction Assistant (DAA), a novel software application designed to facilitate the abstraction process by allowing users to (1) view study article PDFs juxtaposed to electronic data abstraction forms linked to a data abstraction system, (2) highlight (or “pin”) the location of the text in the PDF, and (3) copy relevant text from the PDF into the form. We describe the design of a randomized controlled trial (RCT) that compares the relative effectiveness of (A) DAA-facilitated single abstraction plus verification by a second person, (B) traditional (non-DAA-facilitated) single abstraction plus verification by a second person, and (C) traditional independent dual abstraction plus adjudication to ascertain the accuracy and efficiency of abstraction. Methods This is an online, randomized, three-arm, crossover trial. We will enroll 24 pairs of abstractors (i.e., sample size is 48 participants), each pair comprising one less and one more experienced abstractor. Pairs will be randomized to abstract data from six articles, two under each of the three approaches. Abstractors will complete pre-tested data abstraction forms using the Systematic Review Data Repository (SRDR), an online data abstraction system. The primary outcomes are (1) proportion of data items abstracted that constitute an error (compared with an answer key) and (2) total time taken to complete abstraction (by two abstractors in the pair, including verification and/or adjudication). Discussion The DAA trial uses a practical design to test a novel software application as a tool to help improve the accuracy and efficiency of the data abstraction process during systematic reviews. Findings from the DAA trial will provide much-needed evidence to strengthen current recommendations for data

  10. The sound of enemies and friends in the neighborhood.

    Science.gov (United States)

    Pecher, Diane; Boot, Inge; van Dantzig, Saskia; Madden, Carol J; Huber, David E; Zeelenberg, René

    2011-01-01

    Previous studies (e.g., Pecher, Zeelenberg, & Wagenmakers, 2005) found that semantic classification performance is better for target words with orthographic neighbors that are mostly from the same semantic class (e.g., living) compared to target words with orthographic neighbors that are mostly from the opposite semantic class (e.g., nonliving). In the present study we investigated the contribution of phonology to orthographic neighborhood effects by comparing effects of phonologically congruent orthographic neighbors (book-hook) to phonologically incongruent orthographic neighbors (sand-wand). The prior presentation of a semantically congruent word produced larger effects on subsequent animacy decisions when the previously presented word was a phonologically congruent neighbor than when it was a phonologically incongruent neighbor. In a second experiment, performance differences between target words with versus without semantically congruent orthographic neighbors were larger if the orthographic neighbors were also phonologically congruent. These results support models of visual word recognition that assume an important role for phonology in cascaded access to meaning.
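The neighbor manipulations described here are typically operationalized with Coltheart's N: the orthographic neighbors of a word are the same-length words differing in exactly one letter. A minimal sketch with a toy lexicon (the word list is illustrative):

```python
def orthographic_neighbors(word, lexicon):
    # Coltheart's N neighborhood: same length, exactly one substituted letter.
    return [w for w in lexicon
            if len(w) == len(word) and w != word
            and sum(a != b for a, b in zip(w, word)) == 1]

# Toy lexicon echoing the stimuli types above: 'hook' is a phonologically
# congruent neighbor of 'book', while 'wand' is a phonologically
# incongruent neighbor of 'sand'.
lexicon = ["book", "hook", "look", "cook", "sand", "wand", "band"]
```

Given such neighbor sets, one can then tally how many neighbors share the target's semantic class or rhyme with it, mirroring the congruency manipulations in the study.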

  11. Abstract Spatial Reasoning as an Autistic Strength

    Science.gov (United States)

    Stevenson, Jennifer L.; Gernsbacher, Morton Ann

    2013-01-01

    Autistic individuals typically excel on spatial tests that measure abstract reasoning, such as the Block Design subtest on intelligence test batteries and the Raven’s Progressive Matrices nonverbal test of intelligence. Such well-replicated findings suggest that abstract spatial processing is a relative and perhaps absolute strength of autistic individuals. However, previous studies have not systematically varied reasoning level – concrete vs. abstract – and test domain – spatial vs. numerical vs. verbal, which the current study did. Autistic participants (N = 72) and non-autistic participants (N = 72) completed a battery of 12 tests that varied by reasoning level (concrete vs. abstract) and domain (spatial vs. numerical vs. verbal). Autistic participants outperformed non-autistic participants on abstract spatial tests. Non-autistic participants did not outperform autistic participants on any of the three domains (spatial, numerical, and verbal) or at either of the two reasoning levels (concrete and abstract), suggesting similarity in abilities between autistic and non-autistic individuals, with abstract spatial reasoning as an autistic strength. PMID:23533615

  12. Phonological and orthographic influences in the bouba-kiki effect.

    Science.gov (United States)

    Cuskley, Christine; Simner, Julia; Kirby, Simon

    2017-01-01

    We examine a high-profile phenomenon known as the bouba-kiki effect, in which non-word names are assigned to abstract shapes in systematic ways (e.g. rounded shapes are preferentially labelled bouba over kiki). In a detailed evaluation of the literature, we show that most accounts of the effect point to predominantly or entirely iconic cross-sensory mappings between acoustic or articulatory properties of sound and shape as the mechanism underlying the effect. However, these accounts have tended to confound the acoustic or articulatory properties of non-words with another fundamental property: their written form. We compare traditional accounts of direct audio or articulatory-visual mapping with an account in which the effect is heavily influenced by matching between the shapes of graphemes and the abstract shape targets. The results of our two studies suggest that the dominant mechanism underlying the effect for literate subjects is matching based on aligning letter curvature and shape roundedness (i.e. non-words with curved letters are matched to round shapes). We show that letter curvature is strong enough to significantly influence word-shape associations even in auditory tasks, where written word forms are never presented to participants. However, we also find an additional phonological influence in that voiced sounds are preferentially linked with rounded shapes, although this arises only in a purely auditory word-shape association task. We conclude that many previous investigations of the bouba-kiki effect may not have given appropriate consideration or weight to the influence of orthography among literate subjects.

  13. Science meeting. Abstracts

    International Nuclear Information System (INIS)

    2000-01-01

    The document is a collection of the science meeting abstracts in the fields of nuclear physics, medical sciences, chemistry, agriculture, environment, engineering, material sciences, and different aspects of energy, and presents research done in 2000 in these fields.

  14. National fuel cell seminar. Program and abstracts. [Abstracts of 40 papers

    Energy Technology Data Exchange (ETDEWEB)

    None

    1977-01-01

    Abstracts of 40 papers are presented. Topics include fuel cell systems, phosphoric acid fuel cells, molten carbonate fuel cells, solid fuel and solid electrolyte fuel cells, low temperature fuel cells, and fuel utilization. (WHK)

  15. Construct Abstraction for Automatic Information Abstraction from Digital Images

    Science.gov (United States)

    2006-05-30

    …objects and features and the names of objects and features. For example, in Figure 15 the parts of the fish could be named (the ‘mouth’…); whole objects might be labelled ‘fish-1’, ‘fish-2’, ‘fish-3’, ‘tennis shoe’, ‘tennis racquet’… …of abstraction and generality. For example, an algorithm might usefully find a polygon (blob) in an image and calculate numbers such as the…

  16. FONESYS: The FOrum and NEtwork of SYStem Thermal-Hydraulic Codes in Nuclear Reactor Thermal-Hydraulics

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, S.H., E-mail: k175ash@kins.re.kr [Korea Institute of Nuclear Safety (KINS) (Korea, Republic of); Aksan, N., E-mail: nusr.aksan@gmail.com [University of Pisa San Piero a Grado Nuclear Research Group (GRNSPG) (Italy); Austregesilo, H., E-mail: henrique.austregesilo@grs.de [Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) (Germany); Bestion, D., E-mail: dominique.bestion@cea.fr [Commissariat à l’énergie atomique et aux énergies alternatives (CEA) (France); Chung, B.D., E-mail: bdchung@kaeri.re.kr [Korea Atomic Energy Research Institute (KAERI) (Korea, Republic of); D’Auria, F., E-mail: f.dauria@ing.unipi.it [University of Pisa San Piero a Grado Nuclear Research Group (GRNSPG) (Italy); Emonot, P., E-mail: philippe.emonot@cea.fr [Commissariat à l’énergie atomique et aux énergies alternatives (CEA) (France); Gandrille, J.L., E-mail: jeanluc.gandrille@areva.com [AREVA NP (France); Hanninen, M., E-mail: markku.hanninen@vtt.fi [VTT Technical Research Centre of Finland (VTT) (Finland); Horvatović, I., E-mail: i.horvatovic@ing.unipi.it [University of Pisa San Piero a Grado Nuclear Research Group (GRNSPG) (Italy); Kim, K.D., E-mail: kdkim@kaeri.re.kr [Korea Atomic Energy Research Institute (KAERI) (Korea, Republic of); Kovtonyuk, A., E-mail: a.kovtonyuk@ing.unipi.it [University of Pisa San Piero a Grado Nuclear Research Group (GRNSPG) (Italy); Petruzzi, A., E-mail: a.petruzzi@ing.unipi.it [University of Pisa San Piero a Grado Nuclear Research Group (GRNSPG) (Italy)

    2015-01-15

    Highlights: • We briefly present the project called Forum and Network of System Thermal-Hydraulics Codes in Nuclear Reactor Thermal-Hydraulics (FONESYS). • We present the FONESYS participants and their codes. • We explain the project’s motivation, its main targets and its working modalities. • We present the FONESYS position on the project’s topics and subtopics. - Abstract: The purpose of this article is to present briefly the project called Forum and Network of System Thermal-Hydraulics Codes in Nuclear Reactor Thermal-Hydraulics (FONESYS), its participants, the motivation for the project, its main targets and its working modalities. System Thermal-Hydraulics (SYS-TH) codes, also as part of Best Estimate Plus Uncertainty (BEPU) approaches, are expected to play an increasingly relevant role in nuclear reactor technology, safety and design, and the number of code users can be expected to increase in the countries where nuclear technology is exploited. Thus, the idea of establishing a forum and a network among the code developers, with possible extension to code users, has gained major importance and value. In this framework the FONESYS initiative has been created. The main targets of FONESYS are: • To promote the use of SYS-TH codes and the application of BEPU approaches. • To establish acceptable and recognized procedures and thresholds for Verification and Validation (V and V). • To create a common ground for discussing envisaged improvements in various areas, including the user interface and the connection with other numerical tools, including Computational Fluid Dynamics (CFD) codes.

  17. Codes and curves

    CERN Document Server

    Walker, Judy L

    2000-01-01

    When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...

  18. Development and assessment of a sub-channel code applicable for trans-critical transient of SCWR

    International Nuclear Information System (INIS)

    Liu, X.J.; Yang, T.; Cheng, X.

    2013-01-01

    Highlights: • A new sub-channel code, COBRA-SC, for the SCWR is developed. • A pseudo two-phase method is employed to enable trans-critical transient calculation. • Good suitability of COBRA-SC is demonstrated by preliminary assessment. • The calculation results of COBRA-SC agree well with the ATHLET code. -- Abstract: In the last few years, extensive R and D activities have been launched covering various aspects of the supercritical water-cooled reactor (SCWR), especially thermal-hydraulic analysis. A sub-channel code plays an indispensable role in predicting the detailed thermal-hydraulic behavior of the SCWR fuel assembly. This paper develops a new version of the sub-channel code, COBRA-SC, based on the previous COBRA-IV code. Supercritical water properties and heat transfer/pressure drop correlations for supercritical pressure are implemented in this code. Moreover, in order to simulate the trans-critical transient (in which the pressure decreases from supercritical to subcritical), a pseudo two-phase method is employed in the COBRA-SC code. This is accomplished by introducing a virtual two-phase region near the pseudo-critical line, so that a smooth transition of void fraction can be realized. In addition, several heat transfer correlations for conditions just below the critical point are introduced into the code to capture the heat transfer behavior during the trans-critical transient. Experimental data from simple geometries, e.g. single tubes and small rod bundles, are used to validate and evaluate the newly developed COBRA-SC code. The predicted results show good agreement with the experimental data, demonstrating the feasibility of this code for SCWR conditions. A code-to-code comparison between COBRA-SC and ATHLET for a blowdown transient of a small fuel assembly is also presented and discussed in this paper
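    The virtual two-phase region described above can be illustrated with a small sketch: a void-fraction-like quantity rises smoothly from 0 to 1 across a narrow enthalpy band around the pseudo-critical line, keeping property evaluation continuous during a trans-critical transient. The band width and enthalpy values below are illustrative assumptions, not COBRA-SC's actual numbers.

```python
# Sketch of the pseudo two-phase idea: a virtual "void fraction" that rises
# smoothly across a band around the pseudo-critical enthalpy h_pc, so that
# blended property evaluation stays continuous. Values are illustrative only.

def virtual_void_fraction(h, h_pc, half_width=50.0):
    """h and h_pc in kJ/kg; returns a smooth 0..1 quality-like variable."""
    x = (h - (h_pc - half_width)) / (2.0 * half_width)
    x = min(1.0, max(0.0, x))            # clamp to the transition band
    return x * x * (3.0 - 2.0 * x)       # smoothstep: C1-continuous at both ends

print(virtual_void_fraction(2000.0, 2150.0))  # 0.0 (well below the band)
print(virtual_void_fraction(2150.0, 2150.0))  # 0.5 (at the pseudo-critical point)
```

    A smoothstep rather than a linear ramp keeps the first derivative continuous at the band edges, which helps the numerics of property blending.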

  19. Abstract methods in partial differential equations

    CERN Document Server

    Carroll, Robert W

    2012-01-01

    Detailed, self-contained treatment examines modern abstract methods in partial differential equations, especially abstract evolution equations. Suitable for graduate students with some previous exposure to classical partial differential equations. 1969 edition.

  20. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Lei Ye

    2009-01-01

    This paper discusses the application of adaptive modulation and adaptive-rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time- and frequency-selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed, and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10−2) and low (10−4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.
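    The subband adaptive idea can be sketched as follows: each subband independently selects the highest-order modulation whose estimated SNR meets a target-BER threshold, and the per-subband bits are then passed through the (separate or single) turbo code. The SNR thresholds below are illustrative placeholders, not the switching levels used in the paper.

```python
# Sketch: per-subband adaptive modulation selection for subband-adaptive OFDM.
# Threshold values are hypothetical placeholders for a given target BER.

SCHEMES = [  # (name, bits per symbol, minimum SNR in dB assumed to meet the target BER)
    ("BPSK", 1, 6.0),
    ("QPSK", 2, 9.0),
    ("8AMPM", 3, 12.0),
    ("16QAM", 4, 16.0),
    ("64QAM", 6, 22.0),
]

def pick_scheme(snr_db):
    """Return the highest-throughput scheme whose SNR threshold is met."""
    best = None
    for name, bits, thresh in SCHEMES:
        if snr_db >= thresh:
            best = (name, bits)
    return best  # None means the subband is left unused

def subband_throughput(snr_per_subband, code_rate=0.5):
    """Total information bits per OFDM symbol across all subbands."""
    total = 0
    for snr in snr_per_subband:
        choice = pick_scheme(snr)
        if choice is not None:
            total += choice[1]
    return total * code_rate

print(subband_throughput([5.0, 10.0, 17.0, 25.0]))  # 6.0
```

    The single-turbo-code variant would apply one encoder across the concatenated subband bits instead of one encoder per subband; the modulation selection step is the same in both systems.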

  1. Quantum Codes From Cyclic Codes Over The Ring R 2

    International Nuclear Information System (INIS)

    Altinel, Alev; Güzeltepe, Murat

    2016-01-01

    Let R2 denote the ring F2 + μF2 + υF2 + μυF2 + wF2 + μwF2 + υwF2 + μυwF2. In this study, we construct quantum codes from cyclic codes over the ring R2, for arbitrary length n, with the restrictions μ² = 0, υ² = 0, w² = 0, μυ = υμ, μw = wμ, υw = wυ and μ(υw) = (μυ)w. Also, we give a necessary and sufficient condition for cyclic codes over R2 to contain their duals. As a final point, we obtain the parameters of quantum error-correcting codes from cyclic codes over R2 and we give an example of quantum error-correcting codes from cyclic codes over R2. (paper)
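    As a quick illustration (not the paper's construction itself), arithmetic in a ring of this shape can be sketched by representing an element as a set of square-free monomials in the commuting nilpotent generators, written u, v, w below for μ, υ, w:

```python
# Sketch of arithmetic in R2 = F2 + uF2 + vF2 + uvF2 + wF2 + uwF2 + vwF2 + uvwF2,
# with u^2 = v^2 = w^2 = 0 and commuting generators. An element is a frozenset of
# monomials; each monomial is a frozenset drawn from {"u", "v", "w"}.

def add(a, b):
    # Characteristic 2: addition is symmetric difference of monomial sets.
    return a ^ b

def mul(a, b):
    result = set()
    for m1 in a:
        for m2 in b:
            if m1 & m2:
                continue        # a repeated generator squares to zero
            result ^= {m1 | m2} # coefficients live in F2
    return frozenset(result)

ONE = frozenset({frozenset()})
U = frozenset({frozenset({"u"})})
V = frozenset({frozenset({"v"})})

print(mul(U, U) == frozenset())   # True: u^2 = 0
print(mul(U, V) == mul(V, U))     # True: uv = vu
```

    The eight basis monomials correspond exactly to the eight direct summands in the definition of R2, which is why an element fits in a set of at most eight monomials.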

  2. Integration and visualization of non-coding RNA and protein interaction networks

    DEFF Research Database (Denmark)

    Junge, Alexander; Refsgaard, Jan Christian; Garde, Christian

    Non-coding RNAs (ncRNAs) fulfill a diverse set of biological functions relying on interactions with other molecular entities. The advent of new experimental and computational approaches makes it possible to study ncRNAs and their associations on an unprecedented scale. We present RAIN (RNA Associ......) co-occurrences found by text mining Medline abstracts. Each resource was assigned a reliability score by assessing its agreement with a gold standard set of microRNA-target interactions. RAIN is available at: http://rth.dk/resources/rain...

  3. A New Prime Code for Synchronous Optical Code Division Multiple-Access Networks

    Directory of Open Access Journals (Sweden)

    Huda Saleh Abbas

    2018-01-01

    A new spreading code based on a prime code for synchronous optical code-division multiple-access networks that can be used in monitoring applications has been proposed. The new code is referred to as the “extended grouped new modified prime code.” This new code has the ability to support more terminal devices than other prime codes. In addition, it patches subsequences with “0s”, leading to lower power consumption. The proposed code has an improved cross-correlation resulting in enhanced BER performance. The code construction and parameters are provided. The operating performance, using incoherent on-off keying modulation and incoherent pulse position modulation systems, has been analyzed. The performance of the code was compared with other prime codes. The results demonstrate an improved performance, and a BER floor of 10−9 was achieved.
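    For background, the basic prime-code construction that families like this one extend can be sketched as follows; the "extended grouped new modified" variant itself is not reproduced here.

```python
# The basic prime-code construction over GF(p). For a prime p, codeword C_i
# (i = 0..p-1) has length p^2 and weight p, with a pulse at chip j*p + (i*j mod p).

def prime_code(p):
    codes = []
    for i in range(p):
        word = [0] * (p * p)
        for j in range(p):
            word[j * p + (i * j) % p] = 1  # one pulse per block of p chips
        codes.append(word)
    return codes

codes = prime_code(5)
print([sum(w) for w in codes])  # [5, 5, 5, 5, 5]: each codeword has weight p
# In-phase cross-correlation between distinct codewords is 1 for this family:
print(sum(a * b for a, b in zip(codes[1], codes[3])))  # 1
```

    The low in-phase cross-correlation is what makes prime-code families attractive for synchronous OCDMA; the paper's extended code reshapes such sequences to support more terminals and to reduce the number of "1" chips, lowering power consumption.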

  4. Understanding Mixed Code and Classroom Code-Switching: Myths and Realities

    Science.gov (United States)

    Li, David C. S.

    2008-01-01

    Background: Cantonese-English mixed code is ubiquitous in Hong Kong society, and yet using mixed code is widely perceived as improper. This paper presents evidence of mixed code being socially constructed as bad language behavior. In the education domain, an EDB guideline bans mixed code in the classroom. Teachers are encouraged to stick to…

  5. Mechanical Engineering Department technical abstracts

    International Nuclear Information System (INIS)

    1984-01-01

    The Mechanical Engineering Department publishes abstracts twice a year to inform readers of the broad range of technical activities in the Department, and to promote an exchange of ideas. Details of the work covered by an abstract may be obtained by contacting the author(s). General information about the current role and activities of each of the Department's seven divisions precedes the technical abstracts. Further information about a division's work may be obtained from the division leader, whose name is given at the end of each divisional summary. The Department's seven divisions are as follows: Nuclear Test Engineering Division, Nuclear Explosives Engineering Division, Weapons Engineering Division, Energy Systems Engineering Division, Engineering Sciences Division, Magnetic Fusion Engineering Division and Materials Fabrication Division

  6. Development of a coupled code system based on system transient code, RETRAN, and 3-D neutronics code, MASTER

    International Nuclear Information System (INIS)

    Kim, K. D.; Jung, J. J.; Lee, S. W.; Cho, B. O.; Ji, S. K.; Kim, Y. H.; Seong, C. K.

    2002-01-01

    A coupled code system, RETRAN/MASTER, has been developed for best-estimate simulations of the interactions between reactor core neutron kinetics and plant thermal-hydraulics by incorporating the 3-D reactor core kinetics analysis code MASTER into the system transient code RETRAN. The soundness of the consolidated code system is confirmed by simulating the MSLB benchmark problem, developed to verify the performance of coupled kinetics and system transient codes, by OECD/NEA

  7. QR Codes 101

    Science.gov (United States)

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  8. Interdisciplinary perspectives on abstracts for information retrieval

    Directory of Open Access Journals (Sweden)

    Soon Keng Chan

    2004-10-01

    The paper examines the abstract genre from the perspectives of English for Specific Purposes (ESP) practitioners and information professionals. It aims to determine specific interdisciplinary interests in the abstract, and to explore areas of collaboration in terms of research and pedagogical practices. A focus group (FG) comprising information professionals from the Division of Information Studies, Nanyang Technological University, Singapore, convened for a discussion on the subject of abstracts and abstracting. Two major issues that have significant implications for ESP practices emerged during the discussion. While differences in terms of approach to and objectives of the abstract genre are apparent between information professionals and language professionals, the demands for specific cognitive processes involved in abstracting proved to be similar. This area of similarity provides grounds for awareness raising and collaboration between the two disciplines. While ESP practitioners need to consider adding the dimension of information science to the rhetorical and linguistic scaffolding that they have been providing to novice writers, information professionals can contribute useful insights about the qualities of abstracts that have the greatest impact in meeting end-users' needs in information search.

  9. Annual Conference Abstracts

    Science.gov (United States)

    Journal of Engineering Education, 1972

    1972-01-01

    Includes abstracts of papers presented at the 80th Annual Conference of the American Society for Engineering Education. The broad areas include aerospace, affiliate and associate member council, agricultural engineering, biomedical engineering, continuing engineering studies, chemical engineering, civil engineering, computers, cooperative…

  10. Abstracts

    Institute of Scientific and Technical Information of China (English)

    2017-01-01

    Supplementary Short Board: Orderly Cultivate the Housing Leasing Market WANG Guangtao (Former Minister of the Ministry of Construction) Abstract: In December 2016, the Central Economic Work Conference proposed that, to promote the steady and healthy development of the real estate market, China should adhere to the position that “houses are for living in, not for speculation”. At present, the development of the housing leasing market in China lags behind. It is urgent to improve the housing conditions in large cities and promote the urbanization of small and medium-sized cities. Therefore, it is imperative to innovate, supplement the short board, and accelerate the development of the housing leasing market.

  11. Abstracting audit data for lightweight intrusion detection

    KAUST Repository

    Wang, Wei

    2010-01-01

    High speed of processing massive audit data is crucial for an anomaly Intrusion Detection System (IDS) to achieve real-time performance during the detection. Abstracting audit data is a potential solution to improve the efficiency of data processing. In this work, we propose two strategies of data abstraction in order to build a lightweight detection model. The first strategy is exemplar extraction and the second is attribute abstraction. Two clustering algorithms, Affinity Propagation (AP) as well as traditional k-means, are employed to extract the exemplars, and Principal Component Analysis (PCA) is employed to abstract important attributes (a.k.a. features) from the audit data. Real HTTP traffic data collected in our institute as well as KDD 1999 data are used to validate the two strategies of data abstraction. The extensive test results show that the process of exemplar extraction significantly improves the detection efficiency and has a better detection performance than PCA in data abstraction. © 2010 Springer-Verlag.
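    The attribute-abstraction step can be sketched with PCA via an SVD, assuming the audit records have already been converted to numeric feature vectors (this stands in for the paper's pipeline, not its exact implementation):

```python
# Sketch of attribute abstraction: project n x d audit records onto the top-k
# principal components, reducing the feature dimension before detection.
import numpy as np

def pca_abstract(X, k):
    """Return the k-dimensional PCA representation of the rows of X."""
    Xc = X - X.mean(axis=0)                            # center each attribute
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt = components
    return Xc @ Vt[:k].T                               # n x k abstracted data

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))   # stand-in for numeric audit features
Z = pca_abstract(X, 3)
print(Z.shape)  # (100, 3)
```

    A detection model is then trained on the abstracted representation Z instead of the raw attributes, which is what yields the efficiency gain the abstract describes.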

  12. Some Families of Asymmetric Quantum MDS Codes Constructed from Constacyclic Codes

    Science.gov (United States)

    Huang, Yuanyuan; Chen, Jianzhang; Feng, Chunhui; Chen, Riqing

    2018-02-01

    Quantum maximal-distance-separable (MDS) codes that satisfy the quantum Singleton bound with different lengths have been constructed by some researchers. In this paper, seven families of asymmetric quantum MDS codes are constructed by using constacyclic codes. We weaken the condition of Hermitian-dual-containing codes that can be applied to construct asymmetric quantum MDS codes with parameters [[n,k,dz/dx

  13. Theoretical Atomic Physics code development II: ACE: Another collisional excitation code

    International Nuclear Information System (INIS)

    Clark, R.E.H.; Abdallah, J. Jr.; Csanak, G.; Mann, J.B.; Cowan, R.D.

    1988-12-01

    A new computer code for calculating collisional excitation data (collision strengths or cross sections) using a variety of models is described. The code uses data generated by the Cowan Atomic Structure code or CATS for the atomic structure. Collisional data are placed on a random access file and can be displayed in a variety of formats using the Theoretical Atomic Physics Code or TAPS. All of these codes are part of the Theoretical Atomic Physics code development effort at Los Alamos. 15 refs., 10 figs., 1 tab

  14. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate...... (LDPCA) codes in a DSC scheme with feed-back. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  15. Automation of RELAP5 input calibration and code validation using genetic algorithm

    International Nuclear Information System (INIS)

    Phung, Viet-Anh; Kööp, Kaspar; Grishchenko, Dmitry; Vorobyev, Yury; Kudinov, Pavel

    2016-01-01

    Highlights: • Automated input calibration and code validation using a genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously the experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in the application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs), taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and the dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so-called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used a genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and of the respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the
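    A minimal sketch of the calibration loop described above, with a toy surrogate standing in for the RELAP5 runs and illustrative SRQ targets and weights:

```python
# Minimal GA sketch of input calibration: search for input parameters that
# minimize the weighted discrepancy between simulated and "experimental" SRQs.
# The surrogate model, targets and weights are toy stand-ins, not the RELAP5 setup.
import random

random.seed(1)

def simulate(params):
    # Hypothetical surrogate for the system code: two uncertain inputs mapped
    # to two SRQs (think maximum flow rate and oscillation period).
    a, b = params
    return (2.0 * a + b, a * b)

TARGET = (3.0, 1.0)   # "experimental" SRQ values
WEIGHTS = (1.0, 1.0)  # relative SRQ weights in the fitness function

def fitness(params):
    sim = simulate(params)
    # Negative weighted sum of squared normalized discrepancies (higher = better).
    return -sum(w * ((s - t) / t) ** 2 for w, s, t in zip(WEIGHTS, sim, TARGET))

def evolve(pop_size=40, generations=60):
    pop = [(random.uniform(0.0, 3.0), random.uniform(0.0, 3.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # elitism: the best half survives
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            # Arithmetic crossover plus Gaussian mutation.
            children.append(tuple((x + y) / 2.0 + random.gauss(0.0, 0.1)
                                  for x, y in zip(p1, p2)))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(round(-fitness(best), 4))  # residual weighted discrepancy
```

    Normalizing each SRQ by its target value, as in the fitness function here, is the step the abstract flags as important: without it, an SRQ with large absolute magnitude would dominate the calibration.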

  16. Automation of RELAP5 input calibration and code validation using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Phung, Viet-Anh, E-mail: vaphung@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Kööp, Kaspar, E-mail: kaspar@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Vorobyev, Yury, E-mail: yura3510@gmail.com [National Research Center “Kurchatov Institute”, Kurchatov square 1, Moscow 123182 (Russian Federation); Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden)

    2016-04-15

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the

  17. Syntheses by rules of the speech signal in its amplitude-time representation - melody study - phonetic, translation program

    International Nuclear Information System (INIS)

    Santamarina, Carole

    1975-01-01

    The present paper deals with real-time speech synthesis implemented on a minicomputer. A first program translates the orthographic text into a string of phonetic codes, which is then processed by the synthesis program itself. The method used, synthesis by rules, directly computes the speech signal in its amplitude-time representation. Emphasis has been put on special cases (diphthongs, 'e muet', consonant-consonant transitions) and on the implementation of rhythm and melody. (author) [fr]
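    The first stage (orthographic text to phonetic codes) can be sketched as an ordered rule table; the rules and code inventory below are illustrative and far smaller than a real French letter-to-sound rule set.

```python
# Sketch of a rule-based orthographic-to-phonetic translator. Rules are tried
# in order, longest orthographic pattern first; codes are illustrative only.

RULES = [
    ("ch", "S"),   # "ch" as in "chat"
    ("ou", "u"),
    ("on", "O~"),  # nasal vowel
    ("a", "a"),
    ("t", "t"),
    ("l", "l"),
    ("e", "@"),    # schwa / "e muet"
]

def to_phonetic(word):
    codes = []
    i = 0
    while i < len(word):
        for pattern, code in RULES:
            if word.startswith(pattern, i):
                codes.append(code)
                i += len(pattern)
                break
        else:
            i += 1  # letters with no rule are skipped in this sketch
    return codes

print(to_phonetic("chalet"))  # ['S', 'a', 'l', '@', 't']
```

    A real rule set would also condition on context (word boundaries, following consonants) to handle the special cases the abstract mentions, such as the silent 'e muet'.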

  18. Beyond the abstractions?

    DEFF Research Database (Denmark)

    Olesen, Henning Salling

    2006-01-01

      The anniversary of the International Journal of Lifelong Education takes place in the middle of a conceptual landslide from lifelong education to lifelong learning. Contemporary discourses of lifelong learning etc are however abstractions behind which new functions and agendas for adult education...

  19. Monadic abstract interpreters

    DEFF Research Database (Denmark)

    Sergey, Ilya; Devriese, Dominique; Might, Matthew

    2013-01-01

    to instrument an analysis with high-level strategies for improving precision and performance, such as abstract garbage collection and widening. While the paper itself runs the development for continuation-passing style, our generic implementation replays it for direct-style lambda-calculus and Featherweight Java...

  20. A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage Scheme

    Directory of Open Access Journals (Sweden)

    Winch Peter J

    2011-04-01

    Abstract Background In the Thai Universal Coverage health insurance scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group-based retrospective payment, for which the quality of the diagnosis and procedure codes is crucial. However, there has been limited understanding of which health care professions are involved and how the diagnosis and procedure coding is actually done within hospital settings. The objective of this study is to detail the hospital coding structure and process, and to describe the roles of key hospital staff and other related internal dynamics in Thai hospitals that affect the quality of data submitted for inpatient care reimbursement. Methods The research involved qualitative semi-structured interviews with 43 participants at 10 hospitals chosen to represent a range of hospital sizes (small/medium/large), locations (urban/rural), and types (public/private). Results Hospital coding practice has structural and process components. While the structural component includes human resources, hospital committees, and information technology infrastructure, the process component comprises all activities from patient discharge to submission of the diagnosis and procedure codes. At least eight health care professional disciplines are involved in the coding process, which comprises seven major steps, each of which involves different hospital staff: (1) Discharge Summarization, (2) Completeness Checking, (3) Diagnosis and Procedure Coding, (4) Code Checking, (5) Relative Weight Challenging, (6) Coding Report, and (7) Internal Audit. The hospital coding practice can be affected by at least five main factors: (1) Internal Dynamics, (2) Management Context, (3) Financial Dependency, (4) Resource and Capacity, and (5) External Factors. Conclusions Hospital coding practice comprises both structural and process components, involves many health care professional disciplines, and varies greatly across hospitals as a result of five main factors.

  1. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    Directory of Open Access Journals (Sweden)

    Monteagudo Ángel

    2011-02-01

    Abstract Background As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid by another. There are two basic approaches to measuring the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results Here we used a genetic algorithm to search for better-adapted hypothetical codes and, as a way to gauge the difficulty of finding such alternative codes, to clearly situate the canonical code in the fitness landscape. This novel use of evolutionary computing provides a new perspective in the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum under both models, when the more realistic model of codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Conclusions Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding this optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the

  2. Abstractions for Fault-Tolerant Distributed System Verification

    Science.gov (United States)

    Pike, Lee S.; Maddalon, Jeffrey M.; Miner, Paul S.; Geser, Alfons

    2004-01-01

    Four kinds of abstraction for the design and analysis of fault tolerant distributed systems are discussed. These abstractions concern system messages, faults, fault masking voting, and communication. The abstractions are formalized in higher order logic, and are intended to facilitate specifying and verifying such systems in higher order theorem provers.

  3. Is a Picture Worth a Thousand Words? Using Images to Create a Concreteness Effect for Abstract Words: Evidence from Beginning L2 Learners of Spanish

    Science.gov (United States)

    Farley, Andrew; Pahom, Olga; Ramonda, Kris

    2014-01-01

    This study examines the lexical representation and recall of abstract words by beginning L2 learners of Spanish in the light of the predictions of the dual coding theory (Paivio 1971; Paivio and Desrochers 1980). Ninety-seven learners (forty-four males and fifty-three females) were randomly placed in the picture or non-picture group and taught…

  4. Coding, cryptography and combinatorics

    CERN Document Server

    Niederreiter, Harald; Xing, Chaoping

    2004-01-01

    It has long been recognized that there are fascinating connections between coding theory, cryptology, and combinatorics. Therefore it seemed desirable to us to organize a conference that brings together experts from these three areas for a fruitful exchange of ideas. We decided on a venue in the Huang Shan (Yellow Mountain) region, one of the most scenic areas of China, so as to provide the additional inducement of an attractive location. The conference was planned for June 2003 with the official title Workshop on Coding, Cryptography and Combinatorics (CCC 2003). Those who are familiar with events in East Asia in the first half of 2003 can guess what happened in the end, namely the conference had to be cancelled in the interest of the health of the participants. The SARS epidemic posed too serious a threat. At the time of the cancellation, the organization of the conference was at an advanced stage: all invited speakers had been selected and all abstracts of contributed talks had been screened by the p...

  5. Error-correction coding and decoding bounds, codes, decoders, analysis and applications

    CERN Document Server

    Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak

    2017-01-01

    This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...

  6. Abstract Interpretation as a Programming Language

    DEFF Research Database (Denmark)

    Rosendahl, Mads

    2013-01-01

    examine different programming styles and ways to represent states. Abstract interpretation is primarily a technique for derivation and specification of program analysis. As with denotational semantics we may also view abstract interpretations as programs and examine the implementation. The main focus... in this paper is to show that results from higher-order strictness analysis may be used more generally as fixpoint operators for higher-order functions over lattices and thus provide a technique for immediate implementation of a large class of abstract interpretations. Furthermore, it may be seen...
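
    The idea of running strictness analysis as a fixpoint computation can be made concrete with a small sketch (the factorial example and the encoding of the two-point domain are an illustration, not taken from the paper): abstract values live in the lattice {0, 1}, where 0 means "definitely undefined" and 1 "possibly defined", and the analysis result is the least fixpoint of the abstract equations.

```python
def lfp(f, bottom):
    """Least fixpoint by Kleene iteration over a finite lattice."""
    x = bottom
    while (y := f(x)) != x:
        x = y
    return x

# Strictness analysis of  fac n = if n == 0 then 1 else n * fac (n - 1)
# over the two-point domain {0, 1}. An abstract function is a table
# {0: _, 1: _}; cond#(b, t, e) = b & (t | e), and the strict primitives
# (==, *, -) all abstract to &.
def fac_step(fac_abs):
    def body(n):
        b = n & 1                  # n == 0  needs n (the constant 0 is defined)
        t = 1                      # the constant 1 is always defined
        e = n & fac_abs[n & 1]     # n * fac (n - 1); (n - 1)# = n & 1
        return b & (t | e)
    return {0: body(0), 1: body(1)}

fac_abs = lfp(fac_step, {0: 0, 1: 0})
print(fac_abs)   # the identity table {0: 0, 1: 1}: fac is strict
```

    The generic `lfp` operator is exactly the reusable piece the abstract alludes to: any monotone abstract-interpretation equation over a finite lattice can be solved with it.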

  7. Recent development and application of a new safety analysis code for fusion reactors

    Energy Technology Data Exchange (ETDEWEB)

    Merrill, Brad J., E-mail: Brad.Merrill@inl.gov; Humrickhouse, Paul W.; Shimada, Masashi

    2016-11-01

    Highlights: • This paper presents recent code development activities for the MELCOR for fusion and Tritium Migration Analysis Program computer codes at the Idaho National Engineering Laboratory. • The capabilities of these computer codes are being merged into a single safety analysis tool for fusion reactor accidents. • The results of benchmarking these codes against previous code versions are presented by the authors of this paper. • This new capability is applied to study the tritium inventory and permeation rate for a water-cooled tungsten divertor with neutron damage at 0.3 dpa. - Abstract: This paper describes the recent progress made in the development of two codes for fusion reactor safety assessments at the Idaho National Laboratory (INL): MELCOR for fusion and the Tritium Migration Analysis Program (TMAP). During the ITER engineering design activity (EDA), the INL Fusion Safety Program (FSP) modified the MELCOR 1.8.2 code for fusion applications to perform ITER thermal hydraulic safety analyses. Because MELCOR has undergone many improvements at SNL-NM since version 1.8.2 was released, the INL FSP recently imported these same fusion modifications into the MELCOR 1.8.6 code, along with the multiple-fluids modifications of MELCOR 1.8.5 for fusion used in US advanced fusion reactor design studies. TMAP has also been under development for several decades at the INL by the FSP. TMAP treats multi-species surface absorption and diffusion in composite materials with dislocation traps, plus the movement of these species from room to room by fluid flow within a given facility. Recently, TMAP was updated to consider multiple trap site types to allow the simulation of experimental data from neutron-irradiated tungsten. The natural development path for both of these codes is to merge their capabilities into one computer code to provide a more comprehensive safety tool for analyzing accidents in fusion reactors. In this paper we detail recent developments in this
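
    The simplest version of the permeation problem TMAP solves, one-dimensional diffusion through a slab with fixed surface concentrations, can be sketched with an explicit finite-difference march to steady state (the geometry, material numbers and scheme below are illustrative assumptions; TMAP's actual solver additionally handles traps and multi-species surface processes). At steady state the permeation flux approaches D·c0/L.

```python
# explicit finite-difference sketch of 1-D diffusion through a slab
D, L, c0 = 1.0, 1.0, 1.0          # diffusivity, thickness, upstream conc. (arbitrary units)
N = 21
dx = L / (N - 1)
dt = 0.25 * dx * dx / D           # stable explicit time step (coefficient 0.25 < 0.5)
c = [0.0] * N
c[0] = c0                         # upstream face; downstream face held at 0

for _ in range(8000):             # march well past the diffusion time L**2 / D
    new = c[:]
    for i in range(1, N - 1):
        new[i] = c[i] + D * dt / dx**2 * (c[i+1] - 2 * c[i] + c[i-1])
    c = new

flux = D * (c[-2] - c[-1]) / dx   # downstream permeation flux
print(f"steady permeation flux {flux:.3f} (analytic D*c0/L = {D * c0 / L:.3f})")
```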

  8. Recent development and application of a new safety analysis code for fusion reactors

    International Nuclear Information System (INIS)

    Merrill, Brad J.; Humrickhouse, Paul W.; Shimada, Masashi

    2016-01-01

    Highlights: • This paper presents recent code development activities for the MELCOR for fusion and Tritium Migration Analysis Program computer codes at the Idaho National Engineering Laboratory. • The capabilities of these computer codes are being merged into a single safety analysis tool for fusion reactor accidents. • The results of benchmarking these codes against previous code versions are presented by the authors of this paper. • This new capability is applied to study the tritium inventory and permeation rate for a water-cooled tungsten divertor with neutron damage at 0.3 dpa. - Abstract: This paper describes the recent progress made in the development of two codes for fusion reactor safety assessments at the Idaho National Laboratory (INL): MELCOR for fusion and the Tritium Migration Analysis Program (TMAP). During the ITER engineering design activity (EDA), the INL Fusion Safety Program (FSP) modified the MELCOR 1.8.2 code for fusion applications to perform ITER thermal hydraulic safety analyses. Because MELCOR has undergone many improvements at SNL-NM since version 1.8.2 was released, the INL FSP recently imported these same fusion modifications into the MELCOR 1.8.6 code, along with the multiple-fluids modifications of MELCOR 1.8.5 for fusion used in US advanced fusion reactor design studies. TMAP has also been under development for several decades at the INL by the FSP. TMAP treats multi-species surface absorption and diffusion in composite materials with dislocation traps, plus the movement of these species from room to room by fluid flow within a given facility. Recently, TMAP was updated to consider multiple trap site types to allow the simulation of experimental data from neutron-irradiated tungsten. The natural development path for both of these codes is to merge their capabilities into one computer code to provide a more comprehensive safety tool for analyzing accidents in fusion reactors. In this paper we detail recent developments in this

  9. Finite mixture models for sensitivity analysis of thermal hydraulic codes for passive safety systems analysis

    Energy Technology Data Exchange (ETDEWEB)

    Di Maio, Francesco, E-mail: francesco.dimaio@polimi.it [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Nicola, Giancarlo [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Zio, Enrico [Energy Department, Politecnico di Milano, Via La Masa 34, 20156 Milano (Italy); Chair on System Science and Energetic Challenge Fondation EDF, Ecole Centrale Paris and Supelec, Paris (France); Yu, Yu [School of Nuclear Science and Engineering, North China Electric Power University, 102206 Beijing (China)

    2015-08-15

    Highlights: • Uncertainties of TH codes affect the system failure probability quantification. • We present Finite Mixture Models (FMMs) for sensitivity analysis of TH codes. • FMMs approximate the pdf of the output of a TH code with a limited number of simulations. • The approach is tested on a Passive Containment Cooling System of an AP1000 reactor. • The novel approach outperforms a standard variance decomposition method. - Abstract: For safety analysis of Nuclear Power Plants (NPPs), Best Estimate (BE) Thermal Hydraulic (TH) codes are used to predict system response in normal and accidental conditions. The assessment of the uncertainties of TH codes is a critical issue for system failure probability quantification. In this paper, we consider passive safety systems of advanced NPPs and present a novel approach to Sensitivity Analysis (SA). The approach is based on Finite Mixture Models (FMMs), which approximate the probability density function (i.e., the uncertainty) of the output of the passive safety system TH code with a limited number of simulations. The proposed SA method keeps the computational cost low: an Expectation Maximization (EM) algorithm is used to calculate the saliency of the TH code input variables, identifying those that most affect the system functional failure. The novel approach is compared with a standard variance decomposition method on a case study considering a Passive Containment Cooling System (PCCS) of an AP1000 Advanced Pressurized Reactor.
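
    The core FMM step, fitting a mixture to code-output samples with Expectation Maximization, can be sketched as follows (the two-component one-dimensional Gaussian mixture and the synthetic bimodal data are illustrative assumptions; the paper's saliency-based sensitivity ranking is not reproduced here).

```python
import math
import random

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def em_fit(xs, k=2, iters=80):
    """Plain EM for a one-dimensional Gaussian mixture model."""
    w = [1.0 / k] * k                                            # mixture weights
    mu = [min(xs) + (max(xs) - min(xs)) * j / (k - 1) for j in range(k)]
    sd = [1.0] * k
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample
        resp = []
        for x in xs:
            p = [w[j] * normal_pdf(x, mu[j], sd[j]) for j in range(k)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: re-estimate weights, means and standard deviations
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / len(xs)
            mu[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var = sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, xs)) / nj
            sd[j] = max(math.sqrt(var), 1e-6)
    return w, mu, sd

# synthetic stand-in for TH code output: a bimodal response (two regimes)
random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(300)] + \
     [random.gauss(6.0, 1.0) for _ in range(300)]
w, mu, sd = em_fit(xs)
for j in sorted(range(len(mu)), key=mu.__getitem__):
    print(f"component: weight {w[j]:.2f}, mean {mu[j]:.2f}, std {sd[j]:.2f}")
```

    The appeal of the FMM idea is visible even in this toy: a handful of fitted components summarizes the whole output pdf, so far fewer code runs are needed than for a histogram of comparable fidelity.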

  10. Nuclear medicine. Abstracts

    International Nuclear Information System (INIS)

    Anon.

    2000-01-01

    This issue of the journal contains the abstracts of the 183 conference papers as well as 266 posters presented at the conference. Subject fields covered are: Neurology, psychology, oncology, pediatrics, radiopharmacy, endocrinology, EDP, measuring equipment and methods, radiological protection, cardiology, and therapy. (orig./CB) [de

  11. Validation and application of the system code ATHLET-CD for BWR severe accident analyses

    Energy Technology Data Exchange (ETDEWEB)

    Di Marcello, Valentino, E-mail: valentino.marcello@kit.edu; Imke, Uwe; Sanchez, Victor

    2016-10-15

    Highlights: • We present the application of the system code ATHLET-CD for BWR safety analyses. • Validation of core in-vessel models is performed based on KIT CORA experiments. • An SB-LOCA scenario is simulated on a generic German BWR plant up to vessel failure. • Different core reflooding possibilities are investigated to mitigate the accident consequences. • ATHLET-CD modelling features reflect the current state of the art of severe accident codes. - Abstract: This paper is aimed at the validation and application of the system code ATHLET-CD for the simulation of severe accident phenomena in Boiling Water Reactors (BWR). The corresponding models for core degradation behaviour, e.g. oxidation, melting and relocation of core structural components, are validated against experimental data available from the CORA-16 and -17 bundle tests. Model weaknesses are discussed along with needs for further code improvements. With the validated ATHLET-CD code, calculations are performed to assess the code capabilities for the prediction of in-vessel late-phase core behaviour and reflooding of damaged fuel rods. For this purpose, a small-break LOCA scenario for a generic German BWR with postulated multiple failures of the safety systems was selected. In the analysis, accident management measures represented by cold water injection into the damaged reactor core are addressed to investigate their efficacy in avoiding or delaying the failure of the reactor pressure vessel. Results show that ATHLET-CD is applicable to the description of BWR plant behaviour, with reliable physical models and numerical methods adopted for the description of key in-vessel phenomena.

  12. Error floor behavior study of LDPC codes for concatenated codes design

    Science.gov (United States)

    Chen, Weigang; Yin, Liuguo; Lu, Jianhua

    2007-11-01

    Error floor behavior of low-density parity-check (LDPC) codes using quantized decoding algorithms is statistically studied with experimental results on a hardware evaluation platform. The results present the distribution of the residual errors after decoding failure and reveal that the number of residual error bits in a codeword is usually very small under the quantized sum-product (SP) algorithm. Therefore, an LDPC code may serve as the inner code in a concatenated coding system with a high-rate outer code, and thus an ultra-low error floor can be achieved. This conclusion is also verified by the experimental results.
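
    The concatenation argument can be made quantitative with a short sketch (the numbers are illustrative assumptions, not from the paper's experiments): if the inner LDPC decoder leaves residual bit errors independently with small probability p, an outer code over n bits that corrects up to t errors fails with probability sum over k > t of C(n, k) p^k (1-p)^(n-k), which drops steeply with t.

```python
from math import comb

def outer_block_failure(n, t, p):
    """P(more than t residual bit errors in an n-bit outer codeword),
    assuming residual errors after the inner LDPC decoder are independent."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1, n + 1))

# illustrative: outer block length and residual bit-error rate after the inner code
n, p = 1000, 1e-4
for t in (0, 1, 2, 4, 8):
    print(f"t = {t}: outer failure probability {outer_block_failure(n, t, p):.3e}")
```

    Even a modest correction capability in the outer code pushes the block failure probability down by orders of magnitude, which is why the small residual-error counts observed after SP decoding make an ultra-low error floor attainable.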

  13. Abstract Machines for Programming Language Implementation

    NARCIS (Netherlands)

    Diehl, Stephan; Hartel, Pieter H.; Sestoft, Peter

    We present an extensive, annotated bibliography of the abstract machines designed for each of the main programming paradigms (imperative, object oriented, functional, logic and concurrent). We conclude that whilst a large number of efficient abstract machines have been designed for particular

  14. Writing a Structured Abstract for the Thesis

    Science.gov (United States)

    Hartley, James

    2010-01-01

    This article presents the author's suggestions on how to improve thesis abstracts. The author describes two books on writing abstracts: (1) "Creating Effective Conference Abstracts and Posters in Biomedicine: 500 tips for Success" (Fraser, Fuller & Hutber, 2009), a compendium of clear advice--a must book to have in one's hand as one prepares a…

  15. When abstraction does not increase stereotyping : Preparing for intragroup communication enables abstract construal of stereotype-inconsistent information

    NARCIS (Netherlands)

    Greijdanus, Hedy; Postmes, Tom; Gordijn, Ernestine H.; van Zomeren, Martijn

    2014-01-01

    Two experiments investigated when perceivers can construe stereotype-inconsistent information abstractly (i.e., interpret observations as generalizable) and whether stereotype-consistency delimits the positive relation between abstract construal level and stereotyping. Participants (N1=104, N2=83)

  16. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    1999-01-01

    The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated on the example of the WIMSD code, one of the most popular tools for reactor calculations. Most of the approaches discussed here can be easily adapted to any other lattice code. The description of the code assumes basic knowledge of reactor lattices, at the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code the reader is directed to the detailed descriptions cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors arising from different sources of library data. (author)
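
    The homogenisation procedure mentioned above amounts to preserving reaction rates: the homogenized cross-section is the flux-volume-weighted average of the region-wise cross-sections. A minimal sketch (the three-region data are invented for illustration, not WIMSD output):

```python
def homogenize(sigma, flux, volume):
    """Flux-volume-weighted homogenized cross-section: preserves the total
    reaction rate  sum_i sigma_i * flux_i * V_i  over the lattice cell."""
    rate = sum(s * f * v for s, f, v in zip(sigma, flux, volume))
    fv = sum(f * v for f, v in zip(flux, volume))
    return rate / fv

# illustrative three-region cell: fuel, clad, moderator (units arbitrary)
sigma = [0.30, 0.01, 0.02]    # macroscopic cross-sections (1/cm)
flux = [0.8, 0.9, 1.0]        # region-averaged fluxes (normalized)
volume = [1.0, 0.3, 2.5]      # region volumes (cm^3)
print(f"homogenized cross-section: {homogenize(sigma, flux, volume):.4f} 1/cm")
```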

  17. CodeArmor : Virtualizing the Code Space to Counter Disclosure Attacks

    NARCIS (Netherlands)

    Chen, Xi; Bos, Herbert; Giuffrida, Cristiano

    2017-01-01

    Code diversification is an effective strategy to prevent modern code-reuse exploits. Unfortunately, diversification techniques are inherently vulnerable to information disclosure. Recent diversification-aware ROP exploits have demonstrated that code disclosure attacks are a realistic threat, with an

  18. Geometry of abstraction in quantum computation

    NARCIS (Netherlands)

    Pavlovic, Dusko; Abramsky, S.; Mislove, M.W.

    2012-01-01

    Quantum algorithms are sequences of abstract operations, performed on non-existent computers. They are in obvious need of categorical semantics. We present some steps in this direction, following earlier contributions of Abramsky, Coecke and Selinger. In particular, we analyze function abstraction

  19. The Abstraction Engine

    DEFF Research Database (Denmark)

    Fortescue, Michael David

    The main thesis of this book is that abstraction, far from being confined to higher forms of cognition, language and logical reasoning, has actually been a major driving force throughout the evolution of creatures with brains. It is manifest in emotive as well as rational thought. Wending its way th...

  20. Transport safety research abstracts. No. 1

    International Nuclear Information System (INIS)

    1991-07-01

    The Transport Safety Research Abstracts is a collection of reports from Member States of the International Atomic Energy Agency, and other international organizations, on research in progress or just completed in the area of safe transport of radioactive material. The main aim of TSRA is to draw attention to work that is about to be published, thus enabling interested parties to obtain further information through direct correspondence with the investigators. Information contained in this issue covers work being undertaken in 6 Member States and contracted by 1 international organization; it is hoped that, with succeeding issues, TSRA will be able to widen this base. TSRA is modelled after other IAEA publications describing work in progress in other programme areas, namely Health Physics Research Abstracts (No. 14 was published in 1989), Waste Management Research Abstracts (No. 20 was published in 1990), and Nuclear Safety Research Abstracts (No. 2 was published in 1990)