WorldWideScience

Sample records for previously learned codes

  1. Code-Mixing and Code Switching in The Process of Learning

    Directory of Open Access Journals (Sweden)

    Diyah Atiek Mustikawati

    2016-09-01

    Full Text Available This study aimed to describe the forms of code switching and code mixing found in teaching and learning activities in the classroom, and to determine the factors influencing those forms. The research is a descriptive qualitative case study conducted at Al Mawaddah Boarding School, Ponorogo. Based on the analysis and discussion stated in the previous chapters, code mixing and code switching in learning activities at Al Mawaddah Boarding School occur among Javanese, Arabic, English, and Indonesian, through the insertion of words, phrases, idioms, nouns, adjectives, clauses, and sentences. The deciding factors for code mixing in the learning process include: identification of the role, the desire to explain and interpret, and sourcing from the original language and its variations or from a foreign language. The deciding factors for code switching in the learning process include: the speaker (O1), the interlocutor (O2), the presence of a third person (O3), the topic of conversation, evoking a sense of humour, and prestige. The significance of this study is to allow readers to see the use of language in a multilingual society, especially at Al Mawaddah Boarding School, with regard to the rules and characteristic variation of language in classroom teaching and learning activities. Furthermore, the results of this research will provide input to the ustadz/ustadzah and students for developing oral communication skills and effective teaching and learning strategies in boarding schools.

  2. Code-specific learning rules improve action selection by populations of spiking neurons.

    Science.gov (United States)

    Friedrich, Johannes; Urbanczik, Robert; Senn, Walter

    2014-08-01

    Population coding is widely regarded as a key mechanism for achieving reliable behavioral decisions. We previously introduced reinforcement learning for population-based decision making by spiking neurons. Here we generalize population reinforcement learning to spike-based plasticity rules that take account of the postsynaptic neural code. We consider spike/no-spike, spike count and spike latency codes. The multi-valued and continuous-valued features in the postsynaptic code allow for a generalization of binary decision making to multi-valued decision making and continuous-valued action selection. We show that code-specific learning rules speed up learning for both discrete classification and continuous regression tasks. With the suggested learning rules, learning also speeds up with increasing population size, in contrast to standard reinforcement learning rules. Continuous action selection is further shown to explain realistic learning speeds in the Morris water maze. Finally, we introduce the concept of action perturbation, as opposed to the classical weight- or node-perturbation, as an exploration mechanism underlying reinforcement learning. Exploration in the action space greatly increases the speed of learning as compared to exploration in the neuron or weight space.
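    The action-perturbation idea in this abstract can be sketched in a few lines. The toy below is our own construction (the linear readout, quadratic reward, and learning rate are all assumptions, not the authors' spiking model): the action itself is perturbed, and the reward change attributable to the perturbation drives the weight update.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup: a linear readout maps input x to a continuous action a = W @ x,
    # and reward is highest when the action matches a hidden target readout.
    def reward(action, target):
        return -np.sum((action - target) ** 2)

    dim_in, dim_act = 5, 2
    W = np.zeros((dim_act, dim_in))
    W_target = rng.normal(size=(dim_act, dim_in))

    lr, sigma = 0.02, 0.1
    for _ in range(5000):
        x = rng.normal(size=dim_in)
        a = W @ x
        noise = sigma * rng.normal(size=dim_act)   # perturb the ACTION, not the weights
        r_perturbed = reward(a + noise, W_target @ x)
        r_baseline = reward(a, W_target @ x)       # unperturbed baseline
        # Correlate the reward gain with the action perturbation.
        W += lr * (r_perturbed - r_baseline) * np.outer(noise, x) / sigma**2

    print(np.mean((W - W_target) ** 2))  # approaches 0 as the readout is learned
    ```

    In expectation this update follows the reward gradient in action space, which is why it needs far fewer perturbed dimensions than weight-space exploration.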

  3. When sparse coding meets ranking: a joint framework for learning sparse codes and ranking scores

    KAUST Repository

    Wang, Jim Jing-Yan

    2017-06-28

    Sparse coding, which represents a data point as a sparse reconstruction code with regard to a dictionary, has been a popular data representation method. Meanwhile, in database retrieval problems, learning the ranking scores from data points plays an important role. Up to now, these two problems have always been considered separately, assuming that data coding and ranking are two independent and irrelevant problems. However, is there any internal relationship between sparse coding and ranking score learning? If yes, how to explore and make use of this internal relationship? In this paper, we try to answer these questions by developing the first joint sparse coding and ranking score learning algorithm. To explore the local distribution in the sparse code space, and also to bridge coding and ranking problems, we assume that in the neighborhood of each data point, the ranking scores can be approximated from the corresponding sparse codes by a local linear function. By considering the local approximation error of ranking scores, the reconstruction error and sparsity of sparse coding, and the query information provided by the user, we construct a unified objective function for learning of sparse codes, the dictionary and ranking scores. We further develop an iterative algorithm to solve this optimization problem.
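    The unified objective described above can be sketched as follows. This is a simplified reading: all shapes and trade-off weights are assumed, and the paper's neighborhood-local linear rankers are collapsed into a single global linear map for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical sizes: n points in d dimensions, a dictionary of k atoms,
    # sparse codes S (one column per point), and one ranking score per point.
    d, k, n = 8, 12, 30
    X = rng.normal(size=(d, n))                               # data points
    D = rng.normal(size=(d, k))                               # dictionary
    S = rng.normal(size=(k, n)) * (rng.random((k, n)) < 0.2)  # sparse codes
    f = rng.normal(size=n)                                    # ranking scores
    w, b = rng.normal(size=k), 0.0                            # linear ranker (simplified)

    alpha, beta = 0.1, 0.1  # trade-off weights (assumed)

    def joint_objective(X, D, S, f, w, b):
        recon = np.sum((X - D @ S) ** 2)           # sparse-coding reconstruction error
        sparsity = np.sum(np.abs(S))               # L1 sparsity of the codes
        rank_fit = np.sum((f - (w @ S + b)) ** 2)  # scores approximated from codes
        return recon + alpha * sparsity + beta * rank_fit

    print(joint_objective(X, D, S, f, w, b))
    ```

    An alternating (iterative) scheme like the paper's would minimize this objective over S, D, f, and the ranker in turn, holding the other blocks fixed.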

  4. Learning and coding in biological neural networks

    Science.gov (United States)

    Fiete, Ila Rani

    How can large groups of neurons that locally modify their activities learn to collectively perform a desired task? Do studies of learning in small networks tell us anything about learning in the fantastically large collection of neurons that make up a vertebrate brain? What factors do neurons optimize by encoding sensory inputs or motor commands in the way they do? In this thesis I present a collection of four theoretical works: each of the projects was motivated by specific constraints and complexities of biological neural networks, as revealed by experimental studies; together, they aim to partially address some of the central questions of neuroscience posed above. We first study the role of sparse neural activity, as seen in the coding of sequential commands in a premotor area responsible for birdsong. We show that the sparse coding of temporal sequences in the songbird brain can, in a network where the feedforward plastic weights must translate the sparse sequential code into a time-varying muscle code, facilitate learning by minimizing synaptic interference. Next, we propose a biologically plausible synaptic plasticity rule that can perform goal-directed learning in recurrent networks of voltage-based spiking neurons that interact through conductances. Learning is based on the correlation of noisy local activity with a global reward signal; we prove that this rule performs stochastic gradient ascent on the reward. Thus, if the reward signal quantifies network performance on some desired task, the plasticity rule provably drives goal-directed learning in the network. To assess the convergence properties of the learning rule, we compare it with a known example of learning in the brain. Song-learning in finches is a clear example of a learned behavior, with detailed available neurophysiological data. With our learning rule, we train an anatomically accurate model birdsong network that drives a sound source to mimic an actual zebra finch song. Simulation and

  5. Learning to Estimate Dynamical State with Probabilistic Population Codes.

    Directory of Open Access Journals (Sweden)

    Joseph G Makin

    2015-11-01

    Full Text Available Tracking moving objects, including one's own body, is a fundamental ability of higher organisms, playing a central role in many perceptual and motor tasks. While it is unknown how the brain learns to follow and predict the dynamics of objects, it is known that this process of state estimation can be learned purely from the statistics of noisy observations. When the dynamics are simply linear with additive Gaussian noise, the optimal solution is the well known Kalman filter (KF), the parameters of which can be learned via latent-variable density estimation (the EM algorithm). The brain does not, however, directly manipulate matrices and vectors, but instead appears to represent probability distributions with the firing rates of populations of neurons, "probabilistic population codes." We show that a recurrent neural network (a modified form of an exponential family harmonium, EFH) that takes a linear probabilistic population code as input can learn, without supervision, to estimate the state of a linear dynamical system. After observing a series of population responses (spike counts) to the position of a moving object, the network learns to represent the velocity of the object and forms nearly optimal predictions about the position at the next time-step. This result builds on our previous work showing that a similar network can learn to perform multisensory integration and coordinate transformations for static stimuli. The receptive fields of the trained network also make qualitative predictions about the developing and learning brain: tuning gradually emerges for higher-order dynamical states not explicitly present in the inputs, appearing as delayed tuning for the lower-order states.
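    For the "simply linear with additive Gaussian noise" case the abstract refers to, the optimal estimator is the classical Kalman filter. A minimal sketch follows; all system matrices are illustrative choices, not the paper's.

    ```python
    import numpy as np

    # Constant-velocity model: state = [position, velocity], observe position only.
    dt = 1.0
    A = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    C = np.array([[1.0, 0.0]])              # observation matrix
    Q = 1e-3 * np.eye(2)                    # process noise covariance
    R = np.array([[0.05]])                  # observation noise covariance

    def kf_step(x, P, y):
        # Predict.
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        # Update with observation y.
        S = C @ P_pred @ C.T + R             # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
        x_new = x_pred + K @ (y - C @ x_pred)
        P_new = (np.eye(2) - K @ C) @ P_pred
        return x_new, P_new

    # Track a constant-velocity trajectory (true velocity 0.5) from noisy readings.
    rng = np.random.default_rng(2)
    x_est, P = np.zeros(2), np.eye(2)
    for t in range(1, 50):
        y = np.array([0.5 * t + 0.1 * rng.normal()])
        x_est, P = kf_step(x_est, P, y)

    print(x_est)  # estimated [position, velocity]
    ```

    In the EM approach the abstract mentions, A, C, Q, and R would themselves be learned from the observation sequence; the network in the paper learns an equivalent estimator without ever representing these matrices explicitly.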

  6. A Fast Optimization Method for General Binary Code Learning.

    Science.gov (United States)

    Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng

    2016-09-22

    Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interest in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely-used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term and a nonsmooth indicator function. The resulting problem is then efficiently solved by an iterative procedure, each iteration of which admits an analytical discrete solution, and the procedure is shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, instantiated in this work by both supervised and unsupervised hashing losses, together with bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
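    The analytical discrete solution per iteration can be illustrated on a toy problem: for the constraint set {-1,+1}^(n×r), the proximal map of the indicator function is the elementwise sign, so a proximal linearized iteration is just a gradient step on the smooth loss followed by sign(). The least-squares loss and sizes below are our own assumptions, not the paper's hashing losses.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy smooth loss L(B) = 0.5 * ||B @ W - Y||^2 over binary codes B.
    n, r, c = 40, 16, 4
    W = rng.normal(size=(r, c))
    W /= np.linalg.norm(W, 2)              # normalize so grad L is 1-Lipschitz
    Y = np.sign(rng.normal(size=(n, c)))   # toy supervision targets

    def loss(B):
        return 0.5 * np.sum((B @ W - Y) ** 2)

    B = np.sign(rng.normal(size=(n, r)))   # random initial binary codes
    initial = loss(B)
    eta = 0.5                              # step size below 1/L guarantees descent
    for _ in range(100):
        grad = (B @ W - Y) @ W.T
        B = np.sign(B - eta * grad)        # analytical discrete (proximal) solution
        B[B == 0] = 1                      # break exact ties toward +1
    print(initial, loss(B))                # loss does not increase
    ```

    Each iterate stays exactly binary, which is the point of handling the discrete constraint directly instead of relaxing and re-quantizing.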

  7. An Eye-Tracking Study of How Color Coding Affects Multimedia Learning

    Science.gov (United States)

    Ozcelik, Erol; Karakus, Turkan; Kursun, Engin; Cagiltay, Kursat

    2009-01-01

    Color coding has been proposed to promote more effective learning. However, insufficient evidence currently exists to show how color coding leads to better learning. The goal of this study was to investigate the underlying cause of the color coding effect by utilizing eye movement data. Fifty-two participants studied either a color-coded or…

  8. Side Information and Noise Learning for Distributed Video Coding using Optical Flow and Clustering

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Rakêt, Lars Lau; Huang, Xin

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The coding efficiency of DVC critically depends on the quality of side information generation and accuracy of noise modeling. This paper considers...... Transform Domain Wyner-Ziv (TDWZ) coding and proposes using optical flow to improve side information generation and clustering to improve noise modeling. The optical flow technique is exploited at the decoder side to compensate weaknesses of block based methods, when using motion-compensation to generate...... side information frames. Clustering is introduced to capture cross band correlation and increase local adaptivity in the noise modeling. This paper also proposes techniques to learn from previously decoded (WZ) frames. Different techniques are combined by calculating a number of candidate soft side...

  9. Learning of spatio-temporal codes in a coupled oscillator system.

    Science.gov (United States)

    Orosz, Gábor; Ashwin, Peter; Townley, Stuart

    2009-07-01

    In this paper, we consider a learning strategy that allows one to transmit information between two coupled phase oscillator systems (called teaching and learning systems) via frequency adaptation. The dynamics of these systems can be modeled with reference to a number of partially synchronized cluster states and transitions between them. Forcing the teaching system by steady but spatially nonhomogeneous inputs produces cyclic sequences of transitions between the cluster states, that is, information about inputs is encoded via a "winnerless competition" process into spatio-temporal codes. The large variety of codes can be learned by the learning system that adapts its frequencies to those of the teaching system. We visualize the dynamics using "weighted order parameters (WOPs)" that are analogous to "local field potentials" in neural systems. Since spatio-temporal coding is a mechanism that appears in olfactory systems, the developed learning rules may help to extract information from these neural ensembles.
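    Frequency adaptation between a teaching and a learning system can be sketched with a single coupled oscillator pair, a drastic simplification of the paper's cluster dynamics; the coupling strength and adaptation rate below are assumed.

    ```python
    import numpy as np

    # One "teaching" and one "learning" phase oscillator; the learner adapts
    # its natural frequency so the pair stays phase-locked.
    dt, K, eps = 0.01, 1.0, 0.5
    omega_teach = 2.0
    omega_learn = 0.5                      # starts with the wrong frequency
    theta_t, theta_l = 0.0, 1.0

    for _ in range(20000):
        phase_diff = theta_t - theta_l
        dtheta_t = omega_teach
        dtheta_l = omega_learn + K * np.sin(phase_diff)   # Kuramoto-style coupling
        # Frequency adaptation: drift omega_learn so the phase lag vanishes.
        omega_learn += eps * np.sin(phase_diff) * dt
        theta_t += dtheta_t * dt
        theta_l += dtheta_l * dt

    print(omega_learn)  # converges to omega_teach
    ```

    Once the pair locks, the residual phase lag is proportional to the remaining frequency mismatch, so the adaptation term drives that mismatch exponentially to zero.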

  10. When sparse coding meets ranking: a joint framework for learning sparse codes and ranking scores

    KAUST Repository

    Wang, Jim Jing-Yan; Cui, Xuefeng; Yu, Ge; Guo, Lili; Gao, Xin

    2017-01-01

    Sparse coding, which represents a data point as a sparse reconstruction code with regard to a dictionary, has been a popular data representation method. Meanwhile, in database retrieval problems, learning the ranking scores from data points plays

  11. Real-time Color Codes for Assessing Learning Process

    OpenAIRE

    Dzelzkalēja, L; Kapenieks, J

    2016-01-01

    Effective assessment is an important way for improving the learning process. There are existing guidelines for assessing the learning process, but they lack holistic digital knowledge society considerations. In this paper the authors propose a method for real-time evaluation of students’ learning process and, consequently, for quality evaluation of teaching materials both in the classroom and in the distance learning environment. The main idea of the proposed Color code method (CCM) is to use...

  12. On A Nonlinear Generalization of Sparse Coding and Dictionary Learning.

    Science.gov (United States)

    Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba

    2013-01-01

    Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝ^d, and the dictionary is learned from the training data using the vector space structure of ℝ^d and its Euclidean L2-metric. However, in many applications, features and data often originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold, which is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis.

  13. Blending Classroom Teaching and Learning with QR Codes

    Science.gov (United States)

    Rikala, Jenni; Kankaanranta, Marja

    2014-01-01

    The aim of this case study was to explore the feasibility of the Quick Response (QR) codes and mobile devices in the context of Finnish basic education. The interest was especially to explore how mobile devices and QR codes can enhance and blend teaching and learning. The data were collected with a teacher interview and pupil surveys. The learning…

  14. Analysis of previous perceptual and motor experience in breaststroke kick learning

    Directory of Open Access Journals (Sweden)

    Ried Bettina

    2015-12-01

    Full Text Available One of the variables that influence motor learning is the learner's previous experience, which may provide perceptual and motor elements to be transferred to a novel motor skill. For swimming skills, several prior motor experiences may prove effective. Purpose. The aim was to analyse the influence of previous experience with playing in water, swimming lessons, and music or dance lessons on learning the breaststroke kick. Methods. The study involved 39 Physical Education students possessing basic swimming skills, but not the breaststroke, who performed 400 acquisition trials followed by 50 retention and 50 transfer trials, during which the stroke index as well as rhythmic and spatial configuration indices were mapped, and who answered a yes/no questionnaire regarding previous experience. Data were analysed by ANOVA (p = 0.05), with effect size (Cohen's d ≥ 0.8) indicating a large effect. Results. The whole sample improved their stroke index and spatial configuration index, but not their rhythmic configuration index. Although differences between groups were not significant, two findings showed large practical effects on learning: experience with playing in water during childhood showed markedly positive effects, while lack of experience in all three fields hampered the learning process. Conclusions. The results point towards a diverse impact of previous experience with rhythmic activities, swimming lessons, and especially playing in water during childhood, on learning the breaststroke kick.

  15. Using QR Codes to Differentiate Learning for Gifted and Talented Students

    Science.gov (United States)

    Siegle, Del

    2015-01-01

    QR codes are two-dimensional square patterns capable of coding information ranging from web addresses to links to YouTube videos. The codes save typing time and eliminate errors from entering addresses incorrectly. These codes make learning with technology easier for students and motivationally engage them in new ways.

  16. Predicting intraindividual changes in learning strategies: The effects of previous achievement

    OpenAIRE

    Buško, Vesna; Mujagić, Amela

    2013-01-01

    Socio-cognitive models of self-regulated learning (e.g., Pintrich, 2000) emphasize the contextualized nature of the learning process, and within-person variation in learning processes, along with between-person variability in self-regulation. Previous studies about the contextual nature of learning strategies have mostly focused on the effects of different contextual factors on interindividual differences in learning strategies utilization. However, less attention was given to the question about contextual ef...

  17. Learning Recruits Neurons Representing Previously Established Associations in the Corvid Endbrain.

    Science.gov (United States)

    Veit, Lena; Pidpruzhnykova, Galyna; Nieder, Andreas

    2017-10-01

    Crows quickly learn arbitrary associations. As a neuronal correlate of this behavior, single neurons in the corvid endbrain area nidopallium caudolaterale (NCL) change their response properties during association learning. In crows performing a delayed association task that required them to map both familiar and novel sample pictures to the same two choice pictures, NCL neurons established a common, prospective code for associations. Here, we report that neuronal tuning changes during learning were not distributed equally in the recorded population of NCL neurons. Instead, such learning-related changes relied almost exclusively on neurons which were already encoding familiar associations. Only in such neurons did behavioral improvements during learning of novel associations coincide with increasing selectivity over the learning process. The size and direction of selectivity for familiar and newly learned associations were highly correlated. These increases in selectivity for novel associations occurred only late in the delay period. Moreover, NCL neurons discriminated correct from erroneous trial outcome based on feedback signals at the end of the trial, particularly in newly learned associations. Our results indicate that task-relevant changes during association learning are not distributed within the population of corvid NCL neurons but rather are restricted to a specific group of association-selective neurons. Such association neurons in the multimodal cognitive integration area NCL likely play an important role during highly flexible behavior in corvids.

  18. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    Science.gov (United States)

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements, because of severely increasing computation complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JND models added white Gaussian noise or specific signal patterns to the original images, which is not appropriate for finding JND thresholds involving distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. Then, the proposed ERJND model is extended to two learning-based just-noticeable-quantization-distortion (JNQD) models as preprocessing that can be applied for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameter for JNQD based on extracted handcrafted features. The other JNQD model is based on a convolutional neural network (CNN), called CNN-JNQD. To the best of our knowledge, our paper is the first approach to automatically adjust JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.

  19. A Test of Two Alternative Cognitive Processing Models: Learning Styles and Dual Coding

    Science.gov (United States)

    Cuevas, Joshua; Dawson, Bryan L.

    2018-01-01

    This study tested two cognitive models, learning styles and dual coding, which make contradictory predictions about how learners process and retain visual and auditory information. Learning styles-based instructional practices are common in educational environments despite a questionable research base, while the use of dual coding is less…

  20. Abstract feature codes: The building blocks of the implicit learning system.

    Science.gov (United States)

    Eberhardt, Katharina; Esser, Sarah; Haider, Hilde

    2017-07-01

    According to the Theory of Event Coding (TEC; Hommel, Müsseler, Aschersleben, & Prinz, 2001), action and perception are represented in a shared format in the cognitive system by means of feature codes. In implicit sequence learning research, it is still common to make a conceptual difference between independent motor and perceptual sequences. This supposedly independent learning takes place in encapsulated modules (Keele, Ivry, Mayr, Hazeltine, & Heuer 2003) that process information along single dimensions. These dimensions have remained underspecified so far. It is especially not clear whether stimulus and response characteristics are processed in separate modules. Here, we suggest that feature dimensions as they are described in the TEC should be viewed as the basic content of modules of implicit learning. This means that the modules process all stimulus and response information related to certain feature dimensions of the perceptual environment. In 3 experiments, we investigated by means of a serial reaction time task the nature of the basic units of implicit learning. As a test case, we used stimulus location sequence learning. The results show that a stimulus location sequence and a response location sequence cannot be learned without interference (Experiment 2) unless one of the sequences can be coded via an alternative, nonspatial dimension (Experiment 3). These results support the notion that spatial location is one module of the implicit learning system and, consequently, that there are no separate processing units for stimulus versus response locations. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Teaching Qualitative Research: Experiential Learning in Group-Based Interviews and Coding Assignments

    Science.gov (United States)

    DeLyser, Dydia; Potter, Amy E.

    2013-01-01

    This article describes experiential-learning approaches to conveying the work and rewards involved in qualitative research. Seminar students interviewed one another, transcribed or took notes on those interviews, shared those materials to create a set of empirical materials for coding, developed coding schemes, and coded the materials using those…

  2. Evaluating QR Code Case Studies Using a Mobile Learning Framework

    Science.gov (United States)

    Rikala, Jenni

    2014-01-01

    The aim of this study was to evaluate the feasibility of Quick Response (QR) codes and mobile devices in the context of Finnish basic education. The feasibility was analyzed through a mobile learning framework, which includes the core characteristics of mobile learning. The study is part of a larger research where the aim is to develop a…

  3. Development of Learning Management in Moral Ethics and Code of Ethics of the Teaching Profession Course

    Science.gov (United States)

    Boonsong, S.; Siharak, S.; Srikanok, V.

    2018-02-01

    The purpose of this research was to develop learning management for enhancing students' moral ethics and code of ethics at Rajamangala University of Technology Thanyaburi (RMUTT). The contextual study and the ideas for learning management development were derived from document study, the focus group method, and content analysis of documents on moral ethics and the code of ethics of the teaching profession in the Graduate Diploma for Teaching Profession Program. The main research tools were summary papers and analysis papers. The results showed that the learning management developed for moral ethics and the code of ethics of the teaching profession could promote the desired moral ethics and code-of-ethics character in Graduate Diploma for Teaching Profession students through integrated learning techniques consisting of Service Learning, Contract System, Value Clarification, Role Playing, and Concept Mapping. The learning management was presented in 3 steps.

  4. Distributed Video Coding: Iterative Improvements

    DEFF Research Database (Denmark)

    Luong, Huynh Van

    Nowadays, emerging applications such as wireless visual sensor networks and wireless video surveillance are requiring lightweight video encoding with high coding efficiency and error-resilience. Distributed Video Coding (DVC) is a new coding paradigm which exploits the source statistics...... and noise modeling and also learn from the previous decoded Wyner-Ziv (WZ) frames, side information and noise learning (SING) is proposed. The SING scheme introduces an optical flow technique to compensate the weaknesses of the block based SI generation and also utilizes clustering of DCT blocks to capture...... cross band correlation and increase local adaptivity in noise modeling. During decoding, the updated information is used to iteratively reestimate the motion and reconstruction in the proposed motion and reconstruction reestimation (MORE) scheme. The MORE scheme not only reestimates the motion vectors...

  5. Analysis of Memory Codes and Cumulative Rehearsal in Observational Learning

    Science.gov (United States)

    Bandura, Albert; And Others

    1974-01-01

    The present study examined the influence of memory codes varying in meaningfulness and retrievability and cumulative rehearsal on retention of observationally learned responses over increasing temporal intervals. (Editor)

  6. Quick Response (QR) Codes for Audio Support in Foreign Language Learning

    Science.gov (United States)

    Vigil, Kathleen Murray

    2017-01-01

    This study explored the potential benefits and barriers of using quick response (QR) codes as a means by which to provide audio materials to middle-school students learning Spanish as a foreign language. Eleven teachers of Spanish to middle-school students created transmedia materials containing QR codes linking to audio resources. Students…

  7. Optimal interference code based on machine learning

    Science.gov (United States)

    Qian, Ye; Chen, Qian; Hu, Xiaobo; Cao, Ercong; Qian, Weixian; Gu, Guohua

    2016-10-01

    In this paper, we analyze the characteristics of pseudo-random codes, taking the m-sequence as a case study. Building on coding theory, we introduce jamming methods and simulate the interference effect and probability model in MATLAB. Based on the length of decoding time the adversary spends, we derive the optimal formula and optimal coefficients using machine learning, and thereby obtain a new optimal interference code. First, in the recognition phase, this study judges the effect of interference by simulating the decoding time of the laser seeker. Then, laser active deception jamming is used to simulate the interference process in the tracking phase. To improve interference performance, this paper simulates the model in MATLAB. We find the minimum number of pulse intervals that must be received, from which we can conclude the precise interval number of the laser pointer for m-sequence encoding. To find the shortest space, we use the greatest-common-divisor method. Then, combining this with the coding regularity found earlier, we restore the pulse intervals of the pseudo-random code that has already been received. Finally, we can control the time period of laser interference, obtain the optimal interference code, and increase the probability of interference.
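    For reference, an m-sequence such as the one analyzed here can be generated with a linear-feedback shift register. A minimal sketch follows; the primitive polynomial x^4 + x^3 + 1 is a standard textbook choice, not necessarily the one used in the paper.

    ```python
    def m_sequence(taps, state, length):
        """Fibonacci LFSR over GF(2); taps are 1-indexed stage numbers."""
        state = list(state)
        out = []
        for _ in range(length):
            out.append(state[-1])            # output the last stage
            feedback = 0
            for t in taps:
                feedback ^= state[t - 1]     # XOR of the tapped stages
            state = [feedback] + state[:-1]  # shift right, insert feedback
        return out

    # Taps [4, 3] realize x^4 + x^3 + 1; a 4-stage register gives the
    # maximal period 2**4 - 1 = 15 before the sequence repeats.
    seq = m_sequence(taps=[4, 3], state=[1, 0, 0, 0], length=30)
    print(seq[:15])  # one full period
    ```

    A maximal-length sequence of period 2^n - 1 contains exactly 2^(n-1) ones, one more than its zeros, which is one of the balance properties that make m-sequences useful as pseudo-random codes.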

  8. WITHDRAWAL OF PREVIOUS COMPLAINT. A COMPARISON OF THE OLD AND THE NEW CRIMINAL CODE. PROBLEMS OF COMPARATIVE LAW

    Directory of Open Access Journals (Sweden)

    Alin Sorin NICOLESCU

    2015-07-01

    Full Text Available In criminal law, the previous complaint has a double legal valence, material and procedural in nature, constituting both a condition for criminal liability and a functional condition in cases expressly and limitatively provided by law, with consequences for the criminal sanction. For certain offences, criminal law makes the initiation of criminal proceedings conditional on the introduction of a previous complaint by the injured party. From the perspective of substantive criminal law, the conditioning of the existence of the previous complaint, its lack, and its withdrawal are regulated by art. 157 and 158 of the New Penal Code, with significant changes in relation to the old regulation of the institution. In procedural terms, the previous complaint is regulated in art. 295-298 of the New Code of Criminal Procedure. Regarding the withdrawal of the previous complaint in the case of offences for which the initiation of criminal proceedings is subject to the existence of such a complaint, we note that in the current Criminal Code this legal institution is regulated separately, representing both a cause for the removal of criminal liability and a cause that precludes criminal action. This unilateral act of will of the injured party may be exercised only under certain conditions, namely: it can be made only in the case of offences for which the initiation of criminal proceedings is subject to the introduction of a previous complaint; it is made exclusively by the right holder, by legal representatives, or with the consent of the persons required by law for persons lacking legal capacity or having limited legal capacity; it must intervene before a final judgment is given; and it must represent an express and explicit manifestation of will.
A novelty is represented by the possibility of withdrawing the previous complaint if the prosecution was driven ex officio, although for

  9. Supporting Situated Learning Based on QR Codes with Etiquetar App: A Pilot Study

    Science.gov (United States)

    Camacho, Miguel Olmedo; Pérez-Sanagustín, Mar; Alario-Hoyos, Carlos; Soldani, Xavier; Kloos, Carlos Delgado; Sayago, Sergio

    2014-01-01

    EtiquetAR is an authoring tool for supporting the design and enactment of situated learning experiences based on QR tags. Practitioners use etiquetAR for creating, managing and personalizing collections of QR codes with special properties: (1) codes can have more than one link pointing at different multimedia resources, (2) codes can be updated…

  10. Predictive codes of familiarity and context during the perceptual learning of facial identities

    Science.gov (United States)

    Apps, Matthew A. J.; Tsakiris, Manos

    2013-11-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
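The prediction-error learning described above can be sketched with a simple delta rule: familiarity is nudged toward the observed outcome by the prediction error on each exposure. This is an illustrative sketch only; the learning rate, trial structure, and the identification of the outcome with 1.0 are assumptions, not the paper's actual computational model.

```python
# Illustrative delta-rule sketch of prediction-error-driven familiarity
# updating (hypothetical parameters, not the paper's fitted model).

def familiarity_updates(n_trials, alpha=0.3):
    """Repeatedly present the same face; familiarity F is driven toward the
    observed outcome (1.0 = 'this face is present') by the prediction error."""
    F = 0.0                      # initial familiarity of an unfamiliar face
    errors = []
    for _ in range(n_trials):
        error = 1.0 - F          # prediction error: outcome minus prediction
        F += alpha * error       # update proportional to the error
        errors.append(abs(error))
    return F, errors

F, errors = familiarity_updates(10)
print(round(F, 3))               # → 0.972: familiarity approaches 1.0
print(all(errors[i] > errors[i + 1] for i in range(len(errors) - 1)))  # → True
```

With repeated exposure the prediction error shrinks geometrically, matching the intuition that an fMRI prediction-error signal should fade as a face becomes familiar.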

  11. [Transposition errors during learning to reproduce a sequence by the right- and the left-hand movements: simulation of positional and movement coding].

    Science.gov (United States)

    Liakhovetskiĭ, V A; Bobrova, E V; Skopin, G N

    2012-01-01

    Transposition errors during the reproduction of a hand movement sequence make it possible to obtain important information on the internal representation of this sequence in the motor working memory. Analysis of such errors showed that learning to reproduce sequences of left-hand movements improves the system of positional coding (coding of positions), while learning of right-hand movements improves the system of vector coding (coding of movements). Learning of the right-hand movements after the left-hand performance involved the system of positional coding "imposed" by the left hand. Learning of the left-hand movements after the right-hand performance activated the system of vector coding. Transposition errors during learning to reproduce movement sequences can be explained by a neural network using either vector coding or both vector and positional coding.

  12. Code-Switching Functions in Modern Hebrew Teaching and Learning

    Science.gov (United States)

    Gilead, Yona

    2016-01-01

    The teaching and learning of Modern Hebrew outside of Israel is essential to Jewish education and identity. One of the most contested issues in Modern Hebrew pedagogy is the use of code-switching between Modern Hebrew and learners' first language. Moreover, this is one of the longest running disputes in the broader field of second language…

  13. Machine-learning-assisted correction of correlated qubit errors in a topological code

    Directory of Open Access Journals (Sweden)

    Paul Baireuther

    2018-01-01

    Full Text Available A fault-tolerant quantum computation requires an efficient means to detect and correct errors that accumulate in encoded quantum information. In the context of machine learning, neural networks are a promising new approach to quantum error correction. Here we show that a recurrent neural network can be trained, using only experimentally accessible data, to detect errors in a widely used topological code, the surface code, with a performance above that of the established minimum-weight perfect matching (or blossom) decoder. The performance gain is achieved because the neural network decoder can detect correlations between bit-flip (X) and phase-flip (Z) errors. The machine learning algorithm adapts to the physical system, hence no noise model is needed. The long short-term memory layers of the recurrent neural network maintain their performance over a large number of quantum error correction cycles, making it a practical decoder for forthcoming experimental realizations of the surface code.
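The core idea of learning a decoder from syndrome data can be sketched far below the scale of the paper. The toy below trains plain softmax regression to map the two syndrome bits of a 3-bit repetition code to the error location; the surface code, the recurrent architecture, and correlated X/Z errors are all beyond this sketch, and the code sizes and learning rate here are illustrative choices.

```python
# Minimal sketch of learning an error decoder from syndrome data, using a
# toy 3-bit repetition code and softmax regression (not the paper's surface
# code or recurrent network; all sizes and hyperparameters are illustrative).
from math import exp

# Syndrome -> error class (0 = no error, k = flip on qubit k).
# For the repetition code, s = (b0 xor b1, b1 xor b2).
DATA = [((0, 0), 0), ((1, 0), 1), ((1, 1), 2), ((0, 1), 3)]

W = [[0.0] * 3 for _ in range(4)]     # one weight row per error class (+ bias)

def scores(s):
    x = (s[0], s[1], 1.0)             # append a bias input
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def softmax(z):
    m = max(z)
    e = [exp(v - m) for v in z]
    t = sum(e)
    return [v / t for v in e]

for _ in range(1000):                 # gradient descent on cross-entropy
    for s, label in DATA:
        p = softmax(scores(s))
        x = (s[0], s[1], 1.0)
        for c in range(4):
            grad = p[c] - (1.0 if c == label else 0.0)
            for i in range(3):
                W[c][i] -= 0.5 * grad * x[i]

predictions = [max(range(4), key=lambda c: scores(s)[c]) for s, _ in DATA]
print(predictions)                    # → [0, 1, 2, 3]
```

The mapping is linearly realizable, so the learned decoder recovers the exact syndrome-to-error lookup; the paper's contribution is making this work when no such clean lookup exists.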

  14. Predictions of the spontaneous symmetry-breaking theory for visual code completeness and spatial scaling in single-cell learning rules.

    Science.gov (United States)

    Webber, C J

    2001-05-01

    This article shows analytically that single-cell learning rules that give rise to oriented and localized receptive fields, when their synaptic weights are randomly and independently initialized according to a plausible assumption of zero prior information, will generate visual codes that are invariant under two-dimensional translations, rotations, and scale magnifications, provided that the statistics of their training images are sufficiently invariant under these transformations. Such codes span different image locations, orientations, and size scales with equal economy. Thus, single-cell rules could account for the spatial scaling property of the cortical simple-cell code. This prediction is tested computationally by training with natural scenes; it is demonstrated that a single-cell learning rule can give rise to simple-cell receptive fields spanning the full range of orientations, image locations, and spatial frequencies (except at the extreme high and low frequencies at which the scale invariance of the statistics of digitally sampled images must ultimately break down, because of the image boundary and the finite pixel resolution). Thus, no constraint on completeness, or any other coupling between cells, is necessary to induce the visual code to span wide ranges of locations, orientations, and size scales. This prediction is made using the theory of spontaneous symmetry breaking, which we have previously shown can also explain the data-driven self-organization of a wide variety of transformation invariances in neurons' responses, such as the translation invariance of complex cell response.

  15. Imitation Learning Based on an Intrinsic Motivation Mechanism for Efficient Coding

    Directory of Open Access Journals (Sweden)

    Jochen Triesch

    2013-11-01

    Full Text Available A hypothesis regarding the development of imitation learning is presented that is rooted in intrinsic motivations. It is derived from a recently proposed form of intrinsically motivated learning (IML for efficient coding in active perception, wherein an agent learns to perform actions with its sense organs to facilitate efficient encoding of the sensory data. To this end, actions of the sense organs that improve the encoding of the sensory data trigger an internally generated reinforcement signal. Here it is argued that the same IML mechanism might also support the development of imitation when general actions beyond those of the sense organs are considered: The learner first observes a tutor performing a behavior and learns a model of the behavior's sensory consequences. The learner then acts itself and receives an internally generated reinforcement signal reflecting how well the sensory consequences of its own behavior are encoded by the sensory model. Actions that are more similar to those of the tutor will lead to sensory signals that are easier to encode and produce a higher reinforcement signal. Through this, the learner's behavior is progressively tuned to make the sensory consequences of its actions match the learned sensory model. I discuss this mechanism in the context of human language acquisition and bird song learning where similar ideas have been proposed. The suggested mechanism also offers an account for the development of mirror neurons and makes a number of predictions. Overall, it establishes a connection between principles of efficient coding, intrinsic motivations and imitation.

  16. Deep Learning Methods for Improved Decoding of Linear Codes

    Science.gov (United States)

    Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair

    2018-02-01

    The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.
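For orientation, the classical short-block decoding problem that these neural decoders improve on can be shown in a few lines. The sketch below is plain hard-decision syndrome decoding of the Hamming(7,4) code, chosen only for brevity; it is not the paper's method (belief propagation / min-sum on BCH codes) and involves no learning.

```python
# Sketch of the classical short-block decoding problem: hard-decision
# syndrome decoding of Hamming(7,4). Not the paper's neural BP decoder;
# Hamming(7,4) is used only because single-error decoding fits in a few lines.

def encode(d):
    """d: 4 data bits -> 7-bit codeword (parity bits at positions 1, 2, 4)."""
    c = [0] * 7
    c[2], c[4], c[5], c[6] = d
    c[0] = c[2] ^ c[4] ^ c[6]         # parity over positions 3, 5, 7
    c[1] = c[2] ^ c[5] ^ c[6]         # parity over positions 3, 6, 7
    c[3] = c[4] ^ c[5] ^ c[6]         # parity over positions 5, 6, 7
    return c

def decode(r):
    """Correct any single bit flip; the syndrome is the error position."""
    s1 = r[0] ^ r[2] ^ r[4] ^ r[6]
    s2 = r[1] ^ r[2] ^ r[5] ^ r[6]
    s3 = r[3] ^ r[4] ^ r[5] ^ r[6]
    pos = s1 + 2 * s2 + 4 * s3        # 0 means no detected error
    r = list(r)
    if pos:
        r[pos - 1] ^= 1
    return r

word = encode([1, 0, 1, 1])
corrupted = list(word)
corrupted[5] ^= 1                     # flip one bit on the channel
print(decode(corrupted) == word)      # → True
```

Soft-decision decoders such as belief propagation, and the neural variants above, go beyond this by weighing channel reliabilities rather than hard bits.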

  17. Towards Automatic Learning of Heuristics for Mechanical Transformations of Procedural Code

    Directory of Open Access Journals (Sweden)

    Guillermo Vigueras

    2017-01-01

    Full Text Available The current trends in next-generation exascale systems go towards integrating a wide range of specialized (co-)processors into traditional supercomputers. Due to the efficiency of heterogeneous systems in terms of Watts and FLOPS per surface unit, opening the access of heterogeneous platforms to a wider range of users is an important problem to be tackled. However, heterogeneous platforms limit the portability of the applications and increase development complexity due to the programming skills required. Program transformation can help make programming heterogeneous systems easier by defining a step-wise transformation process that translates a given initial code into a semantically equivalent final code, but adapted to a specific platform. Program transformation systems require the definition of efficient transformation strategies to tackle the combinatorial problem that emerges due to the large set of transformations applicable at each step of the process. In this paper we propose a machine learning-based approach to learn heuristics to define program transformation strategies. Our approach proposes a novel combination of reinforcement learning and classification methods to efficiently tackle the problems inherent to this type of systems. Preliminary results demonstrate the suitability of this approach.
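The reinforcement-learning half of such an approach can be sketched as tabular Q-learning over sequences of transformations, where the reward is the measured speedup of the final code variant. Everything below is hypothetical: the transformation names, the reward table, and the two-step horizon are invented for illustration, and the paper additionally combines RL with classification methods not shown here.

```python
# Hypothetical sketch: tabular Q-learning over two-step sequences of code
# transformations, with a made-up "speedup" reward for each final sequence.
import random

ACTIONS = ("fuse", "tile", "vectorize")
SPEEDUP = {("tile", "vectorize"): 4.0}    # best two-step strategy (invented)

def reward(seq):
    return SPEEDUP.get(seq, 1.0)          # every other sequence: no speedup

Q = {}                                    # (state, action) -> value
random.seed(0)
for _ in range(3000):                     # episodes with a random behavior policy
    state = ()
    while len(state) < 2:
        a = random.choice(ACTIONS)
        nxt = state + (a,)
        if len(nxt) == 2:                 # terminal: observe the speedup
            target = reward(nxt)
        else:                             # bootstrap from the best next action
            target = max(Q.get((nxt, b), 0.0) for b in ACTIONS)
        q = Q.get((state, a), 0.0)
        Q[(state, a)] = q + 0.5 * (target - q)
        state = nxt

# Greedy rollout after training recovers the best transformation sequence.
state = ()
while len(state) < 2:
    state += (max(ACTIONS, key=lambda a: Q.get((state, a), 0.0)),)
print(state)                              # → ('tile', 'vectorize')
```

In a real system the reward would come from compiling and timing each transformed variant, which is exactly why learned heuristics are needed to prune the combinatorial search.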

  18. "My math and me": Nursing students' previous experiences in learning mathematics.

    Science.gov (United States)

    Røykenes, Kari

    2016-01-01

    In this paper, 11 narratives written by nursing students about their former experiences in learning mathematics are thematically analyzed. Most students had a positive relationship with the subject in primary school, when they found mathematics fun and were able to master it. For some, a change occurred in the transition to lower secondary school. The reasons for this change were found in the subject (increased difficulty), the teachers (turnover of teachers, numerous substitute teachers), the class environment and size (many pupils, noise), and the students themselves (being a silent and anonymous pupil). Such a change was also found in the transition from lower to higher secondary school. By contrast, some students had experienced changes that were positive, and their mathematics teacher was a significant factor in this positive change. The paper emphasizes the importance of nursing students' previous experiences in learning mathematics when they learn drug calculation. Copyright © 2015. Published by Elsevier Ltd.

  19. Previous experience in manned space flight: A survey of human factors lessons learned

    Science.gov (United States)

    Chandlee, George O.; Woolford, Barbara

    1993-01-01

    Previous experience in manned space flight programs can be used to compile a data base of human factors lessons learned for the purpose of developing aids in the future design of inhabited spacecraft. The objectives are to gather information available from relevant sources, to develop a taxonomy of human factors data, and to produce a data base that can be used in the future for those people involved in the design of manned spacecraft operations. A study is currently underway at the Johnson Space Center with the objective of compiling, classifying, and summarizing relevant human factors data bearing on the lessons learned from previous manned space flights. The research reported defines sources of data, methods for collection, and proposes a classification for human factors data that may be a model for other human factors disciplines.

  20. The use of QR Code as a learning technology: an exploratory study

    Directory of Open Access Journals (Sweden)

    Stefano Besana

    2010-12-01

    Full Text Available This paper discusses a pilot study on the potential benefits of QR (Quick Response) codes as a tool for facilitating and enhancing learning processes. An analysis is given of the strengths and added value of QR technologies applied to museum visits, together with precautions regarding the design of learning environments like the one presented. Some possible future scenarios are identified for implementing these technologies in contexts more strictly related to teaching and education.

  1. Stitching Codeable Circuits: High School Students' Learning About Circuitry and Coding with Electronic Textiles

    Science.gov (United States)

    Litts, Breanne K.; Kafai, Yasmin B.; Lui, Debora A.; Walker, Justice T.; Widman, Sari A.

    2017-10-01

    Learning about circuitry by connecting a battery, light bulb, and wires is a common activity in many science classrooms. In this paper, we expand students' learning about circuitry with electronic textiles, which use conductive thread instead of wires and sewable LEDs instead of lightbulbs, by integrating programming sensor inputs and light outputs and examining how the two domains interact. We implemented an electronic textiles unit with 23 high school students ages 16-17 years who learned how to craft and code circuits with the LilyPad Arduino, an electronic textile construction kit. Our analyses not only confirm significant increases in students' understanding of functional circuits but also showcase students' ability in designing and remixing program code for controlling circuits. In our discussion, we address opportunities and challenges of introducing codeable circuit design for integrating maker activities that include engineering and computing into classrooms.

  2. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.
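The sparse-coding step itself, approximating a sample as a sparse combination of codewords, can be illustrated with a few ISTA iterations on one sample. This sketch covers only that inner step, not the paper's semi-supervised algorithm; the 2x2 orthonormal dictionary, sample, and sparsity weight are toy choices.

```python
# Minimal ISTA sketch for the sparse-coding subproblem
#   min_z 0.5*||x - D z||^2 + lam*||z||_1
# for a single sample, with a toy 2x2 orthonormal dictionary.

D = [[1.0, 0.0], [0.0, 1.0]]              # codewords as columns (toy: identity)
x = [1.0, 0.05]                            # data sample to encode
lam, eta = 0.1, 1.0                        # sparsity weight, step size

def soft(v, t):                            # soft-thresholding operator
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

z = [0.0, 0.0]
for _ in range(50):                        # ISTA: gradient step + shrinkage
    residual = [sum(D[i][j] * z[j] for j in range(2)) - x[i] for i in range(2)]
    grad = [sum(D[i][j] * residual[i] for i in range(2)) for j in range(2)]
    z = [soft(z[j] - eta * grad[j], eta * lam) for j in range(2)]

print([round(v, 6) for v in z])            # → [0.9, 0.0]: the small coefficient
                                           #   is shrunk exactly to zero
```

The L1 penalty drives the weakly supported coefficient exactly to zero, which is the sparsity property the learned codes rely on; the paper additionally couples such codes to class labels and a linear classifier.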

  3. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  4. Accuracy comparison among different machine learning techniques for detecting malicious codes

    Science.gov (United States)

    Narang, Komal

    2016-03-01

    In this paper, a machine learning based model for malware detection is proposed. It can detect newly released malware, i.e. zero-day attacks, by analyzing operation codes on the Android operating system. The accuracy of Naïve Bayes, Support Vector Machine (SVM) and Neural Network classifiers for detecting malicious code has been compared for the proposed model. In the experiment, 400 benign files, 100 system files and 500 malicious files were used to construct the model. The model yields its best accuracy, 88.9%, when a neural network is used as the classifier, achieving 95% sensitivity and 82.8% specificity.
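One of the compared classifiers, Naïve Bayes over opcode features, is simple enough to sketch from scratch. The opcode-presence vectors and labels below are fabricated for illustration; the study used real benign, system, and malicious Android files and far larger feature sets.

```python
# Minimal Bernoulli Naive Bayes over opcode-presence features. The data is
# fabricated for illustration (label 1 = malicious); the study used real files.
from math import log

# Each row: presence/absence of 4 hypothetical opcodes.
X = [[1, 1, 0, 0], [1, 1, 1, 0], [1, 0, 0, 0],
     [0, 0, 1, 1], [0, 1, 1, 1], [0, 0, 0, 1]]
y = [1, 1, 1, 0, 0, 0]

def train(X, y):
    model = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        prior = len(rows) / len(X)
        # Laplace-smoothed per-feature probability of a 1 in class c.
        probs = [(sum(col) + 1) / (len(rows) + 2) for col in zip(*rows)]
        model[c] = (log(prior), probs)
    return model

def predict(model, x):
    def loglik(c):
        logprior, probs = model[c]
        return logprior + sum(
            log(p) if xi else log(1 - p) for xi, p in zip(x, probs))
    return max(model, key=loglik)

model = train(X, y)
print([predict(model, x) for x in X])     # → [1, 1, 1, 0, 0, 0]
```

The smoothing is what lets the model score opcodes never seen in a class, which matters for the zero-day setting the abstract targets.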

  5. Code-switching as a communication, learning, and social negotiation strategy in first-year learners of Danish

    DEFF Research Database (Denmark)

    Arnfast, Juni Søderberg; Jørgensen, Jens Normann

    2003-01-01

    The term code-switching is used in two related, yet different fields of linguistics: Second Language Acquisition and bilingual studies. In the former, code-switching is analyzed in terms of learning strategies, whereas the latter applies the competence view. The present paper intends to detect the…

  6. A computational study on altered theta-gamma coupling during learning and phase coding.

    Directory of Open Access Journals (Sweden)

    Xuejuan Zhang

    Full Text Available There is considerable interest in the role of coupling between theta and gamma oscillations in the brain in the context of learning and memory. Here we have used a neural network model capable of producing coupling of theta phase to gamma amplitude, firstly to explore its ability to reproduce reported learning changes and secondly to explore memory-span and phase-coding effects. The spiking neural network incorporates two kinetically different GABA(A) receptor-mediated currents to generate both theta and gamma rhythms, and we have found that by selective alteration of both NMDA receptors and GABA(A,slow) receptors it can reproduce learning-related changes in the strength of coupling between theta and gamma, either with or without coincident changes in theta amplitude. When the model was used to explore the relationship between theta and gamma oscillations, working memory capacity and phase coding, it showed that the potential storage capacity of short-term memories, in terms of nested gamma subcycles, coincides with the maximal theta power. Increasing theta power is also related to the precision of theta phase, which functions as a potential timing clock for neuronal firing in the cortex or hippocampus.
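The phase-amplitude coupling at the heart of the model can be illustrated without any spiking network: below, a 40 Hz gamma oscillation has its amplitude modulated by the phase of a 6 Hz theta rhythm, and a crude modulation index compares gamma amplitude at theta peaks versus troughs. The frequencies, modulation depth, and index are invented for illustration and are not the paper's measures.

```python
# Illustrative theta-gamma coupling (not the paper's spiking network):
# gamma amplitude follows theta phase; all numbers are invented.
from math import sin, pi

FS = 1000                                  # samples per second
T = [i / FS for i in range(2 * FS)]        # two seconds of signal

def theta(t):
    return sin(2 * pi * 6 * t)

def signal(t):
    envelope = (1 + theta(t)) / 2          # gamma amplitude follows theta phase
    return theta(t) + 0.5 * envelope * sin(2 * pi * 40 * t)

gamma = [signal(t) - theta(t) for t in T]  # isolate the gamma component

# Mean gamma amplitude near theta peaks vs. troughs (phase-amplitude coupling).
peak = [abs(g) for g, t in zip(gamma, T) if theta(t) > 0.9]
trough = [abs(g) for g, t in zip(gamma, T) if theta(t) < -0.9]
mi = (sum(peak) / len(peak)) / (sum(trough) / len(trough))
print(mi > 5)                              # → True: gamma rides the theta peak
```

Each theta cycle carries a burst of gamma subcycles near its peak, which is the "nested gamma-subcycle" structure the abstract links to memory span.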

  7. DeepNet: An Ultrafast Neural Learning Code for Seismic Imaging

    International Nuclear Information System (INIS)

    Barhen, J.; Protopopescu, V.; Reister, D.

    1999-01-01

    A feed-forward multilayer neural net is trained to learn the correspondence between seismic data and well logs. The introduction of a virtual input layer, connected to the nominal input layer through a special nonlinear transfer function, enables ultrafast (single-iteration), near-optimal training of the net using numerical algebraic techniques. A unique computer code, named DeepNet, has been developed that has achieved, in actual field demonstrations, results unattainable to date with industry-standard tools

  8. The Role of Architectural and Learning Constraints in Neural Network Models: A Case Study on Visual Space Coding.

    Science.gov (United States)

    Testolin, Alberto; De Filippo De Grazia, Michele; Zorzi, Marco

    2017-01-01

    The recent "deep learning revolution" in artificial neural networks has had a strong impact and seen widespread deployment in engineering applications, but the use of deep learning for neurocomputational modeling has so far been limited. In this article, we argue that unsupervised deep learning represents an important step forward for improving neurocomputational models of perception and cognition, because it emphasizes the role of generative learning as opposed to discriminative (supervised) learning. As a case study, we present a series of simulations investigating the emergence of neural coding of visual space for sensorimotor transformations. We compare different network architectures commonly used as building blocks for unsupervised deep learning by systematically testing the type of receptive fields and gain modulation developed by the hidden neurons. In particular, we compare Restricted Boltzmann Machines (RBMs), which are stochastic, generative networks with bidirectional connections trained using contrastive divergence, with autoencoders, which are deterministic networks trained using error backpropagation. For both learning architectures we also explore the role of sparse coding, which has been identified as a fundamental principle of neural computation. The unsupervised models are then compared with supervised, feed-forward networks that learn an explicit mapping between different spatial reference frames. Our simulations show that both architectural and learning constraints strongly influenced the emergent coding of visual space in terms of distribution of tuning functions at the level of single neurons. Unsupervised models, and particularly RBMs, were found to more closely adhere to neurophysiological data from single-cell recordings in the primate parietal cortex. These results provide new insights into how basic properties of artificial neural networks might be relevant for modeling neural information processing in biological systems.

  9. Software Quality and Security in Teachers' and Students' Codes When Learning a New Programming Language

    Directory of Open Access Journals (Sweden)

    Arnon Hershkovitz

    2015-09-01

    Full Text Available In recent years, schools (as well as universities) have added cyber security to their computer science curricula. This topic is still new for most of the current teachers, who would normally have a standard computer science background. Therefore the teachers are trained and then teach their students what they have just learned. In order to explore differences in the two populations' learning, we compared measures of software quality and security between high-school teachers and students. We collected 109 source files, written in Python by 18 teachers and 31 students, and engineered 32 features, based on common standards for software quality (PEP 8) and security (derived from the CERT Secure Coding Standards). We use a multi-view, data-driven approach, by (a) using hierarchical clustering to bottom-up partition the population into groups based on their code-related features and (b) building a decision tree model that predicts whether a student or a teacher wrote a given code (resulting in a LOOCV kappa of 0.751). Overall, our findings suggest that the teachers' codes have a better quality than the students' - with a sub-group of the teachers, mostly males, demonstrating better coding than their peers and the students - and that the students' codes are slightly better secured than the teachers' codes (although both populations show very low security levels). The findings imply that teachers might benefit from their prior knowledge and experience, but also emphasize the lack of continuous involvement of some of the teachers with code-writing. Therefore, findings shed light on computer science teachers as lifelong learners. Findings also highlight the difference between quality and security in today's programming paradigms. Implications of these findings are discussed.

  10. Using supervised machine learning to code policy issues: Can classifiers generalize across contexts?

    NARCIS (Netherlands)

    Burscher, B.; Vliegenthart, R.; de Vreese, C.H.

    2015-01-01

    Content analysis of political communication usually covers large amounts of material and makes the study of dynamics in issue salience a costly enterprise. In this article, we present a supervised machine learning approach for the automatic coding of policy issues, which we apply to news articles

  11. Code to Learn: Where Does It Belong in the K-12 Curriculum?

    Science.gov (United States)

    Moreno-León, Jesús; Robles, Gregorio; Román-González, Marcos

    2016-01-01

    The introduction of computer programming in K-12 has become mainstream in the last years, as countries around the world are making coding part of their curriculum. Nevertheless, there is a lack of empirical studies that investigate how learning to program at an early age affects other school subjects. In this regard, this paper compares three…

  12. Predictive coding accelerates word recognition and learning in the early stages of language development.

    Science.gov (United States)

    Ylinen, Sari; Bosseler, Alexis; Junttila, Katja; Huotilainen, Minna

    2017-11-01

    The ability to predict future events in the environment and learn from them is a fundamental component of adaptive behavior across species. Here we propose that inferring predictions facilitates speech processing and word learning in the early stages of language development. Twelve- and 24-month olds' electrophysiological brain responses to heard syllables are faster and more robust when the preceding word context predicts the ending of a familiar word. For unfamiliar, novel word forms, however, word-expectancy violation generates a prediction error response, the strength of which significantly correlates with children's vocabulary scores at 12 months. These results suggest that predictive coding may accelerate word recognition and support early learning of novel words, including not only the learning of heard word forms but also their mapping to meanings. Prediction error may mediate learning via attention, since infants' attention allocation to the entire learning situation in natural environments could account for the link between prediction error and the understanding of word meanings. On the whole, the present results on predictive coding support the view that principles of brain function reported across domains in humans and non-human animals apply to language and its development in the infant brain. A video abstract of this article can be viewed at: http://hy.fi/unitube/video/e1cbb495-41d8-462e-8660-0864a1abd02c. [Correction added on 27 January 2017, after first online publication: The video abstract link was added.]. © 2016 John Wiley & Sons Ltd.

  13. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Xu, Changsheng; Ahuja, Narendra

    2013-01-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  14. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu

    2013-12-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  15. Programming Entity Framework Code First

    CERN Document Server

    Lerman, Julia

    2011-01-01

    Take advantage of the Code First data modeling approach in ADO.NET Entity Framework, and learn how to build and configure a model based on existing classes in your business domain. With this concise book, you'll work hands-on with examples to learn how Code First can create an in-memory model and database by default, and how you can exert more control over the model through further configuration. Code First provides an alternative to the database first and model first approaches to the Entity Data Model. Learn the benefits of defining your model with code, whether you're working with an exis

  16. Effects of Mode of Target Task Selection on Learning about Plants in a Mobile Learning Environment: Effortful Manual Selection versus Effortless QR-Code Selection

    Science.gov (United States)

    Gao, Yuan; Liu, Tzu-Chien; Paas, Fred

    2016-01-01

    This study compared the effects of effortless selection of target plants using quick response (QR) code technology with effortful manual search and selection of target plants on learning about plants in a mobile-device-supported learning environment. In addition, it was investigated whether the effectiveness of the 2 selection methods was…

  17. Impact of School Uniforms on Student Discipline and the Learning Climate: A Comparative Case Study of Two Middle Schools with Uniform Dress Codes and Two Middle Schools without Uniform Dress Codes

    Science.gov (United States)

    Dulin, Charles Dewitt

    2016-01-01

    The purpose of this research is to evaluate the impact of uniform dress codes on a school's climate for student behavior and learning in four middle schools in North Carolina. The research will compare the perceptions of parents, teachers, and administrators in schools with uniform dress codes against schools without uniform dress codes. This…

  18. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    Science.gov (United States)

    Zhang, Ai-bing; Feng, Jie; Ward, Robert D; Wan, Ping; Gao, Qiang; Wu, Jun; Zhao, Wei-zhong

    2012-01-01

    Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also out-performed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95%CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95%CI: 96.60-99.37%) for 1094 brown algae queries, both using ITS barcodes.
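
    The pipeline of turning sequences into fixed-length features and classifying them with an RBF kernel can be illustrated with a deliberately simplified stand-in (2-mer frequency features and an RBF-kernel nearest-centroid rule, not the paper's DV-RBF/FJ-RBF methods; all data below are synthetic):

```python
import math
from itertools import product

KMERS = ["".join(p) for p in product("ACGT", repeat=2)]

def features(seq, k=2):
    """Normalized k-mer frequency vector for a DNA sequence."""
    counts = {m: 0 for m in KMERS}
    for i in range(len(seq) - k + 1):
        m = seq[i:i + k]
        if m in counts:
            counts[m] += 1
    total = max(1, len(seq) - k + 1)
    return [counts[m] / total for m in KMERS]

def rbf(x, y, gamma=10.0):
    """RBF kernel similarity between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def classify(query, reference):
    """Assign the query to the species whose references are most similar on average."""
    q = features(query)
    scores = {sp: sum(rbf(q, features(r)) for r in refs) / len(refs)
              for sp, refs in reference.items()}
    return max(scores, key=scores.get)

# Synthetic reference barcodes with distinct base compositions.
reference = {
    "species_A": ["ACACACACACACACACAC", "ACACACACACACACACAA"],
    "species_B": ["GTGTGTGTGTGTGTGTGT", "GTGTGTGTGTGTGTGTGG"],
}
print(classify("ACACACACACACACACAG", reference))
```

    Alignment-free features of this kind sidestep the alignment problem the abstract raises for non-coding regions, at the cost of discarding positional information.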

  20. Reframing the Principle of Specialisation in Legitimation Code Theory: A Blended Learning Perspective

    Science.gov (United States)

    Owusu-Agyeman, Yaw; Larbi-Siaw, Otu

    2017-01-01

    This study argues that in developing a robust framework for students in a blended learning environment, Structural Alignment (SA) becomes the third principle of specialisation in addition to Epistemic Relation (ER) and Social Relation (SR). We provide an extended code: (ER+/-, SR+/-, SA+/-) that presents strong classification and framing to the…

  1. Using Coding Apps to Support Literacy Instruction and Develop Coding Literacy

    Science.gov (United States)

    Hutchison, Amy; Nadolny, Larysa; Estapa, Anne

    2016-01-01

    In this article the authors present the concept of Coding Literacy and describe the ways in which coding apps can support the development of Coding Literacy and disciplinary and digital literacy skills. Through detailed examples, we describe how coding apps can be integrated into literacy instruction to support learning of the Common Core English…

  2. Teacher Candidates Implementing Universal Design for Learning: Enhancing Picture Books with QR Codes

    Science.gov (United States)

    Grande, Marya; Pontrello, Camille

    2016-01-01

    The purpose of this study was to investigate if teacher candidates could gain knowledge of the principles of Universal Design for Learning by enhancing traditional picture books with Quick Response (QR) codes and to determine if the process of making these enhancements would impact teacher candidates' comfort levels with using technology on both…

  3. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    Science.gov (United States)

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model; the learning-based R-D model is intended to overcome the legacy "chicken-and-egg" dilemma in video coding. Second, a cooperative bargaining game based on the mixed R-D model is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Lastly, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted to give inter frames more bit resources, maintaining smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT-based RC method achieves much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limit of the FixedQP method.
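
    As a much-simplified illustration of CTU-level bit allocation (not the paper's game-theoretic solver), consider the classical Lagrangian allocation under a hyperbolic R-D model D_i(b_i) = c_i / b_i: minimizing total distortion subject to a bit budget gives equal slopes, hence b_i proportional to sqrt(c_i). The complexity values below are hypothetical.

```python
import math

def allocate_bits(complexities, budget):
    """Minimize sum(c_i / b_i) subject to sum(b_i) = budget.
    Lagrangian optimality (-c_i / b_i**2 = const) gives b_i ∝ sqrt(c_i)."""
    roots = [math.sqrt(c) for c in complexities]
    s = sum(roots)
    return [budget * r / s for r in roots]

# Hypothetical CTU complexity estimates (e.g., from a learned R-D model).
ctu_complexity = [4.0, 1.0, 9.0, 16.0]
bits = allocate_bits(ctu_complexity, budget=1000.0)
print([round(b, 1) for b in bits])
```

    More complex CTUs receive proportionally more bits; the paper's NBS formulation additionally enforces per-CTU minimum utilities, which this closed form ignores.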

  4. Computer codes in nuclear safety, radiation transport and dosimetry; Les codes de calcul en radioprotection, radiophysique et dosimetrie

    Energy Technology Data Exchange (ETDEWEB)

    Bordy, J M; Kodeli, I; Menard, St; Bouchet, J L; Renard, F; Martin, E; Blazy, L; Voros, S; Bochud, F; Laedermann, J P; Beaugelin, K; Makovicka, L; Quiot, A; Vermeersch, F; Roche, H; Perrin, M C; Laye, F; Bardies, M; Struelens, L; Vanhavere, F; Gschwind, R; Fernandez, F; Quesne, B; Fritsch, P; Lamart, St; Crovisier, Ph; Leservot, A; Antoni, R; Huet, Ch; Thiam, Ch; Donadille, L; Monfort, M; Diop, Ch; Ricard, M

    2006-07-01

    The purpose of this conference was to describe the present state of computer codes dedicated to radiation transport, radiation source assessment or dosimetry. The presentations were divided into 2 sessions: 1) methodology and 2) uses in industrial, medical or research domains. It appears that 2 different calculation strategies prevail, both based on preliminary Monte-Carlo calculations with data storage: first, quick simulations made from a database of particle histories built through a previous Monte-Carlo simulation; and second, a neural approach involving a learning platform generated through a previous Monte-Carlo simulation. This document gathers the slides of the presentations.
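
    The first strategy, quick simulations re-scored from a stored database of particle histories, can be sketched as follows (a toy one-dimensional attenuation problem; all names and values are illustrative):

```python
import math
import random

random.seed(0)
MU = 1.0  # total attenuation coefficient (1/cm), illustrative

# One-off Monte-Carlo pass: store a database of sampled free-path histories.
history_db = [random.expovariate(MU) for _ in range(50_000)]

def transmission(thickness_cm):
    """Quick estimate re-scored from stored histories: no new transport run."""
    return sum(1 for path in history_db if path > thickness_cm) / len(history_db)

for t in (0.5, 1.0, 2.0):
    print(t, round(transmission(t), 4), round(math.exp(-MU * t), 4))
```

    Each new query only re-scores the stored histories, which is why such database-backed codes answer shielding questions in milliseconds rather than re-running the transport simulation.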

  5. Segmentation of MR images via discriminative dictionary learning and sparse coding: Application to hippocampus labeling

    OpenAIRE

    Tong, Tong; Wolz, Robin; Coupe, Pierrick; Hajnal, Joseph V.; Rueckert, Daniel

    2013-01-01

    We propose a novel method for the automatic segmentation of brain MRI images by using discriminative dictionary learning and sparse coding techniques. In the proposed method, dictionaries and classifiers are learned simultaneously from a set of brain atlases, which can then be used for the reconstruction and segmentation of an unseen target image. The proposed segmentation strategy is based on image reconstruction, which is in contrast to most existing atlas-based labe...
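
    The sparse-coding step in pipelines like this is often solved with a greedy pursuit. A minimal orthogonal matching pursuit sketch (a standard algorithm, not the authors' code; the dictionary and signal below are synthetic) is:

```python
import numpy as np

def omp(D, x, sparsity):
    """Orthogonal matching pursuit: greedily select dictionary atoms (columns
    of D, assumed unit-norm) and refit the coefficients by least squares."""
    residual = x.copy()
    support = []
    coefs = np.zeros(D.shape[1])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-correlated atom
        if j not in support:
            support.append(j)
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coefs[support] = sol
    return coefs

rng = np.random.default_rng(1)
D, _ = np.linalg.qr(rng.standard_normal((20, 20)))  # orthonormal toy dictionary
x = 2.0 * D[:, 3] - 1.5 * D[:, 7]                   # a 2-sparse signal
a = omp(D, x, sparsity=2)
print(np.nonzero(a)[0], round(a[3], 3), round(a[7], 3))
```

    In the segmentation setting the coefficients, rather than the reconstruction alone, feed the learned classifier that assigns labels such as hippocampus / non-hippocampus.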

  6. Supervised Transfer Sparse Coding

    KAUST Repository

    Al-Shedivat, Maruan

    2014-07-27

    A combination of the sparse coding and transfer learning techniques was shown to be accurate and robust in classification tasks where training and testing objects have a shared feature space but are sampled from different underlying distributions, i.e., belong to different domains. The key assumption in such cases is that, in spite of the domain disparity, samples from different domains share some common hidden factors. Previous methods often assumed that all the objects in the target domain are unlabeled, and thus the training set solely comprised objects from the source domain. However, in real-world applications the target domain often has some labeled objects, or one can always manually label a small number of them. In this paper, we explore this possibility and show how a small number of labeled data in the target domain can significantly improve the classification accuracy of state-of-the-art transfer sparse coding methods. We further propose a unified framework named supervised transfer sparse coding (STSC) which simultaneously optimizes sparse representation, domain transfer and classification. Experimental results on three applications demonstrate that a little manual labeling followed by learning the model in a supervised fashion can significantly improve classification accuracy.

  7. [Learning virtual routes: what does verbal coding do in working memory?].

    Science.gov (United States)

    Gyselinck, Valérie; Grison, Élise; Gras, Doriane

    2015-03-01

    Two experiments were run to complete our understanding of the role of verbal and visuospatial encoding in the construction of a spatial model from visual input. In experiment 1, a dual-task paradigm was applied to young adults who learned a route in a virtual environment and then performed a series of nonverbal tasks to assess spatial knowledge. Results indicated that landmark knowledge, as assessed by the visual recognition of landmarks, was not impaired by any of the concurrent tasks. Route knowledge, assessed by recognition of directions, was impaired both by a tapping task and a concurrent articulation task. Interestingly, the pattern was modulated when no landmarks were available to perform the direction task. A second experiment was designed to explore the role of verbal coding in the construction of landmark and route knowledge. A lexical-decision task was used as a verbal-semantic dual task, and a tone-decision task as a nonsemantic auditory task. Results show that these new concurrent tasks impaired landmark knowledge and route knowledge differently. Results can be interpreted as showing that the coding of route knowledge could be grounded both in a coding of the sequence of events and in a semantic coding of information. These findings also point to some limits of Baddeley's working memory model. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  8. Assessing the Formation of Experience-Based Gender Expectations in an Implicit Learning Scenario

    Directory of Open Access Journals (Sweden)

    Anton Öttl

    2017-09-01

    Full Text Available The present study investigates the formation of new word-referent associations in an implicit learning scenario, using a gender-coded artificial language with spoken words and visual referents. Previous research has shown that when participants are explicitly instructed about the gender-coding system underlying an artificial lexicon, they monitor the frequency of exposure to male vs. female referents within this lexicon, and subsequently use this probabilistic information to predict the gender of an upcoming referent. In an explicit learning scenario, the auditory and visual gender cues are necessarily highlighted prior to acquisition, and the effects previously observed may therefore depend on participants' overt awareness of these cues. To assess whether the formation of experience-based expectations is dependent on explicit awareness of the underlying coding system, we present data from an experiment in which gender coding was acquired implicitly, thereby reducing the likelihood that visual and auditory gender cues are used strategically during acquisition. Results show that even if the gender-coding system was not perfectly mastered (as reflected in the number of gender-coding errors), participants develop frequency-based expectations comparable to those previously observed in an explicit learning scenario. In line with previous findings, participants are quicker at recognizing a referent whose gender is consistent with an induced expectation than one whose gender is inconsistent with an induced expectation. At the same time, however, eyetracking data suggest that these expectations may surface earlier in an implicit learning scenario. These findings suggest that experience-based expectations are robust against manner of acquisition, and contribute to understanding why similar expectations observed in the activation of stereotypes during the processing of natural language stimuli are difficult or impossible to suppress.

  9. QR CODES IN EDUCATION AND COMMUNICATION

    Directory of Open Access Journals (Sweden)

    Gurhan DURAK

    2016-04-01

    Full Text Available Technological advances have brought innovative applications to education. Conventional education increasingly flourishes with new technologies accompanied by more learner-active environments. In this continuum, there are learners who prefer self-learning. Traditional learning materials are giving way to attractive, motivating and technologically enhanced learning materials. The QR (Quick Response) Codes are one of these innovations. The aim of this study is to redesign a lesson unit supported with QR Codes and to get the learner views about the redesigned material. For this purpose, the redesigned lesson unit was delivered to 15 learners at Balıkesir University in the academic year of 2013-2014. The learners were asked to study the material. The learners who had smart phones and Internet access were chosen for the study. To provide sectional diversity, three groups were created. The group learners were from the Faculty of Education, the Faculty of Science and Literature and the Faculty of Engineering. After the semi-structured interviews were held, the learners were asked about their prior knowledge of QR Codes, QR Codes' contribution to learning, difficulties with using QR Codes, and design issues. Descriptive data analysis was used in the study. The findings were interpreted on the basis of the Theory of Diffusion of Innovations and the Theory of Uses and Gratifications. After the research, the themes found were awareness of QR Codes, types of QR Codes and applications, contributions to learning, and proliferation of QR Codes. Generally, the learners participating in the study reported that they were aware of QR Codes, that they could use them, and that using QR Codes in education was useful. They also expressed that such features as visual elements, attractiveness and direct routing had a positive impact on learning. In addition, they generally mentioned that they did not have any difficulty using QR Codes, that they liked the design, and that the content should

  10. Discriminative sparse coding on multi-manifolds

    KAUST Repository

    Wang, J.J.-Y.; Bensmail, H.; Yao, N.; Gao, Xin

    2013-01-01

    Sparse coding has been popularly used as an effective data representation method in various applications, such as computer vision, medical imaging and bioinformatics. However, the conventional sparse coding algorithms and their manifold-regularized variants (graph sparse coding and Laplacian sparse coding) learn codebooks and codes in an unsupervised manner and neglect the class information that is available in the training set. To address this problem, we propose a novel discriminative sparse coding method based on multi-manifolds that learns discriminative class-conditioned codebooks and sparse codes from both data feature spaces and class labels. First, the entire training set is partitioned into multiple manifolds according to the class labels. Then, we formulate sparse coding as a manifold-manifold matching problem and learn class-conditioned codebooks and codes to maximize the manifold margins of different classes. Lastly, we present a data sample-manifold matching-based strategy to classify the unlabeled data samples. Experimental results on somatic mutation identification and breast tumor classification based on ultrasonic images demonstrate the efficacy of the proposed data representation and classification approach. © 2013 The Authors. All rights reserved.

  12. QR Codes as Mobile Learning Tools for Labor Room Nurses at the San Pablo Colleges Medical Center

    Science.gov (United States)

    Del Rosario-Raymundo, Maria Rowena

    2017-01-01

    Purpose: The purpose of this paper is to explore the use of QR codes as mobile learning tools and examine factors that impact on their usefulness, acceptability and feasibility in assisting the nurses' learning. Design/Methodology/Approach: Study participants consisted of 14 regular, full-time, board-certified LR nurses. Over a two-week period,…

  13. Learning about Probability from Text and Tables: Do Color Coding and Labeling through an Interactive-User Interface Help?

    Science.gov (United States)

    Clinton, Virginia; Morsanyi, Kinga; Alibali, Martha W.; Nathan, Mitchell J.

    2016-01-01

    Learning from visual representations is enhanced when learners appropriately integrate corresponding visual and verbal information. This study examined the effects of two methods of promoting integration, color coding and labeling, on learning about probabilistic reasoning from a table and text. Undergraduate students (N = 98) were randomly…

  14. Coding for dummies

    CERN Document Server

    Abraham, Nikhil

    2015-01-01

    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code; this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skill

  15. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  16. Learning Heroku Postgres

    CERN Document Server

    Espake, Patrick

    2015-01-01

    Learning Heroku Postgres is targeted at developers and database admins. Even if you're new to Heroku Postgres, you'll be able to master both the basic as well as advanced features of Heroku Postgres. Since Heroku Postgres is incredibly user-friendly, no previous experience in computer coding or programming is required.

  17. The Effects of Single and Dual Coded Multimedia Instructional Methods on Chinese Character Learning

    Science.gov (United States)

    Wang, Ling

    2013-01-01

    Learning Chinese characters is a difficult task for adult English native speakers due to the significant differences between the Chinese and English writing system. The visuospatial properties of Chinese characters have inspired the development of instructional methods using both verbal and visual information based on the Dual Coding Theory. This…

  18. Neural coding of basic reward terms of animal learning theory, game theory, microeconomics and behavioural ecology.

    Science.gov (United States)

    Schultz, Wolfram

    2004-04-01

    Neurons in a small number of brain structures detect rewards and reward-predicting stimuli and are active during the expectation of predictable food and liquid rewards. These neurons code the reward information according to basic terms of various behavioural theories that seek to explain reward-directed learning, approach behaviour and decision-making. The involved brain structures include groups of dopamine neurons, the striatum including the nucleus accumbens, the orbitofrontal cortex and the amygdala. The reward information is fed to brain structures involved in decision-making and organisation of behaviour, such as the dorsolateral prefrontal cortex and possibly the parietal cortex. The neural coding of basic reward terms derived from formal theories puts the neurophysiological investigation of reward mechanisms on firm conceptual grounds and provides neural correlates for the function of rewards in learning, approach behaviour and decision-making.
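
    The prediction-error account that underlies much of this work can be reduced to a textbook Rescorla-Wagner/TD(0) update, in which a dopamine-like error signal shrinks as the reward becomes predicted (a didactic stand-in, not taken from the review):

```python
ALPHA = 0.2   # learning rate
REWARD = 1.0  # reward magnitude delivered on every trial

value = 0.0            # learned reward prediction for the stimulus
prediction_errors = []
for trial in range(30):
    delta = REWARD - value        # dopamine-like reward prediction error
    prediction_errors.append(delta)
    value += ALPHA * delta        # Rescorla-Wagner / TD(0) update

print(round(value, 3), prediction_errors[0], round(prediction_errors[-1], 4))
```

    The error is large on the first trial and decays toward zero as the prediction converges on the reward, mirroring the shift of dopamine responses from rewards to reward-predicting stimuli described in this literature.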

  19. A Dual Coding View of Vocabulary Learning

    Science.gov (United States)

    Sadoski, Mark

    2005-01-01

    A theoretical perspective on acquiring sight vocabulary and developing meaningful vocabulary is presented. Dual Coding Theory assumes that cognition occurs in two independent but connected codes: a verbal code for language and a nonverbal code for mental imagery. The mixed research literature on using pictures in teaching sight vocabulary is…

  20. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed

    2018-04-08

    Convolutional Sparse Coding (CSC) is a well-established image representation model especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative. Experimental results show that using supervised convolutional learning results in two key advantages. First, we learn more semantically relevant filters in the dictionary and second, we achieve improved image reconstruction on unseen data.

  1. Neural Decoder for Topological Codes

    Science.gov (United States)

    Torlai, Giacomo; Melko, Roger G.

    2017-07-01

    We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.
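
    The input to such a decoder is the stabilizer syndrome. A minimal sketch of syndrome extraction for phase-flip errors on the toric code (indexing conventions are ours; the neural decoding step itself is not reproduced) looks like this:

```python
L = 4  # lattice size; qubits live on the 2*L*L edges of an L x L torus

def star_syndrome(z_errors):
    """Parity of phase-flip (Z) errors on the four edges meeting each vertex.
    z_errors maps ('h'|'v', x, y) -> 1 for an error on that edge, where
    ('h', x, y) joins vertices (x, y)-(x+1, y) and ('v', x, y) joins (x, y)-(x, y+1)."""
    defects = set()
    for x in range(L):
        for y in range(L):
            parity = (z_errors.get(('h', x, y), 0)
                      + z_errors.get(('h', (x - 1) % L, y), 0)
                      + z_errors.get(('v', x, y), 0)
                      + z_errors.get(('v', x, (y - 1) % L), 0)) % 2
            if parity:
                defects.add((x, y))
    return defects

print(star_syndrome({}))                                 # no errors: no defects
print(star_syndrome({('h', 0, 0): 1}))                   # single error: defect pair
print(star_syndrome({('h', 0, 0): 1, ('h', 1, 0): 1}))   # chain: defects at its ends
```

    Each error chain lights up only the star operators at its endpoints; decoding amounts to pairing these defects with a likely chain, which is the mapping the Boltzmann machine is trained to produce.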

  2. Code to Learn: Where Does It Belong in the K-12 Curriculum?

    OpenAIRE

    Jesús Moreno León; Gregorio Robles; Marcos Román-González

    2016-01-01

    The introduction of computer programming in K-12 has become mainstream in the last years, as countries around the world are making coding part of their curriculum. Nevertheless, there is a lack of empirical studies that investigate how learning to program at an early age affects other school subjects. In this regard, this paper compares three quasi-experimental research designs conducted in three different schools (n=129 students from 2nd and 6th grade), in order to assess the impact of intro...

  3. Examining the Role of Orthographic Coding Ability in Elementary Students with Previously Identified Reading Disability, Speech or Language Impairment, or Comorbid Language and Learning Disabilities

    Science.gov (United States)

    Haugh, Erin Kathleen

    2017-01-01

    The purpose of this study was to examine the role orthographic coding might play in distinguishing between membership in groups of language-based disability types. The sample consisted of 36 second and third-grade subjects who were administered the PAL-II Receptive Coding and Word Choice Accuracy subtest as a measure of orthographic coding…

  4. Assessment of Programming Language Learning Based on Peer Code Review Model: Implementation and Experience Report

    Science.gov (United States)

    Wang, Yanqing; Li, Hang; Feng, Yuqiang; Jiang, Yu; Liu, Ying

    2012-01-01

    The traditional assessment approach, in which one single written examination counts toward a student's total score, no longer meets new demands of programming language education. Based on a peer code review process model, we developed an online assessment system called "EduPCR" and used a novel approach to assess the learning of computer…

  5. THE INFLUENCE OF THE ASSESSMENT MODEL AND METHOD TOWARD THE SCIENCE LEARNING ACHIEVEMENT BY CONTROLLING THE STUDENTS' PREVIOUS KNOWLEDGE OF MATHEMATICS.

    OpenAIRE

    Adam Rumbalifar; I. G. N. Agung; Burhanuddin Tola.

    2018-01-01

    This research aims to study the influence of the assessment model and method toward the science learning achievement by controlling the students' previous knowledge of mathematics. This study was conducted at SMP East Seram district with the population of 295 students. This study applied a quasi-experimental method with 2 X 2 factorial design using the ANCOVA model. The findings after controlling the students' previous knowledge of mathematics show that the science learning achievement of th...

  6. Learning dictionaries of sparse codes of 3D movements of body joints for real-time human activity understanding.

    Science.gov (United States)

    Qi, Jin; Yang, Zhiyong

    2014-01-01

    Real-time human activity recognition is essential for human-robot interaction and assisted healthy independent living. Most previous work in this area was performed on traditional two-dimensional (2D) videos, and both global and local methods have been used. Since 2D videos are sensitive to changes in lighting conditions, view angle, and scale, researchers have in recent years begun to explore applications of 3D information to human activity understanding. Unfortunately, features that work well on 2D videos usually don't perform well on 3D videos, and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected onto the dictionaries and a set of sparse histograms of the projection coefficients is constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.
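
    The projection-and-histogram steps can be sketched with a toy example (synthetic 2D motions, a hand-built direction dictionary, and a nearest-centroid rule standing in for the paper's SVM; every name here is illustrative):

```python
import numpy as np

# Hypothetical dictionary of 2D motion atoms (unit directions), one per column.
D = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], dtype=float).T  # shape 2 x 4

def sparse_histogram(motions):
    """Histogram of absolute projection coefficients over sampled joint motions."""
    coefs = np.abs(D.T @ np.asarray(motions, dtype=float).T)  # 4 x n_samples
    hist = coefs.sum(axis=1)
    return hist / hist.sum()

# Synthetic activities: 'wave' is mostly horizontal motion, 'jump' mostly vertical.
wave = [(1.0, 0.1), (-0.9, 0.0), (1.1, -0.1), (-1.0, 0.05)]
jump = [(0.1, 1.0), (0.0, -1.1), (-0.05, 0.9), (0.1, -1.0)]
centroids = {"wave": sparse_histogram(wave), "jump": sparse_histogram(jump)}

def classify(motions):
    """Nearest-centroid classification of a histogram (SVM stand-in)."""
    h = sparse_histogram(motions)
    return min(centroids, key=lambda k: float(np.linalg.norm(h - centroids[k])))

print(classify([(0.8, 0.05), (-1.0, 0.1)]))   # mostly horizontal motion
```

    The histogram pools many per-sample projections into one fixed-length descriptor per activity clip, which is what makes the final classification step cheap enough for real-time use.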

  8. Computer codes in nuclear safety, radiation transport and dosimetry

    International Nuclear Information System (INIS)

    Bordy, J.M.; Kodeli, I.; Menard, St.; Bouchet, J.L.; Renard, F.; Martin, E.; Blazy, L.; Voros, S.; Bochud, F.; Laedermann, J.P.; Beaugelin, K.; Makovicka, L.; Quiot, A.; Vermeersch, F.; Roche, H.; Perrin, M.C.; Laye, F.; Bardies, M.; Struelens, L.; Vanhavere, F.; Gschwind, R.; Fernandez, F.; Quesne, B.; Fritsch, P.; Lamart, St.; Crovisier, Ph.; Leservot, A.; Antoni, R.; Huet, Ch.; Thiam, Ch.; Donadille, L.; Monfort, M.; Diop, Ch.; Ricard, M.

    2006-01-01

    The purpose of this conference was to describe the present state of computer codes dedicated to radiation transport, radiation source assessment, or dosimetry. The presentations were divided into two sessions: 1) methodology and 2) uses in industrial, medical, or research domains. It appears that two different calculation strategies prevail, both based on preliminary Monte-Carlo calculations with data storage: first, quick simulations made from a database of particle histories built through a previous Monte-Carlo simulation; and second, a neural-network approach involving a learning platform generated through a previous Monte-Carlo simulation. This document gathers the slides of the presentations.
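    The first strategy (amortizing one expensive Monte-Carlo run into a database of stored particle histories that later queries reuse) can be illustrated in a toy one-dimensional attenuation setting. The slab geometry and attenuation coefficient below are illustrative assumptions, not taken from the conference material.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-off Monte-Carlo run: store particle histories, here exponential
# free-path lengths in a purely absorbing medium with coefficient mu.
mu = 2.0
histories = rng.exponential(scale=1.0 / mu, size=100_000)

# Quick follow-up estimates reuse the stored histories instead of resampling:
# e.g. transmission probability through slabs of several thicknesses.
for depth in (0.5, 1.0, 2.0):
    transmission = np.mean(histories > depth)
    print(depth, round(transmission, 3))   # close to exp(-mu * depth)
```

The point is that each new `depth` query costs only a vectorized comparison against the stored histories, not a fresh simulation.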

  9. Optical coding theory with Prime

    CERN Document Server

    Kwong, Wing C

    2013-01-01

    Although several books cover the coding theory of wireless communications and the hardware technologies and coding techniques of optical CDMA, no book has been specifically dedicated to optical coding theory-until now. Written by renowned authorities in the field, Optical Coding Theory with Prime gathers together in one volume the fundamentals and developments of optical coding theory, with a focus on families of prime codes, supplemented with several families of non-prime codes. The book also explores potential applications to coding-based optical systems and networks. Learn How to Construct

  10. Code to Learn: Where Does It Belong in the K-12 Curriculum?

    Directory of Open Access Journals (Sweden)

    Jesús Moreno León

    2016-06-01

    Full Text Available The introduction of computer programming in K-12 has become mainstream in recent years, as countries around the world make coding part of their curricula. Nevertheless, there is a lack of empirical studies investigating how learning to program at an early age affects other school subjects. In this regard, this paper compares three quasi-experimental research designs conducted in three different schools (n=129 students) from 2nd and 6th grade, in order to assess the impact of introducing programming with Scratch at different stages and in several subjects. While both 6th grade experimental groups working with coding activities showed a statistically significant improvement in terms of academic performance, this was not the case in the 2nd grade classroom. Notable disparity was also found regarding the subject in which the programming activities were included, as in social studies the effect size was double that in mathematics.
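    The effect sizes compared above can be computed with a standard Cohen's d. This is a generic sketch on hypothetical post-test scores, not the study's data; the group means and sizes are made up.

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d: standardized mean difference between two independent samples."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                      (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled

rng = np.random.default_rng(0)
experimental = rng.normal(7.0, 1.0, 30)   # hypothetical post-test scores
control      = rng.normal(6.4, 1.0, 30)
print(round(cohens_d(experimental, control), 2))
```

Reporting d alongside the significance test is what lets the paper say the effect in social studies was "double" that in mathematics.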

  11. Libraries as Facilitators of Coding for All

    Science.gov (United States)

    Martin, Crystle

    2017-01-01

    Learning to code has been an increasingly frequent topic of conversation both in academic circles and popular media. Learning to code recently received renewed attention with the announcement of the White House's Computer Science for All initiative (Smith 2016). This initiative intends "to empower all American students from kindergarten…

  12. Amino acid codes in mitochondria as possible clues to primitive codes

    Science.gov (United States)

    Jukes, T. H.

    1981-01-01

    Differences between mitochondrial codes and the universal code indicate that an evolutionary simplification has taken place, rather than a return to a more primitive code. However, these differences make it evident that the universal code is not the only code possible, and therefore earlier codes may have differed markedly from the present code. The present universal code is probably a 'frozen accident.' The change in CUN codons from leucine to threonine (Neurospora vs. yeast mitochondria) indicates that neutral or near-neutral changes occurred in the corresponding proteins when this code change took place, caused presumably by a mutation in a tRNA gene.

  13. New quantum codes constructed from quaternary BCH codes

    Science.gov (United States)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Second, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  14. Learning through reactions

    DEFF Research Database (Denmark)

    Hasse, Cathrine

    2007-01-01

    Universities can, from the student's point of view, be seen as places of learning the explicit curriculum of a particular discipline. Based on fieldwork among physics students at the Niels Bohr Institute in Denmark, I argue that the learning of cultural code-curricula in higher educational...... institutions designate in ambiguous ways. I argue that students also have to learn institutional cultural codes, which are not the explicit curricula presented in textbooks, but a socially designated cultural code-curriculum learned through everyday interactions at the university institutes. I further...... argue that this code-curriculum is learned through what I shall term indefinite learning processes, which are mainly pre-discursive to the newcomer...

  15. An Evaluation of Automated Code Generation with the PetriCode Approach

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge

    2014-01-01

    Automated code generation is an important element of model-driven development methodologies. We have previously proposed an approach for code generation based on Coloured Petri Net models annotated with textual pragmatics for the network protocol domain. In this paper, we present and evaluate three important properties of our approach: platform independence, code integratability, and code readability. The evaluation shows that our approach can generate code for a wide range of platforms which is integratable and readable.

  16. Code quality issues in student programs

    NARCIS (Netherlands)

    Keuning, H.W.; Heeren, B.J.; Jeuring, J.T.

    2017-01-01

    Because low quality code can cause serious problems in software systems, students learning to program should pay attention to code quality early. Although many studies have investigated mistakes that students make during programming, we do not know much about the quality of their code. This study

  17. Chronic impairments in spatial learning and memory in rats previously exposed to chlorpyrifos or diisopropylfluorophosphate.

    Science.gov (United States)

    Terry, A V; Beck, W D; Warner, S; Vandenhuerk, L; Callahan, P M

    2012-01-01

    The acute toxicity of organophosphates (OPs) has been studied extensively; however, much less attention has been given to the subject of repeated exposures that are not associated with overt signs of toxicity (i.e., subthreshold exposures). The objective of this study was to determine if the protracted spatial learning impairments we have observed previously after repeated subthreshold exposures to the insecticide chlorpyrifos (CPF) or the alkylphosphate OP, diisopropylfluorophosphate (DFP) persisted for longer periods after exposure. Male Wistar rats (beginning at two months of age) were initially injected subcutaneously with CPF (10.0 or 18.0 mg/kg) or DFP (0.25 or 0.75 mg/kg) every other day for 30 days. After an extended OP-free washout period (behavioral testing begun 50 days after the last OP exposure), rats previously exposed to CPF, but not DFP, were impaired in a radial arm maze (RAM) win-shift task as well as a delayed non-match to position procedure. Later experiments (i.e., beginning 140 days after the last OP exposure) revealed impairments in the acquisition of a water maze hidden platform task associated with both OPs. However, only rats previously exposed to DFP were impaired in a second phase of testing when the platform location was changed (indicative of deficits of cognitive flexibility). These results indicate, therefore, that repeated, subthreshold exposures to CPF and DFP may lead to chronic deficits in spatial learning and memory (i.e., long after cholinesterase inhibition has abated) and that insecticide and alkylphosphate-based OPs may have differential effects depending on the cognitive domain evaluated. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. LeARN: a platform for detecting, clustering and annotating non-coding RNAs

    Directory of Open Access Journals (Sweden)

    Schiex Thomas

    2008-01-01

    Full Text Available Abstract Background In the last decade, sequencing projects have led to the development of a number of annotation systems dedicated to the structural and functional annotation of protein-coding genes. These annotation systems manage the annotation of non-protein-coding genes (ncRNAs) in a very crude way, allowing neither the editing of secondary structures nor the clustering of ncRNA genes into families, which are crucial for appropriate annotation of these molecules. Results LeARN is a flexible software package which handles the complete process of ncRNA annotation by integrating the layers of automatic detection and human curation. Conclusion This software provides the infrastructure to deal properly with ncRNAs in the framework of any annotation project. It fills the gap between existing prediction software, which detects independent ncRNA occurrences, and public ncRNA repositories, which do not offer the flexibility and interactivity required for annotation projects. The software is freely available from the download section of the website http://bioinfo.genopole-toulouse.prd.fr/LeARN

  19. TU-CD-BRD-01: Making Incident Learning Practical and Useful: Challenges and Previous Experiences

    International Nuclear Information System (INIS)

    Ezzell, G.

    2015-01-01

    aside for audience members to contribute to the discussion. Learning Objectives: Learn how to promote the use of an incident learning system in a clinic. Learn how to convert “event reporting” into “incident learning”. See examples of practice changes that have come out of learning systems. Learn how the RO-ILS system can be used as a primary internal learning system. Learn how to create succinct, meaningful reports useful to outside readers. Gary Ezzell chairs the AAPM committee overseeing RO-ILS and has received an honorarium from ASTRO for working on the committee reviewing RO-ILS reports. Derek Brown is a director of http://TreatSafely.org . Brett Miller has previously received travel expenses and an honorarium from Varian. Phillip Beron has nothing to report

  20. TU-CD-BRD-01: Making Incident Learning Practical and Useful: Challenges and Previous Experiences

    Energy Technology Data Exchange (ETDEWEB)

    Ezzell, G. [Mayo Clinic Arizona (United States)

    2015-06-15

    aside for audience members to contribute to the discussion. Learning Objectives: Learn how to promote the use of an incident learning system in a clinic. Learn how to convert “event reporting” into “incident learning”. See examples of practice changes that have come out of learning systems. Learn how the RO-ILS system can be used as a primary internal learning system. Learn how to create succinct, meaningful reports useful to outside readers. Gary Ezzell chairs the AAPM committee overseeing RO-ILS and has received an honorarium from ASTRO for working on the committee reviewing RO-ILS reports. Derek Brown is a director of http://TreatSafely.org . Brett Miller has previously received travel expenses and an honorarium from Varian. Phillip Beron has nothing to report.

  1. Shared acoustic codes underlie emotional communication in music and speech-Evidence from deep transfer learning.

    Science.gov (United States)

    Coutinho, Eduardo; Schuller, Björn

    2017-01-01

    Music and speech exhibit striking similarities in the communication of emotions in the acoustic domain, in such a way that the communication of specific emotions is achieved, at least to a certain extent, by means of shared acoustic patterns. From an Affective Sciences point of view, determining the degree of overlap between both domains is fundamental to understand the shared mechanisms underlying such phenomenon. From a Machine learning perspective, the overlap between acoustic codes for emotional expression in music and speech opens new possibilities to enlarge the amount of data available to develop music and speech emotion recognition systems. In this article, we investigate time-continuous predictions of emotion (Arousal and Valence) in music and speech, and the Transfer Learning between these domains. We establish a comparative framework including intra- (i.e., models trained and tested on the same modality, either music or speech) and cross-domain experiments (i.e., models trained in one modality and tested on the other). In the cross-domain context, we evaluated two strategies-the direct transfer between domains, and the contribution of Transfer Learning techniques (feature-representation-transfer based on Denoising Auto Encoders) for reducing the gap in the feature space distributions. Our results demonstrate an excellent cross-domain generalisation performance with and without feature representation transfer in both directions. In the case of music, cross-domain approaches outperformed intra-domain models for Valence estimation, whereas for Speech intra-domain models achieved the best performance. This is the first demonstration of shared acoustic codes for emotional expression in music and speech in the time-continuous domain.
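    The feature-representation-transfer idea (training a denoising autoencoder on both domains and using its hidden layer as a shared feature space) can be sketched with a small multilayer perceptron trained to reconstruct clean inputs from corrupted ones. This is a rough stand-in for the paper's Denoising Auto Encoders: the feature matrices, noise level, and hidden size are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical acoustic feature matrices for the two domains (synthetic stand-ins).
music  = rng.normal(size=(300, 20))
speech = rng.normal(size=(300, 20)) + 0.5       # simulated distribution shift

# Denoising autoencoder in spirit: reconstruct clean inputs from corrupted ones.
X = np.vstack([music, speech])
noisy = X + rng.normal(scale=0.3, size=X.shape)
dae = MLPRegressor(hidden_layer_sizes=(8,), activation='tanh',
                   max_iter=2000, random_state=0).fit(noisy, X)

# Use the learned hidden layer as a shared representation for both domains.
def hidden(Z):
    return np.tanh(Z @ dae.coefs_[0] + dae.intercepts_[0])

music_h, speech_h = hidden(music), hidden(speech)
```

Downstream emotion regressors would then be trained on `music_h` or `speech_h` rather than the raw features.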

  2. Defending Malicious Script Attacks Using Machine Learning Classifiers

    Directory of Open Access Journals (Sweden)

    Nayeem Khan

    2017-01-01

    Full Text Available Web applications have become a primary target for cybercriminals, who inject malware, especially JavaScript, to perform malicious activities such as impersonation. Thus, it is imperative to detect such malicious code in real time, before any malicious activity is performed. This study proposes an efficient method of detecting previously unknown malicious JavaScript using an interceptor at the client side by classifying the key features of the malicious code. The feature subset was obtained by using a wrapper method for dimensionality reduction. Supervised machine learning classifiers were used on the dataset to achieve high accuracy. Experimental results show that our method can efficiently classify malicious code from benign code with promising results.
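    The workflow described (wrapper-based feature selection followed by a supervised classifier) can be sketched with scikit-learn's sequential feature selector, which "wraps" the classifier itself in the selection loop. The data here is synthetic; the feature semantics mentioned in the comment and all parameter values are illustrative assumptions, not the paper's setup.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for script features (e.g., counts of eval/unescape calls,
# string entropy, URL tokens); labels: 1 = malicious, 0 = benign.
X, y = make_classification(n_samples=300, n_features=12, n_informative=4,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Wrapper method: greedily keep the features that most help the wrapped
# classifier under cross-validation.
clf = LogisticRegression(max_iter=1000)
sfs = SequentialFeatureSelector(clf, n_features_to_select=4, cv=3).fit(Xtr, ytr)

clf.fit(sfs.transform(Xtr), ytr)
acc = clf.score(sfs.transform(Xte), yte)
print(sfs.get_support().sum(), round(acc, 2))
```

Unlike filter methods, the wrapper evaluates candidate subsets by the classifier's own accuracy, which is why the paper pairs it with supervised learners.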

  3. Expressing Youth Voice through Video Games and Coding

    Science.gov (United States)

    Martin, Crystle

    2017-01-01

    A growing body of research focuses on the impact of video games and coding on learning. The research often elevates learning the technical skills associated with video games and coding or the importance of problem solving and computational thinking, which are, of course, necessary and relevant. However, the literature less often explores how young…

  4. Representing high-dimensional data to intelligent prostheses and other wearable assistive robots: A first comparison of tile coding and selective Kanerva coding.

    Science.gov (United States)

    Travnik, Jaden B; Pilarski, Patrick M

    2017-07-01

    Prosthetic devices have advanced in their capabilities and in the number and type of sensors included in their design. As the space of sensorimotor data available to a conventional or machine learning prosthetic control system increases in dimensionality and complexity, it becomes increasingly important that this data be represented in a useful and computationally efficient way. Well-structured sensory data allows prosthetic control systems to make informed, appropriate control decisions. In this study, we explore the impact that increased sensorimotor information has on current machine learning prosthetic control approaches. Specifically, we examine the effect that high-dimensional sensory data has on the computation time and prediction performance of a true-online temporal-difference learning prediction method as embedded within a resource-limited upper-limb prosthesis control system. We present results comparing tile coding, the dominant linear representation for real-time prosthetic machine learning, with a newly proposed modification to Kanerva coding that we call selective Kanerva coding. In addition to showing promising results for selective Kanerva coding, our results confirm potential limitations to tile coding as the number of sensory input dimensions increases. To our knowledge, this study is the first to explicitly examine representations for real-time machine learning prosthetic devices in general terms. This work therefore provides an important step towards forming an efficient prosthesis-eye view of the world, wherein prompt and accurate representations of high-dimensional data may be provided to machine learning control systems within artificial limbs and other assistive rehabilitation technologies.
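    Tile coding, the baseline representation the study compares against, maps a continuous input to a sparse binary vector using several overlapping, offset tilings. A minimal one-dimensional sketch (the tiling counts and input range are illustrative, not the paper's configuration):

```python
import numpy as np

def tile_code(x, n_tilings=4, n_tiles=8, lo=0.0, hi=1.0):
    """Binary tile-coded features for a scalar input in [lo, hi].

    Each tiling partitions the range into n_tiles intervals, offset by a
    fraction of one tile width, and activates exactly one tile per tiling.
    """
    features = np.zeros(n_tilings * n_tiles)
    width = (hi - lo) / n_tiles
    for t in range(n_tilings):
        offset = t * width / n_tilings
        idx = int((x - lo + offset) / width)
        idx = min(idx, n_tiles - 1)
        features[t * n_tiles + idx] = 1.0
    return features

phi = tile_code(0.37)
print(phi.sum())    # 4.0 -- one active tile per tiling
```

The fixed number of active features per input is what makes tile coding fast for linear temporal-difference learners; its limitation, as the abstract notes, is that the feature vector grows exponentially with input dimensionality, which Kanerva-style coding avoids.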

  5. Remembering to learn: independent place and journey coding mechanisms contribute to memory transfer.

    Science.gov (United States)

    Bahar, Amir S; Shapiro, Matthew L

    2012-02-08

    The neural mechanisms that integrate new episodes with established memories are unknown. When rats explore an environment, CA1 cells fire in place fields that indicate locations. In goal-directed spatial memory tasks, some place fields differentiate behavioral histories ("journey-dependent" place fields) while others do not ("journey-independent" place fields). To investigate how these signals inform learning and memory for new and familiar episodes, we recorded CA1 and CA3 activity in rats trained to perform a "standard" spatial memory task in a plus maze and in two new task variants. A "switch" task exchanged the start and goal locations in the same environment; an "altered environment" task contained unfamiliar local and distal cues. In the switch task, performance was mildly impaired, new firing maps were stable, but the proportion and stability of journey-dependent place fields declined. In the altered environment, overall performance was strongly impaired, new firing maps were unstable, and stable proportions of journey-dependent place fields were maintained. In both tasks, memory errors were accompanied by a decline in journey codes. The different dynamics of place and journey coding suggest that they reflect separate mechanisms and contribute to distinct memory computations. Stable place fields may represent familiar relationships among environmental features that are required for consistent memory performance. Journey-dependent activity may correspond with goal-directed behavioral sequences that reflect expectancies that generalize across environments. The complementary signals could help link current events with established memories, so that familiarity with either a behavioral strategy or an environment can inform goal-directed learning.

  6. EquiFACS: The Equine Facial Action Coding System.

    Directory of Open Access Journals (Sweden)

    Jen Wathan

    Full Text Available Although previous studies of horses have investigated their facial expressions in specific contexts, e.g. pain, until now there has been no methodology available that documents all the possible facial movements of the horse and provides a way to record all potential facial configurations. This is essential for an objective description of horse facial expressions across a range of contexts that reflect different emotional states. Facial Action Coding Systems (FACS) provide a systematic methodology of identifying and coding facial expressions on the basis of underlying facial musculature and muscle movement. FACS are anatomically based and document all possible facial movements rather than a configuration of movements associated with a particular situation. Consequently, FACS can be applied as a tool for a wide range of research questions. We developed FACS for the domestic horse (Equus caballus) through anatomical investigation of the underlying musculature and subsequent analysis of naturally occurring behaviour captured on high quality video. Discrete facial movements were identified and described in terms of the underlying muscle contractions, in correspondence with previous FACS systems. The reliability of others to be able to learn this system (EquiFACS) and consistently code behavioural sequences was high, and this included people with no previous experience of horses. A wide range of facial movements were identified, including many that are also seen in primates and other domestic animals (dogs and cats). EquiFACS provides a method that can now be used to document the facial movements associated with different social contexts and thus to address questions relevant to understanding social cognition and comparative psychology, as well as informing current veterinary and animal welfare practices.

  7. LiveCode mobile development

    CERN Document Server

    Lavieri, Edward D

    2013-01-01

    A practical guide written in a tutorial style, "LiveCode Mobile Development Hotshot" walks you step-by-step through 10 individual projects. Every project is divided into subtasks to make learning more organized and easy to follow, with explanations, diagrams, screenshots, and downloadable material. This book is great for anyone who wants to develop mobile applications using LiveCode. You should be familiar with LiveCode and have access to a smartphone. You are not expected to know how to create graphics or audio clips.

  8. QR codes for dummies

    CERN Document Server

    Waters, Joe

    2012-01-01

    Find out how to effectively create, use, and track QR codes QR (Quick Response) codes are popping up everywhere, and businesses are reaping the rewards. Get in on the action with the no-nonsense advice in this streamlined, portable guide. You'll find out how to get started, plan your strategy, and actually create the codes. Then you'll learn to link codes to mobile-friendly content, track your results, and develop ways to give your customers value that will keep them coming back. It's all presented in the straightforward style you've come to know and love, with a dash of humor thrown in.

  9. Shared acoustic codes underlie emotional communication in music and speech-Evidence from deep transfer learning.

    Directory of Open Access Journals (Sweden)

    Eduardo Coutinho

    Full Text Available Music and speech exhibit striking similarities in the communication of emotions in the acoustic domain, in such a way that the communication of specific emotions is achieved, at least to a certain extent, by means of shared acoustic patterns. From an Affective Sciences point of view, determining the degree of overlap between both domains is fundamental to understand the shared mechanisms underlying such phenomenon. From a Machine learning perspective, the overlap between acoustic codes for emotional expression in music and speech opens new possibilities to enlarge the amount of data available to develop music and speech emotion recognition systems. In this article, we investigate time-continuous predictions of emotion (Arousal and Valence) in music and speech, and the Transfer Learning between these domains. We establish a comparative framework including intra- (i.e., models trained and tested on the same modality, either music or speech) and cross-domain experiments (i.e., models trained in one modality and tested on the other). In the cross-domain context, we evaluated two strategies-the direct transfer between domains, and the contribution of Transfer Learning techniques (feature-representation-transfer based on Denoising Auto Encoders) for reducing the gap in the feature space distributions. Our results demonstrate an excellent cross-domain generalisation performance with and without feature representation transfer in both directions. In the case of music, cross-domain approaches outperformed intra-domain models for Valence estimation, whereas for Speech intra-domain models achieved the best performance. This is the first demonstration of shared acoustic codes for emotional expression in music and speech in the time-continuous domain.

  10. Shared acoustic codes underlie emotional communication in music and speech—Evidence from deep transfer learning

    Science.gov (United States)

    Schuller, Björn

    2017-01-01

    Music and speech exhibit striking similarities in the communication of emotions in the acoustic domain, in such a way that the communication of specific emotions is achieved, at least to a certain extent, by means of shared acoustic patterns. From an Affective Sciences point of view, determining the degree of overlap between both domains is fundamental to understand the shared mechanisms underlying such phenomenon. From a Machine learning perspective, the overlap between acoustic codes for emotional expression in music and speech opens new possibilities to enlarge the amount of data available to develop music and speech emotion recognition systems. In this article, we investigate time-continuous predictions of emotion (Arousal and Valence) in music and speech, and the Transfer Learning between these domains. We establish a comparative framework including intra- (i.e., models trained and tested on the same modality, either music or speech) and cross-domain experiments (i.e., models trained in one modality and tested on the other). In the cross-domain context, we evaluated two strategies—the direct transfer between domains, and the contribution of Transfer Learning techniques (feature-representation-transfer based on Denoising Auto Encoders) for reducing the gap in the feature space distributions. Our results demonstrate an excellent cross-domain generalisation performance with and without feature representation transfer in both directions. In the case of music, cross-domain approaches outperformed intra-domain models for Valence estimation, whereas for Speech intra-domain models achieved the best performance. This is the first demonstration of shared acoustic codes for emotional expression in music and speech in the time-continuous domain. PMID:28658285

  11. Performance analysis of WS-EWC coded optical CDMA networks with/without LDPC codes

    Science.gov (United States)

    Huang, Chun-Ming; Huang, Jen-Fa; Yang, Chao-Chin

    2010-10-01

    An extended Welch-Costas (EWC) code family for wavelength-division-multiplexing/spectral-amplitude-coding (WDM/SAC; WS) optical code-division multiple-access (OCDMA) networks is proposed. This system has superior performance compared to the previous modified quadratic congruence (MQC) coded OCDMA networks. However, since the performance of such a network is unsatisfactory at higher data bit rates, a class of quasi-cyclic low-density parity-check (QC-LDPC) codes is adopted to improve it. Simulation results show that the performance of the high-speed WS-EWC coded OCDMA network can be greatly improved by using LDPC codes.

  12. pix2code: Generating Code from a Graphical User Interface Screenshot

    OpenAIRE

    Beltramelli, Tony

    2017-01-01

    Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized software, websites, and mobile applications. In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically generate code from a single input image with over 77% accuracy for three different platforms (i.e. iOS, Android and web-based technologies).

  13. Natural image sequences constrain dynamic receptive fields and imply a sparse code.

    Science.gov (United States)

    Häusler, Chris; Susemihl, Alex; Nawrot, Martin P

    2013-11-06

    In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multivariate benchmark dataset, this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
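    The static baseline this work extends (unsupervised feature learning under a sparsity constraint on image patches) can be sketched with off-the-shelf dictionary learning. This is not the paper's temporal restricted Boltzmann machine; it only illustrates the sparse-coding step, and the patch data below is random rather than extracted from natural movies.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

# Synthetic stand-ins for whitened 8x8 image patches (real work would
# extract these from natural images or movie frames).
patches = rng.normal(size=(500, 64))

# Sparsity-constrained feature learning: each patch is approximated by a
# sparse combination of learned dictionary atoms ("receptive fields").
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                   random_state=0).fit(patches)
codes = dico.transform(patches)

# Most coefficients are exactly zero -- a sparse code.
print(codes.shape, round(float(np.mean(codes == 0)), 2))
```

On natural input, the rows of `dico.components_` would resemble the localized, oriented filters that the cited work compares to simple-cell receptive fields.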

  14. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed; Ghanem, Bernard; Wonka, Peter

    2018-01-01

    coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements

  15. Codes and curves

    CERN Document Server

    Walker, Judy L

    2000-01-01

    When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...
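    The linear codes the monograph starts from can be demonstrated concretely. Reed-Solomon and algebraic-geometry codes need finite-field arithmetic beyond a short sketch, so the example below uses the classic [7,4] Hamming code over GF(2): encode with a generator matrix, then detect and correct a single bit error via the syndrome. The message bits are arbitrary.

```python
import numpy as np

# Generator and parity-check matrices for the [7,4] Hamming code over GF(2),
# in systematic form G = [I | P], H = [P^T | I].
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

msg = np.array([1, 0, 1, 1])
codeword = msg @ G % 2

# Flip one bit in transit and correct it using the syndrome.
received = codeword.copy()
received[2] ^= 1
syndrome = H @ received % 2
# A single-bit error produces a syndrome equal to the corresponding column of H.
err_pos = int(np.where((H.T == syndrome).all(axis=1))[0][0])
received[err_pos] ^= 1
print(np.array_equal(received, codeword))   # True
```

The algebraic-geometry codes discussed in the monograph generalize exactly this picture: codewords are still kernel/image constructions of linear maps, but over larger fields with matrices derived from divisors on curves.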

  16. Synchrony and motor mimicking in chimpanzee observational learning

    Science.gov (United States)

    Fuhrmann, Delia; Ravignani, Andrea; Marshall-Pescini, Sarah; Whiten, Andrew

    2014-01-01

    Cumulative tool-based culture underwrote our species' evolutionary success, and tool-based nut-cracking is one of the strongest candidates for cultural transmission in our closest relatives, chimpanzees. However the social learning processes that may explain both the similarities and differences between the species remain unclear. A previous study of nut-cracking by initially naïve chimpanzees suggested that a learning chimpanzee holding no hammer nevertheless replicated hammering actions it witnessed. This observation has potentially important implications for the nature of the social learning processes and underlying motor coding involved. In the present study, model and observer actions were quantified frame-by-frame and analysed with stringent statistical methods, demonstrating synchrony between the observer's and model's movements, cross-correlation of these movements above chance level and a unidirectional transmission process from model to observer. These results provide the first quantitative evidence for motor mimicking underlain by motor coding in apes, with implications for mirror neuron function. PMID:24923651
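A normalized cross-correlation of the kind used above to compare model and observer movement traces can be sketched as follows (the two traces are invented toy series, not the study's frame-by-frame data):

```python
import math

# Normalized cross-correlation at lag 0 (Pearson correlation) between
# two movement traces; values near 1 indicate synchronized movement.
def cross_correlation(x, y):
    """Pearson correlation of two equal-length sequences."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

model    = [0, 1, 4, 9, 4, 1, 0, 1, 4, 9]   # invented "model" trace
observer = [1, 2, 5, 8, 5, 2, 1, 2, 5, 8]   # similar, slightly offset trace
r = cross_correlation(model, observer)
```

In a chance-level analysis like the study's, this statistic would be compared against correlations of temporally shuffled traces.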

  17. Synchrony and motor mimicking in chimpanzee observational learning.

    Science.gov (United States)

    Fuhrmann, Delia; Ravignani, Andrea; Marshall-Pescini, Sarah; Whiten, Andrew

    2014-06-13

    Cumulative tool-based culture underwrote our species' evolutionary success, and tool-based nut-cracking is one of the strongest candidates for cultural transmission in our closest relatives, chimpanzees. However the social learning processes that may explain both the similarities and differences between the species remain unclear. A previous study of nut-cracking by initially naïve chimpanzees suggested that a learning chimpanzee holding no hammer nevertheless replicated hammering actions it witnessed. This observation has potentially important implications for the nature of the social learning processes and underlying motor coding involved. In the present study, model and observer actions were quantified frame-by-frame and analysed with stringent statistical methods, demonstrating synchrony between the observer's and model's movements, cross-correlation of these movements above chance level and a unidirectional transmission process from model to observer. These results provide the first quantitative evidence for motor mimicking underlain by motor coding in apes, with implications for mirror neuron function.

  18. Men's health promotion interventions: what have we learned from previous programmes.

    Science.gov (United States)

    Robertson, Steve; Witty, Karl; Zwolinsky, Steve; Day, Rhiannon

    2013-11-01

Concern persists in health-related literature about men's reduced life expectancy and higher premature death rates; this is often linked to difficulties in engaging with men as a client group. However, some innovative projects and programmes, often led by health visitors or other community based nurses, have developed successful health promotion work with men. This article collates existing tacit knowledge (previous learning) about men's health interventions by integrating interview data from nine practitioners who have established such initiatives with data from 35 men's health project reports to consider 'what works'. Five themes stood out as being significant across the data reviewed: using the right setting (often outside statutory services); ensuring the right approach (drawing on male-specific interests and language); actively listening to what local men say; appropriate training (initial and ongoing) for those involved in such work; and partnership working with local community groups, businesses and statutory service providers. While not a panacea for working with any and all men, these themes form a good basis for successful engagement with men and align well with what a recent review of health visitor interventions suggests works in helping bridge service provision-uptake gaps.

  19. Phonetic radicals, not phonological coding systems, support orthographic learning via self-teaching in Chinese.

    Science.gov (United States)

    Li, Luan; Wang, Hua-Chen; Castles, Anne; Hsieh, Miao-Ling; Marinus, Eva

    2018-07-01

According to the self-teaching hypothesis (Share, 1995), phonological decoding is fundamental to acquiring orthographic representations of novel written words. However, phonological decoding is not straightforward in non-alphabetic scripts such as Chinese, where words are presented as characters. Here, we present the first study investigating the role of phonological decoding in orthographic learning in Chinese. We examined two possible types of phonological decoding: the use of phonetic radicals, an internal phonological aid, and the use of Zhuyin, an external phonological coding system. Seventy-three Grade 2 children were taught the pronunciations and meanings of twelve novel compound characters over four days. They were then exposed to the written characters in short stories, and were assessed on their reading accuracy and on their subsequent orthographic learning via orthographic choice and spelling tasks. The novel characters were assigned three different types of pronunciation in relation to their phonetic radicals: (1) a pronunciation that is identical to the phonetic radical in isolation; (2) a common alternative pronunciation associated with the phonetic radical when it appears in other characters; and (3) a pronunciation that is unrelated to the phonetic radical. The presence of Zhuyin was also manipulated. The children read the novel characters more accurately when phonological cues from the phonetic radicals were available and in the presence of Zhuyin. However, only the phonetic radicals facilitated orthographic learning. The findings provide the first empirical evidence of orthographic learning via self-teaching in Chinese, and reveal how phonological decoding functions to support learning in non-alphabetic writing systems. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Biased ART: a neural architecture that shifts attention toward previously disregarded features following an incorrect prediction.

    Science.gov (United States)

    Carpenter, Gail A; Gaddam, Sai Chaitanya

    2010-04-01

    Memories in Adaptive Resonance Theory (ART) networks are based on matched patterns that focus attention on those portions of bottom-up inputs that match active top-down expectations. While this learning strategy has proved successful for both brain models and applications, computational examples show that attention to early critical features may later distort memory representations during online fast learning. For supervised learning, biased ARTMAP (bARTMAP) solves the problem of over-emphasis on early critical features by directing attention away from previously attended features after the system makes a predictive error. Small-scale, hand-computed analog and binary examples illustrate key model dynamics. Two-dimensional simulation examples demonstrate the evolution of bARTMAP memories as they are learned online. Benchmark simulations show that featural biasing also improves performance on large-scale examples. One example, which predicts movie genres and is based, in part, on the Netflix Prize database, was developed for this project. Both first principles and consistent performance improvements on all simulation studies suggest that featural biasing should be incorporated by default in all ARTMAP systems. Benchmark datasets and bARTMAP code are available from the CNS Technology Lab Website: http://techlab.bu.edu/bART/. Copyright 2009 Elsevier Ltd. All rights reserved.

  1. Towards a universal code formatter through machine learning

    NARCIS (Netherlands)

    Parr, T. (Terence); J.J. Vinju (Jurgen)

    2016-01-01

    textabstractThere are many declarative frameworks that allow us to implement code formatters relatively easily for any specific language, but constructing them is cumbersome. The first problem is that "everybody" wants to format their code differently, leading to either many formatter variants or a

  2. On lattices, learning with errors, cryptography, and quantum

    International Nuclear Information System (INIS)

    Regev, O.

    2004-01-01

Full Text: Our main result is a reduction from worst-case lattice problems such as SVP and SIVP to a certain learning problem. This learning problem is a natural extension of the 'learning from parity with error' problem to higher moduli. It can also be viewed as the problem of decoding from a random linear code. This, we believe, gives a strong indication that these problems are hard. Our reduction, however, is quantum. Hence, an efficient solution to the learning problem implies a quantum algorithm for SVP and SIVP. A main open question is whether this reduction can be made classical. Using the main result, we obtain a public-key cryptosystem whose hardness is based on the worst-case quantum hardness of SVP and SIVP. Previous lattice-based public-key cryptosystems such as the one by Ajtai and Dwork were only based on unique-SVP, a special case of SVP. The new cryptosystem is much more efficient than previous cryptosystems: the public key is of size Õ(n²) and encrypting a message increases its size by Õ(n) (in previous cryptosystems these values are Õ(n⁴) and Õ(n²), respectively)
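The learning-with-errors problem behind this cryptosystem can be illustrated with a toy single-bit encryption in the style of Regev's scheme. The parameters below are far too small to be secure and are chosen only to make the arithmetic visible; the noise bound is set so that decryption always succeeds in this sketch:

```python
import random

random.seed(0)
q, n, m = 257, 8, 32      # toy parameters, far too small to be secure

s = [random.randrange(q) for _ in range(n)]              # secret key
# Public key: m LWE samples (a_i, <a_i, s> + e_i mod q), noise e_i in {-1,0,1}.
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
b = [(sum(ai * si for ai, si in zip(row, s)) + random.randrange(-1, 2)) % q
     for row in A]

def encrypt(bit):
    """Sum a random subset of the samples; encode the bit as 0 or q//2."""
    subset = [i for i in range(m) if random.random() < 0.5]
    a = [sum(A[i][j] for i in subset) % q for j in range(n)]
    c = (sum(b[i] for i in subset) + bit * (q // 2)) % q
    return a, c

def decrypt(a, c):
    """The bit is 1 iff c - <a, s> lies closer to q/2 than to 0 mod q."""
    d = (c - sum(ai * si for ai, si in zip(a, s))) % q
    return int(min(d, q - d) > q // 4)
```

Since the accumulated noise is at most m = 32 in magnitude, strictly less than q//4 = 64, the decision threshold always recovers the bit here; real parameters make the subset-sum of samples hard to untangle without the secret.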

  3. Recent Improvements in the SHIELD-HIT Code

    DEFF Research Database (Denmark)

    Hansen, David Christoffer; Lühr, Armin Christian; Herrmann, Rochus

    2012-01-01

Purpose: The SHIELD-HIT Monte Carlo particle transport code has previously been used to study a wide range of problems for heavy-ion treatment and has been benchmarked extensively against other Monte Carlo codes and experimental data. Here, an improved version of SHIELD-HIT is developed.

  4. Lessons learned from new construction utility demand side management programs and their implications for implementing building energy codes

    Energy Technology Data Exchange (ETDEWEB)

    Wise, B.K.; Hughes, K.R.; Danko, S.L.; Gilbride, T.L.

    1994-07-01

    This report was prepared for the US Department of Energy (DOE) Office of Codes and Standards by the Pacific Northwest Laboratory (PNL) through its Building Energy Standards Program (BESP). The purpose of this task was to identify demand-side management (DSM) strategies for new construction that utilities have adopted or developed to promote energy-efficient design and construction. PNL conducted a survey of utilities and used the information gathered to extrapolate lessons learned and to identify evolving trends in utility new-construction DSM programs. The ultimate goal of the task is to identify opportunities where states might work collaboratively with utilities to promote the adoption, implementation, and enforcement of energy-efficient building energy codes.

  5. Learning Concepts, Language, and Literacy in Hybrid Linguistic Codes: The Multilingual Maze of Urban Grade 1 Classrooms in South Africa

    Science.gov (United States)

    Henning, Elizabeth

    2012-01-01

    From the field of developmental psycholinguistics and from conceptual development theory there is evidence that excessive linguistic "code-switching" in early school education may pose some hazards for the learning of young multilingual children. In this article the author addresses the issue, invoking post-Piagetian and neo-Vygotskian…

  6. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    2001-01-01

The description of reactor lattice codes is carried out using the example of the WIMSD-5B code. The WIMS code in its various versions is the most widely recognised lattice code. It is used in all parts of the world for calculations of research and power reactors. The version WIMSD-5B is distributed free of charge by the NEA Data Bank. The description of its main features given in the present lecture follows the aspects defined previously for lattice calculations in the lecture on Reactor Lattice Transport Calculations. The spatial models are described, and the approach to the energy treatment is given. Finally, the specific algorithm applied in fuel depletion calculations is outlined. (author)

  7. Artificial Intelligence Learning Semantics via External Resources for Classifying Diagnosis Codes in Discharge Notes.

    Science.gov (United States)

    Lin, Chin; Hsu, Chia-Jung; Lou, Yu-Sheng; Yeh, Shih-Jen; Lee, Chia-Cheng; Su, Sui-Lung; Chen, Hsiang-Cheng

    2017-11-06

    Automated disease code classification using free-text medical information is important for public health surveillance. However, traditional natural language processing (NLP) pipelines are limited, so we propose a method combining word embedding with a convolutional neural network (CNN). Our objective was to compare the performance of traditional pipelines (NLP plus supervised machine learning models) with that of word embedding combined with a CNN in conducting a classification task identifying International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) diagnosis codes in discharge notes. We used 2 classification methods: (1) extracting from discharge notes some features (terms, n-gram phrases, and SNOMED CT categories) that we used to train a set of supervised machine learning models (support vector machine, random forests, and gradient boosting machine), and (2) building a feature matrix, by a pretrained word embedding model, that we used to train a CNN. We used these methods to identify the chapter-level ICD-10-CM diagnosis codes in a set of discharge notes. We conducted the evaluation using 103,390 discharge notes covering patients hospitalized from June 1, 2015 to January 31, 2017 in the Tri-Service General Hospital in Taipei, Taiwan. We used the receiver operating characteristic curve as an evaluation measure, and calculated the area under the curve (AUC) and F-measure as the global measure of effectiveness. In 5-fold cross-validation tests, our method had a higher testing accuracy (mean AUC 0.9696; mean F-measure 0.9086) than traditional NLP-based approaches (mean AUC range 0.8183-0.9571; mean F-measure range 0.5050-0.8739). A real-world simulation that split the training sample and the testing sample by date verified this result (mean AUC 0.9645; mean F-measure 0.9003 using the proposed method). 
Further analysis showed that the convolutional layers of the CNN effectively identified a large number of keywords and automatically
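The "traditional pipeline" baseline (extracted features plus a supervised classifier) can be sketched with a minimal bag-of-words Naive Bayes model. The discharge-note snippets and chapter labels below are invented toy examples, not data from the study:

```python
import math
from collections import Counter, defaultdict

# Minimal bag-of-words multinomial Naive Bayes with Laplace smoothing,
# sketching the feature-based supervised baseline described above.
# The notes and labels are invented toy examples, not study data.
train = [
    ("chest pain shortness of breath", "circulatory"),
    ("myocardial infarction chest pain", "circulatory"),
    ("fever cough sputum pneumonia", "respiratory"),
    ("cough wheeze shortness of breath asthma", "respiratory"),
]

counts = defaultdict(Counter)   # per-label word counts
labels = Counter()              # label frequencies
for text, label in train:
    labels[label] += 1
    counts[label].update(text.split())

vocab = {w for c in counts.values() for w in c}

def classify(text):
    """Pick the label maximizing log P(label) + sum of log P(word|label)."""
    def score(label):
        total = sum(counts[label].values())
        return math.log(labels[label]) + sum(
            math.log((counts[label][w] + 1) / (total + len(vocab)))
            for w in text.split())
    return max(labels, key=score)
```

The CNN approach in the study replaces these hand-built count features with learned word-embedding convolutions, but the supervised classification framing is the same.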

  8. Investigating the use of quick response codes in the gross anatomy laboratory.

    Science.gov (United States)

    Traser, Courtney J; Hoffman, Leslie A; Seifert, Mark F; Wilson, Adam B

    2015-01-01

    The use of quick response (QR) codes within undergraduate university courses is on the rise, yet literature concerning their use in medical education is scant. This study examined student perceptions on the usefulness of QR codes as learning aids in a medical gross anatomy course, statistically analyzed whether this learning aid impacted student performance, and evaluated whether performance could be explained by the frequency of QR code usage. Question prompts and QR codes tagged on cadaveric specimens and models were available for four weeks as learning aids to medical (n = 155) and doctor of physical therapy (n = 39) students. Each QR code provided answers to posed questions in the form of embedded text or hyperlinked web pages. Students' perceptions were gathered using a formative questionnaire and practical examination scores were used to assess potential gains in student achievement. Overall, students responded positively to the use of QR codes in the gross anatomy laboratory as 89% (57/64) agreed the codes augmented their learning of anatomy. The users' most noticeable objection to using QR codes was the reluctance to bring their smartphones into the gross anatomy laboratory. A comparison between the performance of QR code users and non-users was found to be nonsignificant (P = 0.113), and no significant gains in performance (P = 0.302) were observed after the intervention. Learners welcomed the implementation of QR code technology in the gross anatomy laboratory, yet this intervention had no apparent effect on practical examination performance. © 2014 American Association of Anatomists.
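A user-versus-non-user comparison like the nonsignificant one reported above can be sketched with a simple permutation test on mean examination scores (the scores below are invented toy values, not the study's data):

```python
import random

random.seed(3)

# Permutation test: how often does a random relabeling of the pooled
# scores produce a mean difference at least as large as the observed one?
# Scores are invented toy values, not data from the study.
users     = [78, 85, 82, 90, 76, 88]
non_users = [80, 84, 79, 86, 81, 83]
observed = abs(sum(users) / len(users) - sum(non_users) / len(non_users))

pool = users + non_users
trials, count = 5000, 0
for _ in range(trials):
    random.shuffle(pool)
    a, b = pool[:len(users)], pool[len(users):]
    diff = abs(sum(a) / len(a) - sum(b) / len(b))
    if diff >= observed:
        count += 1
p_value = count / trials
```

With these toy numbers the observed difference is small relative to the spread, so the test comes out nonsignificant, mirroring the study's conclusion.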

  9. Theory of epigenetic coding.

    Science.gov (United States)

    Elder, D

    1984-06-07

The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code; the combinatorial to coding identity of units, the non-combinatorial to coding production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.

  10. Suture Coding: A Novel Educational Guide for Suture Patterns.

    Science.gov (United States)

    Gaber, Mohamed; Abdel-Wahed, Ramadan

    2015-01-01

This study aims to provide a helpful guide to performing tissue suturing successfully using suture coding, a method for identifying suture patterns and techniques that gives full information about the method of application of each pattern using numbers and symbols. Suture coding helps construct an infrastructure for surgical suture science. It facilitates the easy understanding and learning of suturing techniques and patterns as well as detects the relationships between different patterns. Guide points are fixed on both edges of the wound to act as a guideline to help practice suture pattern techniques. The arrangement is fixed as 1-3-5-7 and a-c-e-g on one side (whether right or left) and as 2-4-6-8 and b-d-f-h on the other side. Needle placement must start from number 1 or letter "a" and continue to follow the code till the end of the stitching. Some rules are created to be adopted for the application of suture coding. A suture trainer containing guide points that simulate the coding process is used to facilitate the learning of the coding method. (120) is the code of the simple interrupted suture pattern; (ab210) is the code of the vertical mattress suture pattern, and (013465)²/3 is the code of the Cushing suture pattern. (0A1) is suggested as a surgical suture language that gives the name and type of the suture pattern used to facilitate its identification. All suture patterns known in the world should start with (0), (A), or (1). There is a relationship between 2 or more surgical patterns according to their codes. It can be concluded that every suture pattern has its own code that helps in the identification of its type, structure, and method of application. Combining numbers and symbols helps in the understanding of suture techniques easily without complication. There are specific relationships that can be identified between different suture patterns. Coding methods facilitate the suture-pattern learning process. The use of suture coding can be a good

  11. Global functional atlas of Escherichia coli encompassing previously uncharacterized proteins.

    Science.gov (United States)

    Hu, Pingzhao; Janga, Sarath Chandra; Babu, Mohan; Díaz-Mejía, J Javier; Butland, Gareth; Yang, Wenhong; Pogoutse, Oxana; Guo, Xinghua; Phanse, Sadhna; Wong, Peter; Chandran, Shamanta; Christopoulos, Constantine; Nazarians-Armavil, Anaies; Nasseri, Negin Karimi; Musso, Gabriel; Ali, Mehrab; Nazemof, Nazila; Eroukova, Veronika; Golshani, Ashkan; Paccanaro, Alberto; Greenblatt, Jack F; Moreno-Hagelsieb, Gabriel; Emili, Andrew

    2009-04-28

    One-third of the 4,225 protein-coding genes of Escherichia coli K-12 remain functionally unannotated (orphans). Many map to distant clades such as Archaea, suggesting involvement in basic prokaryotic traits, whereas others appear restricted to E. coli, including pathogenic strains. To elucidate the orphans' biological roles, we performed an extensive proteomic survey using affinity-tagged E. coli strains and generated comprehensive genomic context inferences to derive a high-confidence compendium for virtually the entire proteome consisting of 5,993 putative physical interactions and 74,776 putative functional associations, most of which are novel. Clustering of the respective probabilistic networks revealed putative orphan membership in discrete multiprotein complexes and functional modules together with annotated gene products, whereas a machine-learning strategy based on network integration implicated the orphans in specific biological processes. We provide additional experimental evidence supporting orphan participation in protein synthesis, amino acid metabolism, biofilm formation, motility, and assembly of the bacterial cell envelope. This resource provides a "systems-wide" functional blueprint of a model microbe, with insights into the biological and evolutionary significance of previously uncharacterized proteins.

  12. Cost reducing code implementation strategies

    International Nuclear Information System (INIS)

    Kurtz, Randall L.; Griswold, Michael E.; Jones, Gary C.; Daley, Thomas J.

    1995-01-01

Sargent and Lundy's Code consulting experience reveals a wide variety of approaches toward implementing the requirements of various nuclear Codes and Standards. This paper will describe various Code implementation strategies which assure that Code requirements are fully met in a practical and cost-effective manner. Applications to be discussed include the following: new construction; repair, replacement and modifications; assessments and life extensions. Lessons learned and illustrative examples will be included. Preferred strategies and specific recommendations will also be addressed. Sargent and Lundy appreciates the opportunity provided by the Korea Atomic Industrial Forum and Korean Nuclear Society to share our ideas and enhance global cooperation through the exchange of information and views on relevant topics

  13. List Decoding of Algebraic Codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde

We investigate three paradigms for polynomial-time decoding of Reed–Solomon codes beyond half the minimum distance: the Guruswami–Sudan algorithm, Power decoding and the Wu algorithm. The main results concern shaping the computational core of all three methods to a problem solvable by module minimisation, or using our new Demand–Driven algorithm, which is also based on module minimisation. The decoding paradigms are all derived and analysed in a self-contained manner, often in new ways or examined in greater depth than previously. Among a number of new results, we decode Hermitian codes using Guruswami–Sudan or Power decoding faster than previously known, and we show how to Wu list decode binary Goppa codes.

  14. Performance evaluation based on data from code reviews

    OpenAIRE

    Andrej, Sekáč

    2016-01-01

    Context. Modern code review tools such as Gerrit have made available great amounts of code review data from different open source projects as well as other commercial projects. Code reviews are used to keep the quality of produced source code under control but the stored data could also be used for evaluation of the software development process. Objectives. This thesis uses machine learning methods for an approximation of review expert’s performance evaluation function. Due to limitations in ...

  15. Noise Residual Learning for Noise Modeling in Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Forchhammer, Søren

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The noise model is one of the inherently difficult challenges in DVC. This paper considers Transform Domain Wyner-Ziv (TDWZ) coding and proposes...

  16. Hebbian learning in a model with dynamic rate-coded neurons: an alternative to the generative model approach for learning receptive fields from natural scenes.

    Science.gov (United States)

    Hamker, Fred H; Wiltschut, Jan

    2007-09-01

Most computational models of coding are based on a generative model according to which the feedback signal aims to reconstruct the visual scene as closely as possible. We here explore an alternative model of feedback. It is derived from studies of attention and is thus probably more flexible with respect to attentive processing in higher brain areas. According to this model, feedback implements a gain increase of the feedforward signal. We use a dynamic model with presynaptic inhibition and Hebbian learning to simultaneously learn feedforward and feedback weights. The weights converge to localized, oriented, and bandpass filters similar to the ones found in V1. Due to presynaptic inhibition, the model predicts the organization of receptive fields within the feedforward pathway, whereas feedback primarily serves to tune early visual processing according to the needs of the task.
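A minimal stand-in for Hebbian receptive-field learning is Oja's normalized Hebbian rule, which converges to the principal component of its input; the full model above additionally uses presynaptic inhibition and learned feedback, which this sketch (with invented two-channel input statistics) omits:

```python
import math
import random

random.seed(1)

# Oja's rule: dw = lr * y * (x - y * w), a self-normalizing Hebbian
# update that converges to the principal component of zero-mean input.
# The two input channels share one signal plus small independent noise.
w = [0.6, 0.8]            # arbitrary initial weight vector
lr = 0.02
for _ in range(3000):
    shared = random.gauss(0.0, 1.0)
    x = [shared + random.gauss(0.0, 0.1),     # strongly correlated
         shared + random.gauss(0.0, 0.1)]     # input pair
    y = w[0] * x[0] + w[1] * x[1]             # postsynaptic response
    w = [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

norm = math.hypot(w[0], w[1])
# w ends up near unit length, aligned with the correlated [1, 1] direction.
```

The subtractive y²w term is what keeps the weight vector bounded without an explicit normalization step, one simple route to the stable filter-like weights described above.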

  17. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    Science.gov (United States)

    Młynarski, Wiktor

    2014-01-01

    To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform—Independent Component Analysis (ICA) trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. This results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment. PMID:24639644
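The core ICA step can be illustrated in two dimensions: mix two independent non-Gaussian sources, then search for the unmixing rotation that maximizes non-Gaussianity, measured here by excess kurtosis. The mixing angle, source distributions, and sample count are arbitrary illustrative choices, not the study's spectrogram setup:

```python
import math
import random

random.seed(2)

# Two independent uniform (non-Gaussian) sources, linearly mixed by an
# "unknown" rotation; ICA-style recovery finds the rotation at which a
# projection of the mixtures is maximally non-Gaussian.
N = 4000
s1 = [random.uniform(-1, 1) for _ in range(N)]
s2 = [random.uniform(-1, 1) for _ in range(N)]
theta = 0.6                      # mixing angle (illustrative)
x1 = [math.cos(theta) * a - math.sin(theta) * b for a, b in zip(s1, s2)]
x2 = [math.sin(theta) * a + math.cos(theta) * b for a, b in zip(s1, s2)]

def excess_kurtosis(v):
    """Fourth standardized moment minus 3; zero for Gaussian data."""
    m = sum(v) / len(v)
    var = sum((u - m) ** 2 for u in v) / len(v)
    return sum((u - m) ** 4 for u in v) / len(v) / var ** 2 - 3.0

def unmix(angle):
    """Project the mixtures onto a candidate unmixing direction."""
    return [math.cos(angle) * a + math.sin(angle) * b
            for a, b in zip(x1, x2)]

# Mixtures look "more Gaussian" than the sources, so |excess kurtosis|
# peaks at the true unmixing angle (up to a 90-degree ambiguity).
best = max((k * math.pi / 180 for k in range(180)),
           key=lambda a: abs(excess_kurtosis(unmix(a))))
```

Practical ICA algorithms optimize such non-Gaussianity measures by gradient or fixed-point iteration rather than a grid search, but the objective is the same.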

  18. Australasian code for reporting of mineral resources and ore reserves (the JORC code)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-06-01

The latest revision of the Code, first published in 1989, becomes effective in September 1999. It was prepared by the Joint Ore Reserves Committee of the Australasian Institute of Mining and Metallurgy, Australian Institute of Geoscientists and Minerals Council of Australia (JORC). It sets out minimum standards, recommendations and guidelines for public reporting of exploration results, mineral resources and ore reserves in Australasia. In this edition, the guidelines, which were previously separated from the Code, have been placed after the respective Code clauses. The Code is applicable to all solid minerals, including diamonds, other gemstones and coal for which public reporting is required by the Australian and New Zealand Stock Exchanges.

  19. Entanglement-assisted quantum low-density parity-check codes

    International Nuclear Information System (INIS)

    Fujiwara, Yuichiro; Clark, David; Tonchev, Vladimir D.; Vandendriessche, Peter; De Boeck, Maarten

    2010-01-01

    This article develops a general method for constructing entanglement-assisted quantum low-density parity-check (LDPC) codes, which is based on combinatorial design theory. Explicit constructions are given for entanglement-assisted quantum error-correcting codes with many desirable properties. These properties include the requirement of only one initial entanglement bit, high error-correction performance, high rates, and low decoding complexity. The proposed method produces several infinite families of codes with a wide variety of parameters and entanglement requirements. Our framework encompasses the previously known entanglement-assisted quantum LDPC codes having the best error-correction performance and many other codes with better block error rates in simulations over the depolarizing channel. We also determine important parameters of several well-known classes of quantum and classical LDPC codes for previously unsettled cases.
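The classical parity-check machinery that LDPC codes, and their entanglement-assisted quantum generalizations, build on can be sketched with a tiny binary parity-check matrix and its syndrome map. The matrix below is an arbitrary toy example, not one of the constructed codes:

```python
# Syndrome computation for a tiny classical binary code: each row of H
# is one parity check, and a word is a codeword iff every check passes
# (all-zero syndrome). A single bit flip produces the flipped column
# of H as its syndrome, which is what decoders exploit.
H = [[1, 1, 0, 1, 0],   # toy parity-check matrix, not from the paper
     [0, 1, 1, 0, 1],
     [1, 0, 1, 1, 1]]

def syndrome(word):
    """Mod-2 product H * word; all-zero means every check is satisfied."""
    return [sum(h * c for h, c in zip(row, word)) % 2 for row in H]
```

LDPC decoding works by passing messages between checks and bits of a sparse H; the entanglement-assisted construction extends this picture to quantum stabilizer checks.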

  20. Learning by Doing: Twenty Successful Active Learning Exercises for Information Systems Courses

    Directory of Open Access Journals (Sweden)

    Alanah Mitchell

    2017-01-01

Full Text Available Aim/Purpose: This paper provides a review of previously published work related to active learning in information systems (IS) courses. Background: There is a rising number of strategies in higher education that offer promise in regard to getting students’ attention and helping them learn, such as flipped classrooms and offering courses online. These learning strategies are part of the pedagogical technique known as active learning. Active learning is a strategy that became popular in the early 1990s and has proven itself as a valid tool for helping students to be engaged with learning. Methodology: This work follows a systematic method for identifying and coding previous research based on an aspect of interest. The authors identified and assessed research through a search of ABI/Inform scholarly journal abstracts and keywords, as well as additional research databases, using the search terms “active learning” and “information systems” from 2000 through June 2016. Contribution: This synthesis of active learning exercises provides guidance for information technology faculty looking to implement active learning strategies in their classroom by demonstrating how IS faculty might begin to introduce more active learning techniques in their teaching as well as by presenting a sample teaching agenda for a class that uses a mix of active and passive learning techniques to engage student learning. Findings: Twenty successful types of active learning exercises in IS courses are presented. Recommendations for Practitioners: This paper offers a “how to” resource of successful active learning strategies for IS faculty interested in implementing active learning in the classroom. Recommendation for Researchers: This work provides an example of a systematic literature review as a means to assess successful implementations of active learning in IS. Impact on Society: An updated definition of active learning is presented as well as a meaningful

  1. Linguistic coding deficits in foreign language learners.

    Science.gov (United States)

    Sparks, R; Ganschow, L; Pohlman, J

    1989-01-01

    As increasing numbers of colleges and universities require a foreign language for graduation in at least one of their degree programs, reports of students with difficulties in learning a second language are multiplying. Until recently, little research has been conducted to identify the nature of this problem. Recent attempts by the authors have focused upon subtle but ongoing language difficulties in these individuals as the source of their struggle to learn a foreign language. The present paper attempts to expand upon this concept by outlining a theoretical framework based upon a linguistic coding model that hypothesizes deficits in the processing of phonological, syntactic, and/or semantic information. Traditional psychoeducational assessment batteries of standardized intelligence and achievement tests generally are not sensitive to these linguistic coding deficits unless closely analyzed or, more often, used in conjunction with a more comprehensive language assessment battery. Students who have been waived from a foreign language requirement and their proposed type(s) of linguistic coding deficits are profiled. Tentative conclusions about the nature of these foreign language learning deficits are presented along with specific suggestions for tests to be used in psychoeducational evaluations.

  2. Overview of codes and tools for nuclear engineering education

    Science.gov (United States)

    Yakovlev, D.; Pryakhin, A.; Medvedeva, L.

    2017-01-01

    Recent world trends in nuclear education have developed in the direction of social education, networking, virtual tools and codes. MEPhI, a global leader in the world education market, implements new advanced technologies for distance and online learning and for student research work. MEPhI has produced special codes, tools and web resources based on an internet platform to support education in the field of nuclear technology. At the same time, MEPhI actively uses codes and tools from third parties. Several types of tools are considered: calculation codes, nuclear data visualization tools, virtual labs, PC-based educational simulators for nuclear power plants (NPP), CLP4NET, education web-platforms, and distance courses (MOOCs and controlled and managed content systems). The university pays special attention to integrated products such as CLP4NET, which is not a learning course but serves to automate the process of learning through distance technologies. CLP4NET organizes all tools in the same information space. Up to now, MEPhI has achieved significant results in the field of distance education and online system implementation.

  3. Performance Analysis of CRC Codes for Systematic and Nonsystematic Polar Codes with List Decoding

    Directory of Open Access Journals (Sweden)

    Takumi Murata

    2018-01-01

    Full Text Available Successive cancellation list (SCL) decoding of polar codes is an effective approach that can significantly outperform the original successive cancellation (SC) decoding, provided that proper cyclic redundancy-check (CRC) codes are employed at the stage of candidate selection. Previous studies on CRC-assisted polar codes mostly focus on improvement of the decoding algorithms as well as their implementation, and little attention has been paid to the CRC code structure itself. For CRC-concatenated polar codes with a CRC code as their outer code, the use of a longer CRC code leads to reduction of the information rate, whereas the use of a shorter CRC code may reduce the error detection probability, thus degrading the frame error rate (FER) performance. Therefore, CRC codes of proper length should be employed in order to optimize the FER performance for a given signal-to-noise ratio (SNR) per information bit. In this paper, we investigate the effect of CRC codes on the FER performance of polar codes with list decoding in terms of the CRC code length as well as its generator polynomials. Both the original nonsystematic and systematic polar codes are considered, and we also demonstrate that different behaviors of CRC codes should be observed depending on whether the inner polar code is systematic or not.
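The candidate-selection step that the outer CRC enables can be sketched as follows. This is a generic illustration, not the paper's setup: the CRC-8 polynomial, the 8-bit CRC length, and all function names are our own assumptions, and the polar SC/SCL machinery itself is not shown.

```python
def crc8(bits, poly=0b100000111):
    """Bitwise CRC-8 (polynomial x^8 + x^2 + x + 1, an illustrative choice)."""
    reg = 0
    for b in bits:
        reg = (reg << 1) | b
        if reg & 0x100:
            reg ^= poly
    for _ in range(8):          # flush 8 zero bits
        reg <<= 1
        if reg & 0x100:
            reg ^= poly
    return reg & 0xFF

def encode_with_crc(info_bits):
    """Append the 8-bit CRC to the information bits (the outer code)."""
    c = crc8(info_bits)
    return info_bits + [(c >> i) & 1 for i in range(7, -1, -1)]

def select_candidate(candidates):
    """Return the first (most likely) list-decoder candidate whose CRC
    checks; candidates are assumed sorted by path metric, best first."""
    for cand in candidates:
        info, crc_bits = cand[:-8], cand[-8:]
        expected = crc8(info)
        if crc_bits == [(expected >> i) & 1 for i in range(7, -1, -1)]:
            return info
    return None                 # no candidate survives: declare a frame error
```

The tradeoff the paper studies is visible even here: a longer CRC rejects wrong candidates more reliably but consumes more of the code's information rate.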

  4. De novo ORFs in Drosophila are important to organismal fitness and evolved rapidly from previously non-coding sequences.

    Directory of Open Access Journals (Sweden)

    Josephine A Reinhardt

    Full Text Available How non-coding DNA gives rise to new protein-coding genes (de novo genes) is not well understood. Recent work has revealed the origins and functions of a few de novo genes, but common principles governing the evolution or biological roles of these genes are unknown. To better define these principles, we performed a parallel analysis of the evolution and function of six putatively protein-coding de novo genes described in Drosophila melanogaster. Reconstruction of the transcriptional history of de novo genes shows that two de novo genes emerged from novel long non-coding RNAs that arose at least 5 MY prior to evolution of an open reading frame. In contrast, four other de novo genes evolved a translated open reading frame and transcription within the same evolutionary interval, suggesting that nascent open reading frames (proto-ORFs), while not required, can contribute to the emergence of a new de novo gene. However, none of the genes arose from proto-ORFs that existed long before expression evolved. Sequence and structural evolution of de novo genes was rapid compared to nearby genes, and the structural complexity of de novo genes steadily increases over evolutionary time. Despite the fact that these genes are transcribed at a higher level in males than females, and are most strongly expressed in testes, RNAi experiments show that most of these genes are essential in both sexes during metamorphosis. This lethality suggests that protein-coding de novo genes in Drosophila quickly become functionally important.

  5. Statistical coding and decoding of heartbeat intervals.

    Science.gov (United States)

    Lucena, Fausto; Barros, Allan Kardec; Príncipe, José C; Ohnishi, Noboru

    2011-01-01

    The heart integrates neuroregulatory messages into specific bands of frequency, such that the overall amplitude spectrum of the cardiac output reflects the variations of the autonomic nervous system. This modulatory mechanism seems to be well adjusted to the unpredictability of the cardiac demand, maintaining a proper cardiac regulation. A longstanding theory holds that biological organisms facing an ever-changing environment are likely to evolve adaptive mechanisms to extract essential features in order to adjust their behavior. The key question, however, has been to understand how the neural circuitry self-organizes these feature detectors to select behaviorally relevant information. Previous studies in computational perception suggest that a neural population enhances information that is important for survival by minimizing the statistical redundancy of the stimuli. Herein we investigate whether the cardiac system makes use of a redundancy reduction strategy to regulate the cardiac rhythm. Based on a network of neural filters optimized to code heartbeat intervals, we learn a population code that maximizes the information across the neural ensemble. The emerging population code displays filter tuning properties whose characteristics explain diverse aspects of the autonomic cardiac regulation, such as the compromise between fast and slow cardiac responses. We show that the filters yield responses that are quantitatively similar to observed heart rate responses during direct sympathetic or parasympathetic nerve stimulation. Our findings suggest that the heart decodes autonomic stimuli according to information theory principles analogous to how perceptual cues are encoded by sensory systems.

  6. Statistical coding and decoding of heartbeat intervals.

    Directory of Open Access Journals (Sweden)

    Fausto Lucena

    Full Text Available The heart integrates neuroregulatory messages into specific bands of frequency, such that the overall amplitude spectrum of the cardiac output reflects the variations of the autonomic nervous system. This modulatory mechanism seems to be well adjusted to the unpredictability of the cardiac demand, maintaining a proper cardiac regulation. A longstanding theory holds that biological organisms facing an ever-changing environment are likely to evolve adaptive mechanisms to extract essential features in order to adjust their behavior. The key question, however, has been to understand how the neural circuitry self-organizes these feature detectors to select behaviorally relevant information. Previous studies in computational perception suggest that a neural population enhances information that is important for survival by minimizing the statistical redundancy of the stimuli. Herein we investigate whether the cardiac system makes use of a redundancy reduction strategy to regulate the cardiac rhythm. Based on a network of neural filters optimized to code heartbeat intervals, we learn a population code that maximizes the information across the neural ensemble. The emerging population code displays filter tuning properties whose characteristics explain diverse aspects of the autonomic cardiac regulation, such as the compromise between fast and slow cardiac responses. We show that the filters yield responses that are quantitatively similar to observed heart rate responses during direct sympathetic or parasympathetic nerve stimulation. Our findings suggest that the heart decodes autonomic stimuli according to information theory principles analogous to how perceptual cues are encoded by sensory systems.

  7. Learning binary code via PCA of angle projection for image retrieval

    Science.gov (United States)

    Yang, Fumeng; Ye, Zhiqiang; Wei, Xueqi; Wu, Congzhong

    2018-01-01

    With the benefits of low storage costs and high query speeds, binary code representation methods are widely researched for efficiently retrieving large-scale data. In image hashing methods, learning a hashing function to embed high-dimensional features into Hamming space is a key step for accurate retrieval. The principal component analysis (PCA) technique is widely used in compact hashing methods; most of these hashing methods adopt PCA projection functions to project the original data into several dimensions of real values, and then each projected dimension is quantized into one bit by thresholding. The variances of different projected dimensions differ, and real-valued projection produces more quantization error. To avoid real-valued projection with large quantization error, in this paper we propose to use cosine similarity projection for each dimension; the angle projection can keep the original structure and is more compact with the cosine values. We combined our method with the ITQ hashing algorithm, and extensive experiments on the public CIFAR-10 and Caltech-256 datasets validate the effectiveness of the proposed method.
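The PCA-then-threshold baseline that this line of work builds on can be sketched in a few lines of numpy. This is a minimal illustration of PCA hashing with sign binarization only; the ITQ rotation and the proposed cosine/angle projection are omitted, and all names are our own.

```python
import numpy as np

def pca_hash(X, n_bits):
    """Learn a PCA projection and binarize by sign (the PCA-hashing
    baseline; no ITQ rotation, no angle projection)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = Xc.T @ Xc / len(Xc)
    eigvals, eigvecs = np.linalg.eigh(cov)
    top = np.argsort(eigvals)[::-1][:n_bits]   # largest-variance directions
    W = eigvecs[:, top]
    codes = (Xc @ W > 0).astype(np.uint8)      # one bit per projected dim
    return codes, mean, W

def hash_query(q, mean, W):
    """Binary code for a new query vector."""
    return ((q - mean) @ W > 0).astype(np.uint8)
```

The variance imbalance the abstract mentions is visible here: the first bit comes from a much higher-variance projection than the last, which is exactly what rotation (ITQ) or angle-based projections try to correct.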

  8. Discrete Sparse Coding.

    Science.gov (United States)

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.
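As a concrete picture of what "discrete latents" means here, the following sketch samples data from such a generative model: a dictionary combined linearly with latents drawn from a small finite value set under a prior that concentrates mass at zero. All sizes, values, and the prior are illustrative assumptions; the paper learns these quantities with truncated EM, which is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each latent s_h takes a value from a finite set (here {0, 1, 2}) with a
# prior that puts most mass on 0, making the code sparse. The dictionary W
# and the prior are illustrative, not learned as in the paper's EM scheme.
values = np.array([0.0, 1.0, 2.0])
prior = np.array([0.9, 0.07, 0.03])        # P(s_h = value), sparse at 0
H, D = 8, 16                               # number of latents, observed dims
W = rng.normal(size=(D, H))                # dictionary of components

s = values[rng.choice(len(values), size=H, p=prior)]   # discrete latent code
x = W @ s + 0.01 * rng.normal(size=D)                  # noisy observation
```

Inference then amounts to scoring the finite set of plausible latent configurations, which is what makes the truncated E-step tractable.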

  9. The International Code of Marketing of Breast-milk Substitutes: lessons learned and implications for the regulation of marketing of foods and beverages to children.

    Science.gov (United States)

    Lutter, Chessa K

    2013-10-01

    To identify lessons learned from 30 years of implementing the International Code of Marketing of Breast-milk Substitutes (‘the Code’) and identify lessons learned for the regulation of marketing of foods and beverages to children. Historical analysis of 30 years of implementing the Code. Latin America and the Caribbean. None. Legislation to restrict marketing of breast-milk substitutes is necessary but not sufficient; equally important are the promulgation of implementing regulations, effective enforcement and public monitoring of compliance. A system of funding for regular monitoring of compliance with legislation should be explicitly developed and funded from the beginning. Economic sanctions, while important, are likely to be less effective than reports that affect a company’s public image negatively. Non-governmental organizations play a critical role in leveraging public opinion and galvanizing consumer pressure to ensure that governments adopt regulations and companies adhere to them. Continual clinical, epidemiological and policy research showing the link between marketing and health outcomes and between policy and better health is essential. Implementation of the Code has not come easily as it places the interests of underfinanced national governments and international and non-governmental organizations promoting breast-feeding against those of multinational corporations that make hundreds of millions of dollars annually marketing infant formulas. Efforts to protect, promote and support breast-feeding have been successful, with indicators of breast-feeding practices increasing globally. The lessons learned can inform current efforts to regulate the marketing of foods and beverages to children.

  10. Global functional atlas of Escherichia coli encompassing previously uncharacterized proteins.

    Directory of Open Access Journals (Sweden)

    Pingzhao Hu

    2009-04-01

    Full Text Available One-third of the 4,225 protein-coding genes of Escherichia coli K-12 remain functionally unannotated (orphans). Many map to distant clades such as Archaea, suggesting involvement in basic prokaryotic traits, whereas others appear restricted to E. coli, including pathogenic strains. To elucidate the orphans' biological roles, we performed an extensive proteomic survey using affinity-tagged E. coli strains and generated comprehensive genomic context inferences to derive a high-confidence compendium for virtually the entire proteome consisting of 5,993 putative physical interactions and 74,776 putative functional associations, most of which are novel. Clustering of the respective probabilistic networks revealed putative orphan membership in discrete multiprotein complexes and functional modules together with annotated gene products, whereas a machine-learning strategy based on network integration implicated the orphans in specific biological processes. We provide additional experimental evidence supporting orphan participation in protein synthesis, amino acid metabolism, biofilm formation, motility, and assembly of the bacterial cell envelope. This resource provides a "systems-wide" functional blueprint of a model microbe, with insights into the biological and evolutionary significance of previously uncharacterized proteins.

  11. Usage of burnt fuel isotopic compositions from engineering codes in Monte-Carlo code calculations

    International Nuclear Information System (INIS)

    Aleshin, Sergey S.; Gorodkov, Sergey S.; Shcherenko, Anna I.

    2015-01-01

    Burn-up calculation of VVER cores by a Monte-Carlo code is a complex process that requires large computational costs. This complicates the use of Monte-Carlo codes for project and operating calculations. Previously prepared isotopic compositions are proposed for use in Monte-Carlo code (MCU) calculations of different states of VVER cores with burnt fuel. The isotopic compositions are calculated by an approximation method based on a spectral functionality and on reference isotopic compositions calculated by engineering codes (TVS-M, PERMAK-A). In this work, the multiplication factors and power distributions of FAs and of a VVER of infinite height are calculated by the Monte-Carlo code MCU using the previously prepared isotopic compositions. The MCU results were compared with the data obtained by the engineering codes.

  12. Active Learning for Directed Exploration of Complex Systems

    Science.gov (United States)

    Burl, Michael C.; Wang, Esther

    2009-01-01

    Physics-based simulation codes are widely used in science and engineering to model complex systems that would be infeasible to study otherwise. Such codes provide the highest-fidelity representation of system behavior, but are often so slow to run that insight into the system is limited. For example, conducting an exhaustive sweep over a d-dimensional input parameter space with k-steps along each dimension requires k(sup d) simulation trials (translating into k(sup d) CPU-days for one of our current simulations). An alternative is directed exploration in which the next simulation trials are cleverly chosen at each step. Given the results of previous trials, supervised learning techniques (SVM, KDE, GP) are applied to build up simplified predictive models of system behavior. These models are then used within an active learning framework to identify the most valuable trials to run next. Several active learning strategies are examined including a recently-proposed information-theoretic approach. Performance is evaluated on a set of thirteen synthetic oracles, which serve as surrogates for the more expensive simulations and enable the experiments to be replicated by other researchers.
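The pick-the-next-trial loop described above can be caricatured with a cheap uncertainty heuristic. This sketch uses a k-nearest-neighbor variance proxy rather than the SVM/KDE/GP surrogates or the information-theoretic criterion the paper examines; all names and the toy oracle are illustrative assumptions.

```python
import numpy as np

def most_uncertain_trial(X_done, y_done, X_pool, k=5):
    """Pick the next simulation input: the pool point whose k nearest
    completed trials disagree the most (neighbor variance as a stand-in
    for the surrogate model's predictive uncertainty)."""
    best_i, best_var = 0, -1.0
    for i, x in enumerate(X_pool):
        d = np.linalg.norm(X_done - x, axis=1)   # distances to done trials
        nn = np.argsort(d)[:k]                   # k nearest completed trials
        var = float(np.var(y_done[nn]))          # their disagreement
        if var > best_var:
            best_i, best_var = i, var
    return best_i

# Toy oracle standing in for an expensive simulation: a step function,
# so uncertainty is highest near the decision boundary at x = 0.5.
oracle = lambda x: float(x[0] > 0.5)
```

On this toy oracle, the pool point nearest the step at 0.5 has the highest neighbor disagreement and is selected next, which is the essence of directed exploration: spend simulation budget where the surrogate is least sure.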

  13. The HTM Spatial Pooler—A Neocortical Algorithm for Online Sparse Distributed Coding

    Directory of Open Access Journals (Sweden)

    Yuwei Cui

    2017-11-01

    Full Text Available Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.
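The overlap-and-inhibit step at the heart of the SP can be caricatured in a few lines. This is a heavily simplified sketch, not the HTM implementation: the sizes, sparsities, and fixed 40-winner inhibition below are illustrative assumptions, and the permanence learning and boosting that the paper analyzes are omitted.

```python
import numpy as np

def spatial_pool(input_bits, connections, active_columns=40):
    """One feedforward step of a heavily simplified HTM spatial pooler:
    each column's overlap is the number of its connected synapses landing
    on active input bits; the top-`active_columns` columns win, giving a
    binary SDR of fixed sparsity. No learning or boosting is modeled."""
    overlaps = connections.astype(np.int64) @ input_bits.astype(np.int64)
    winners = np.argsort(overlaps)[::-1][:active_columns]
    sdr = np.zeros(connections.shape[0], dtype=np.uint8)
    sdr[winners] = 1
    return sdr

# Toy setup (sizes and sparsities are illustrative, not from the paper).
rng = np.random.default_rng(0)
connections = (rng.random((2048, 1024)) < 0.05).astype(np.uint8)  # synapses
x = (rng.random(1024) < 0.1).astype(np.uint8)                     # input bits
sdr = spatial_pool(x, connections)
```

The fixed winner count is what gives the output its constant sparsity regardless of how dense the input is, one of the SP properties the paper quantifies.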

  14. The HTM Spatial Pooler-A Neocortical Algorithm for Online Sparse Distributed Coding.

    Science.gov (United States)

    Cui, Yuwei; Ahmad, Subutai; Hawkins, Jeff

    2017-01-01

    Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.

  15. Content Analysis Coding Schemes for Online Asynchronous Discussion

    Science.gov (United States)

    Weltzer-Ward, Lisa

    2011-01-01

    Purpose: Researchers commonly utilize coding-based analysis of classroom asynchronous discussion contributions as part of studies of online learning and instruction. However, this analysis is inconsistent from study to study with over 50 coding schemes and procedures applied in the last eight years. The aim of this article is to provide a basis…

  16. Semantic and phonological coding in poor and normal readers.

    Science.gov (United States)

    Vellutino, F R; Scanlon, D M; Spearing, D

    1995-02-01

    Three studies were conducted evaluating semantic and phonological coding deficits as alternative explanations of reading disability. In the first study, poor and normal readers in second and sixth grade were compared on various tests evaluating semantic development as well as on tests evaluating rapid naming and pseudoword decoding as independent measures of phonological coding ability. In a second study, the same subjects were given verbal memory and visual-verbal learning tasks using high and low meaning words as verbal stimuli and Chinese ideographs as visual stimuli. On the semantic tasks, poor readers performed below the level of the normal readers only at the sixth grade level, but, on the rapid naming and pseudoword learning tasks, they performed below the normal readers at the second as well as at the sixth grade level. On both the verbal memory and visual-verbal learning tasks, performance in poor readers approximated that of normal readers when the word stimuli were high in meaning but not when they were low in meaning. These patterns were essentially replicated in a third study that used some of the same semantic and phonological measures used in the first experiment, and verbal memory and visual-verbal learning tasks that employed word lists and visual stimuli (novel alphabetic characters) that more closely approximated those used in learning to read. It was concluded that semantic coding deficits are an unlikely cause of reading difficulties in most poor readers at the beginning stages of reading skills acquisition, but accrue as a consequence of prolonged reading difficulties in older readers. It was also concluded that phonological coding deficits are a probable cause of reading difficulties in most poor readers.

  17. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    Directory of Open Access Journals (Sweden)

    Wiktor Mlynarski

    2014-03-01

    Full Text Available To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform - Independent Component Analysis (ICA) - trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment.

  18. Optimal patch code design via device characterization

    Science.gov (United States)

    Wu, Wencheng; Dalal, Edul N.

    2012-01-01

    In many color measurement applications, such as those for color calibration and profiling, "patch code" has been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The tradeoff between decoding robustness and the number of available code levels is optimized in terms of printing and measurement efforts, and decoding robustness against noise from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.

  19. Development and evaluation of a Naïve Bayesian model for coding causation of workers' compensation claims.

    Science.gov (United States)

    Bertke, S J; Meyers, A R; Wurzelbacher, S J; Bell, J; Lampl, M L; Robins, D

    2012-12-01

    Tracking and trending rates of injuries and illnesses classified as musculoskeletal disorders caused by ergonomic risk factors such as overexertion and repetitive motion (MSDs) and slips, trips, or falls (STFs) in different industry sectors is of high interest to many researchers. Unfortunately, identifying the cause of injuries and illnesses in large datasets such as workers' compensation systems often requires reading and coding the free-form accident text narrative for potentially millions of records. To alleviate the need for manual coding, this paper describes and evaluates a computer auto-coding algorithm that demonstrated the ability to code millions of claims quickly and accurately by learning from a set of previously manually coded claims. The auto-coding program was able to code claims as musculoskeletal disorders, STFs, or other with approximately 90% accuracy. The program developed and discussed in this paper provides an accurate and efficient method for identifying the causation of workers' compensation claims as a STF or MSD in a large database based on the unstructured text narrative and resulting injury diagnoses. The program coded thousands of claims in minutes. The method described in this paper can be used by researchers and practitioners to relieve the manual burden of reading and identifying the causation of claims as a STF or MSD. Furthermore, the method can be easily generalized to code/classify other unstructured text narratives. Published by Elsevier Ltd.
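A multinomial Naive Bayes text classifier of the kind the paper evaluates can be sketched in standard-library Python. The class labels, narratives, and all identifiers below are invented for illustration; the authors' actual feature handling and training data are not described here.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesCoder:
    """Multinomial Naive Bayes over accident-narrative words, sketching
    the auto-coding idea (labels and narratives are invented)."""

    def fit(self, narratives, labels):
        self.class_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for text, label in zip(narratives, labels):
            words = text.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, narrative):
        words = narrative.lower().split()
        total = sum(self.class_counts.values())
        best, best_lp = None, -math.inf
        for label, n in self.class_counts.items():
            lp = math.log(n / total)  # log prior P(class)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:           # Laplace-smoothed word likelihoods
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best
```

Once fit on manually coded claims, `predict` scores each class by prior times smoothed word likelihoods, which is why it scales to millions of free-text narratives in minutes.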

  20. The Redox Code.

    Science.gov (United States)

    Jones, Dean P; Sies, Helmut

    2015-09-20

    The redox code is a set of principles that defines the positioning of the nicotinamide adenine dinucleotide (NAD, NADP) and thiol/disulfide and other redox systems as well as the thiol redox proteome in space and time in biological systems. The code is richly elaborated in an oxygen-dependent life, where activation/deactivation cycles involving O₂ and H₂O₂ contribute to spatiotemporal organization for differentiation, development, and adaptation to the environment. Disruption of this organizational structure during oxidative stress represents a fundamental mechanism in system failure and disease. Methodology in assessing components of the redox code under physiological conditions has progressed, permitting insight into spatiotemporal organization and allowing for identification of redox partners in redox proteomics and redox metabolomics. Complexity of redox networks and redox regulation is being revealed step by step, yet much still needs to be learned. Detailed knowledge of the molecular patterns generated from the principles of the redox code under defined physiological or pathological conditions in cells and organs will contribute to understanding the redox component in health and disease. Ultimately, there will be a scientific basis to a modern redox medicine.

  1. Learning Joint-Sparse Codes for Calibration-Free Parallel MR Imaging.

    Science.gov (United States)

    Wang, Shanshan; Tan, Sha; Gao, Yuan; Liu, Qiegen; Ying, Leslie; Xiao, Taohui; Liu, Yuanyuan; Liu, Xin; Zheng, Hairong; Liang, Dong

    2018-01-01

    The integration of compressed sensing and parallel imaging (CS-PI) has shown an increased popularity in recent years to accelerate magnetic resonance (MR) imaging. Among them, calibration-free techniques have presented encouraging performances due to their capability in robustly handling the sensitivity information. Unfortunately, existing calibration-free methods have only explored joint-sparsity with direct analysis transform projections. To further exploit joint-sparsity and improve reconstruction accuracy, this paper proposes to Learn joINt-sparse coDes for caliBration-free parallEl mR imaGing (LINDBERG) by modeling the parallel MR imaging problem as a minimization objective combining a norm constraining data fidelity, a Frobenius norm enforcing the sparse representation error, and a mixed norm triggering joint sparsity across multichannels. A corresponding algorithm has been developed to alternately update the sparse representation, sensitivity-encoded images and k-space data. Then, the final image is produced as the square root of the sum of squares of all channel images. Experimental results on both physical phantom and in vivo data sets show that the proposed method is comparable and even superior to state-of-the-art CS-PI reconstruction approaches. Specifically, LINDBERG has presented strong capability in suppressing noise and artifacts while reconstructing MR images from highly undersampled multichannel measurements.
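A standard way to realize the joint-sparsity penalty mentioned above is the mixed l2,1 norm (an assumption here; the abstract does not spell out the exact norms it uses): the sum of row-wise l2 norms of the coefficient matrix, which drives entire rows to zero so that sparsity patterns are shared across channels.

```python
import numpy as np

def l21_norm(A):
    """Mixed l2,1 norm: sum of the l2 norms of the rows of A. Penalizing
    it switches off whole rows at once, i.e., coefficients shared across
    channels (columns) vanish jointly. Illustrative only; the paper's
    exact norms are not specified here."""
    return float(np.sum(np.linalg.norm(A, axis=1)))
```

Note that a row-sparse matrix scores lower than a spread-out matrix with the same entries, which is exactly how minimizing this term favors joint sparsity.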

  2. TEACHERS’ AND STUDENTS’ ATTITUDE TOWARD CODE ALTERNATION IN PAKISTANI ENGLISH CLASSROOMS

    Directory of Open Access Journals (Sweden)

    Aqsa Tahir

    2016-11-01

    Full Text Available This research is an attempt to explore students' and teachers' attitudes towards code alternation within English classrooms in Pakistan. In a country like Pakistan, where the official language is English, the national language is Urdu and every province has its own language, most people are bilingual or multilingual. Therefore, the aim of this study was to find out when and why teachers code-switch in L2 English classrooms. The study also explored students' language preferences during second language learning, as well as teachers' code-switching patterns and students' priorities. Ten teachers responded to an open-ended questionnaire and 100 students responded to a close-ended questionnaire. Teachers' responses indicated that they mostly code-switch when students respond negatively in terms of comprehensibility and do not grasp the concepts easily in L2; they never encourage students to speak Urdu. Students' results showed that they mostly prefer code-switching into their L1 for better understanding and participation in class. Analysis revealed that students favored English only when receiving test instructions, receiving results and learning grammatical concepts. In most cases, students showed flexibility in language usage. A majority of students (68%) agreed that they learn better when their teachers code-switch into L1.

  3. New nonbinary quantum codes with larger distance constructed from BCH codes over 𝔽q2

    Science.gov (United States)

    Xu, Gen; Li, Ruihu; Fu, Qiang; Ma, Yuena; Guo, Luobin

    2017-03-01

    This paper concentrates on the construction of new nonbinary quantum error-correcting codes (QECCs) from three classes of narrow-sense imprimitive BCH codes over the finite field 𝔽q2 (q ≥ 3 an odd prime power). Through a careful analysis of the properties of cyclotomic cosets in the defining set T of these BCH codes, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing BCH codes is determined to be much larger than the result given by Aly et al. [S. A. Aly, A. Klappenecker and P. K. Sarvepalli, IEEE Trans. Inf. Theory 53, 1183 (2007)] for each code length. Thus, families of new nonbinary QECCs are constructed, and the newly obtained QECCs have larger distance than those in the previous literature.
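
    The defining-set analysis rests on q²-cyclotomic cosets modulo the code length n. A minimal sketch of how such cosets are enumerated (the parameters q = 3, so q² = 9, and n = 40 are illustrative choices, not taken from the paper):

```python
def cyclotomic_cosets(q2, n):
    """All q2-cyclotomic cosets mod n: C_i = {i * q2^k mod n}."""
    seen, cosets = set(), []
    for i in range(n):
        if i in seen:
            continue
        coset, x = [], i
        while x not in coset:       # stop once the orbit closes
            coset.append(x)
            x = (x * q2) % n
        cosets.append(sorted(coset))
        seen.update(coset)
    return cosets

# Hermitian constructions over F_9 work with 9-cyclotomic cosets.
cosets = cyclotomic_cosets(9, 40)
```

    Since 9² ≡ 1 (mod 40), every coset here has size at most 2, e.g. {1, 9}; the cosets partition {0, …, 39}.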

  4. Fast convolutional sparse coding using matrix inversion lemma

    Czech Academy of Sciences Publication Activity Database

    Šorel, Michal; Šroubek, Filip

    2016-01-01

    Roč. 55, č. 1 (2016), s. 44-51 ISSN 1051-2004 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : Convolutional sparse coding * Feature learning * Deconvolution networks * Shift-invariant sparse coding Subject RIV: JD - Computer Applications, Robotics Impact factor: 2.337, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/sorel-0459332.pdf

  5. Multi-level trellis coded modulation and multi-stage decoding

    Science.gov (United States)

    Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu

    1990-01-01

    Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.

  6. High Order Tensor Formulation for Convolutional Sparse Coding

    KAUST Repository

    Bibi, Adel Aamer

    2017-12-25

    Convolutional sparse coding (CSC) has gained attention for its successful role as a reconstruction and classification tool in the computer vision and machine learning communities. Current CSC methods can only reconstruct single-feature 2D images independently. However, learning multidimensional dictionaries and sparse codes for the reconstruction of multidimensional data is very important, as it jointly examines correlations among all the data. This provides more capacity for the learned dictionaries to better reconstruct data. In this paper, we propose a generic and novel formulation for the CSC problem that can handle an arbitrary-order tensor of data. Backed by experimental results, our proposed formulation can not only tackle applications that are not possible with standard CSC solvers, including colored video reconstruction (5D tensors), but it also performs favorably in reconstruction with far fewer parameters compared to naive extensions of standard CSC to multiple features/channels.

  7. An in-depth study of sparse codes on abnormality detection

    DEFF Research Database (Denmark)

    Ren, Huamin; Pan, Hong; Olsen, Søren Ingvor

    2016-01-01

    Sparse representation has been applied successfully in abnormal event detection, in which the baseline is to learn a dictionary accompanied by sparse codes. While much emphasis is put on discriminative dictionary construction, there are no comparative studies of sparse codes regarding abnormality...... are carried out from various angles to better understand the applicability of sparse codes, including computation time, reconstruction error, sparsity, detection accuracy, and their performance combining various detection methods. The experiment results show that combining OMP codes with maximum coordinate...

  8. Learn ggplot2 using Shiny App

    CERN Document Server

    Moon, Keon-Woong

    2016-01-01

    This book and its companion app are for practitioners, professionals, researchers, and students who want to learn how to make a plot within the R environment using ggplot2, step-by-step without coding. In widespread use in the statistical communities, R is a free software language and environment for statistical programming and graphics. Many users find R to have a steep learning curve but to be extremely useful once overcome. ggplot2 is an extremely popular package tailored for producing graphics within R but which requires coding and has a steep learning curve itself, and Shiny is an open source R package that provides a web framework for building web applications using R without requiring HTML, CSS, or JavaScript. This manual—"integrating" R, ggplot2, and Shiny—introduces a new Shiny app, Learn ggplot2, that allows users to make plots easily without coding. With the Learn ggplot2 Shiny app, users can make plots using ggplot2 without having to code each step, reducing typos and error messages and allowing users to bec...

  9. Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.

    Science.gov (United States)

    Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic

    2017-03-01

    Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street Press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) of the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of 0.79 and 0.70, respectively. For fine-grained coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes; however, model performance for talk-turn identification is not yet as reliable as human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.
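
    The session-level comparison above hinges on AUC. As a sketch, AUC can be computed directly from raw scores via the rank-sum (Mann-Whitney) identity; the labels and model scores below are made up for illustration, not taken from the study:

```python
import numpy as np

def auc_score(y_true, scores):
    """Area under the ROC curve via the rank-sum identity."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # Probability that a random positive outranks a random negative
    # (ties count one half).
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical session-level labels and scores from two models.
labels = [1, 1, 1, 0, 0, 0, 1, 0]
model_a = [0.9, 0.8, 0.4, 0.3, 0.2, 0.5, 0.7, 0.1]  # stronger ranking
model_b = [0.6, 0.4, 0.5, 0.5, 0.3, 0.6, 0.7, 0.2]
auc_a, auc_b = auc_score(labels, model_a), auc_score(labels, model_b)
```

    On these toy scores the first model ranks positives above negatives more often, so its AUC is higher, mirroring the L-LDA vs. lasso comparison.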

  10. A novel neutron energy spectrum unfolding code using particle swarm optimization

    International Nuclear Information System (INIS)

    Shahabinejad, H.; Sohrabpour, M.

    2017-01-01

    A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold a neutron spectrum from a pulse-height distribution and a response matrix. Particle Swarm Optimization (PSO) imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with those of standard spectra and of the recently published Two-step Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code has previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD and shown to be more accurate than them. The results of the SDPSO code match well with those of the TGASU code for both under-determined and over-determined problems, and the SDPSO code has been shown to be nearly two times faster than the TGASU code. - Highlights: • Introducing a novel method for neutron spectrum unfolding. • Implementation of a particle swarm optimization code for neutron unfolding. • Comparing results of the PSO code with those of the recently published TGASU code. • Results of the PSO code match those of the TGASU code. • Greater convergence rate of the implemented PSO code than the TGASU code.
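
    The idea can be sketched as minimizing, with a plain PSO, the residual between the measured pulse-height data and a candidate spectrum folded through the response matrix. The response matrix, spectrum and PSO constants below are toy values, not the SDPSO code itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy unfolding problem: recover spectrum phi from measurement m = R @ phi.
n_bins, n_channels = 4, 6
R = rng.uniform(0.1, 1.0, size=(n_channels, n_bins))   # response matrix
true_phi = np.array([0.2, 0.8, 0.5, 0.1])
m = R @ true_phi                                        # pulse-height data

def fitness(phi):
    return np.linalg.norm(R @ phi - m)

# Minimal particle swarm: each particle is a candidate spectrum.
n_particles, iters, w, c1, c2 = 30, 300, 0.7, 1.5, 1.5
pos = rng.uniform(0, 1, size=(n_particles, n_bins))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n_particles, n_bins))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, None)   # spectra are non-negative
    fit = np.array([fitness(p) for p in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmin()].copy()
```

    The swarm's global best `gbest` only ever improves, so after a few hundred iterations it sits close to the true spectrum for this small, well-posed toy problem.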

  11. Proposing a Web-Based Tutorial System to Teach Malay Language Braille Code to the Sighted

    Science.gov (United States)

    Wah, Lee Lay; Keong, Foo Kok

    2010-01-01

    The "e-KodBrailleBM Tutorial System" is a web-based tutorial system which is specially designed to teach, facilitate and support the learning of Malay Language Braille Code to individuals who are sighted. The targeted group includes special education teachers, pre-service teachers, and parents. Learning Braille code involves memorisation…

  12. Computer codes for RF cavity design

    International Nuclear Information System (INIS)

    Ko, K.

    1992-08-01

    In RF cavity design, numerical modeling is assuming an increasingly important role with the help of sophisticated computer codes and powerful yet affordable computers. A description of the cavity codes in use in the accelerator community has been given previously. The present paper will address the latest developments and discuss their applications to cavity tuning and matching problems.

  13. Understanding Infants' and Children's Social Learning about Foods: Previous Research and New Prospects

    Science.gov (United States)

    Shutts, Kristin; Kinzler, Katherine D.; DeJesus, Jasmine M.

    2013-01-01

    Developmental psychologists have devoted significant attention to investigating how children learn from others' actions, emotions, and testimony. Yet most of this research has examined children's socially guided learning about artifacts. The present article focuses on a domain that has received limited attention from those interested in the…

  14. Imagery and Verbal Coding Approaches in Chinese Vocabulary Instruction

    Science.gov (United States)

    Shen, Helen H.

    2010-01-01

    This study consists of two instructional experiments. Within the framework of dual coding theory, the study compares the learning effects of two instructional encoding methods used in Chinese vocabulary instruction among students learning beginning Chinese as a foreign language. One method uses verbal encoding only, and the other method uses…

  15. Developing a Code of Practice for Learning Analytics

    Science.gov (United States)

    Sclater, Niall

    2016-01-01

    Ethical and legal objections to learning analytics are barriers to development of the field, thus potentially denying students the benefits of predictive analytics and adaptive learning. Jisc, a charitable organization that champions the use of digital technologies in UK education and research, has attempted to address this with the development of…

  16. Authorship attribution of source code by using back propagation neural network based on particle swarm optimization.

    Science.gov (United States)

    Yang, Xinyu; Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao

    2017-01-01

    Authorship attribution is the task of identifying the most likely author of a given sample among a set of known candidate authors. It can not only be applied to discover the original author of plain text, such as novels, blogs, emails and posts, but can also be used to identify source code programmers. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to solving authorship disputes or software plagiarism detection. This paper aims to propose a new method to identify the programmer of Java source code samples with higher accuracy. To this end, it first introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. It begins by computing a set of defined feature metrics, including lexical and layout metrics and structure and syntax metrics, 19 dimensions in total. These metrics are then input to the neural network for supervised learning, the weights of which are produced by the hybrid PSO-BP algorithm. The effectiveness of the proposed method is evaluated on a collected dataset of 3,022 Java files belonging to 40 authors. Experimental results show that the proposed method achieves 91.060% accuracy, and a comparison with previous work on authorship attribution of source code for the Java language illustrates that the proposed method outperforms the others overall, with an acceptable overhead.
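
    The feature-extraction step can be sketched as follows; the four metrics here are illustrative stand-ins invented for this sketch, not the paper's actual 19 lexical/layout/structure/syntax metrics:

```python
def style_metrics(source: str):
    """A few illustrative lexical/layout style metrics for a source file."""
    lines = source.splitlines()
    n_lines = max(len(lines), 1)
    return {
        "avg_line_length": sum(len(l) for l in lines) / n_lines,
        "blank_line_ratio": sum(1 for l in lines if not l.strip()) / n_lines,
        "tab_indent_ratio": sum(1 for l in lines if l.startswith("\t")) / n_lines,
        "brace_own_line_ratio": sum(1 for l in lines if l.strip() == "{") / n_lines,
    }

java_snippet = """public class Hello {
\tpublic static void main(String[] args) {
\t\tSystem.out.println("Hello");
\t}
}"""
metrics = style_metrics(java_snippet)
```

    In the paper's pipeline, vectors like `metrics` (19-dimensional there) would be fed to the PSO-trained BP network for supervised classification.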

  17. Sparse coding can predict primary visual cortex receptive field changes induced by abnormal visual input.

    Science.gov (United States)

    Hunt, Jonathan J; Dayan, Peter; Goodhill, Geoffrey J

    2013-01-01

    Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were matched to a similar and highly faithful degree by all the models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields.

  18. Women with learning disabilities and access to cervical screening: retrospective cohort study using case control methods

    Science.gov (United States)

    Reynolds, Fiona; Stanistreet, Debbi; Elton, Peter

    2008-01-01

    Background Several studies in the UK have suggested that women with learning disabilities may be less likely to receive cervical screening tests, and a previous local study had found that GPs considered screening unnecessary for women with learning disabilities. This study set out to ascertain whether women with learning disabilities are more likely to be ceased from a cervical screening programme than women without, and to examine the reasons given for ceasing women with learning disabilities. It was carried out in Bury, Heywood-and-Middleton and Rochdale. Methods Using retrospective cohort study methods, women with learning disabilities were identified by Read code, and their cervical screening records were compared with the Call-and-Recall records of women without learning disabilities in order to examine their screening histories. Analysis was carried out using case-control methods, 1:2 (women with learning disabilities : women without learning disabilities), calculating odds ratios. Results 267 women's records were compared with the records of 534 women without learning disabilities. Women with learning disabilities had an odds ratio (OR) of 0.48 (confidence interval (CI) 0.38 – 0.58; χ²: 72.227; p < 0.001) compared with women without learning disabilities. Conclusion The reasons given for ceasing and/or not screening suggest that merely being coded as having a learning disability is not the sole reason for these actions. There are training needs among smear takers regarding appropriate reasons not to screen and for providing screening for women with learning disabilities. PMID:18218106
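
    For readers unfamiliar with the case-control arithmetic, an odds ratio and its Woolf-method 95% confidence interval can be computed from a 2×2 table as below. The counts are hypothetical, chosen for illustration, and are not the study's data:

```python
import math

# Hypothetical 2x2 screening table:
# rows = learning disability yes/no, columns = screened / not screened.
a, b = 120, 147   # with learning disabilities: screened / not screened
c, d = 400, 134   # comparison group:           screened / not screened

odds_ratio = (a * d) / (b * c)

# 95% CI on the log-odds scale (Woolf method).
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
```

    An odds ratio below 1 with a confidence interval excluding 1, as reported in the study (OR 0.48, CI 0.38–0.58), indicates reduced odds of screening in the exposed group.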

  19. Some aspects of grading Java code submissions in MOOCs

    Directory of Open Access Journals (Sweden)

    Sándor Király

    2017-07-01

    Full Text Available Recently, massive open online courses (MOOCs) have been offering a new online approach in the field of distance learning and online education. A typical MOOC course consists of video lectures, reading material and easily accessible tests for students. For a computer programming course, it is important to provide interactive, dynamic, online coding exercises and more complex programming assignments for learners. It is also expedient for students to receive prompt feedback on their coding submissions. Although automated programme evaluation subsystems are capable of assessing the source programme files held in learning management systems, MOOC systems lack such a grader for evaluating students' assignments, with the result that course staff would be required to assess thousands of programmes submitted by the participants of the course without the benefit of an automatic grader. This paper presents a new concept for grading the programming submissions of students, with improved techniques based on the Java unit testing framework that enable automatic grading of code chunks. Examples are also given, such as the creation of unique exercises by dynamically generating the parameters of the assignment in a MOOC programming course, combined with a kind of coding-style recognition to teach coding standards.

  20. Computer codes for RF cavity design

    International Nuclear Information System (INIS)

    Ko, K.

    1992-01-01

    In RF cavity design, numerical modeling is assuming an increasingly important role with the help of sophisticated computer codes and powerful yet affordable computers. A description of the cavity codes in use in the accelerator community has been given previously. The present paper will address the latest developments and discuss their applications to cavity tuning and matching problems. (Author) 8 refs., 10 figs

  1. Opportunistic Adaptive Transmission for Network Coding Using Nonbinary LDPC Codes

    Directory of Open Access Journals (Sweden)

    Cocco Giuseppe

    2010-01-01

    Full Text Available Network coding makes it possible to exploit the spatial diversity naturally present in mobile wireless networks and can be seen as an example of cooperative communication at the link layer and above. Such a promising technique needs to rely on a suitable physical layer in order to achieve its best performance. In this paper, we present an opportunistic packet scheduling method based on physical layer considerations. We extend the channel adaptation proposed for the broadcast phase of asymmetric two-way bidirectional relaying to a generic number of sinks and apply it in a network context. The method consists of adapting the information rate for each receiving node according to its channel status, independently of the other nodes. In this way, a higher network throughput can be achieved at the expense of slightly higher complexity at the transmitter. This configuration makes it possible to perform rate adaptation while fully preserving the benefits of channel and network coding. We carry out an information-theoretic analysis of this approach and of that typically used in network coding. Numerical results based on nonbinary LDPC codes confirm the effectiveness of our approach with respect to previously proposed opportunistic scheduling techniques.
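
    The gain from per-sink rate adaptation can be sketched with the Shannon rate log2(1 + SNR): each sink's rate follows its own channel, while a non-adaptive scheme is pinned to the worst channel. The sink names and SNR values are invented for illustration:

```python
import math

# Hypothetical per-sink SNRs (linear scale). With adaptation, the
# transmitter picks each sink's rate from that sink's channel state,
# independently of the others.
snrs = {"sink_A": 10.0, "sink_B": 3.0, "sink_C": 0.5}

rates = {s: math.log2(1.0 + snr) for s, snr in snrs.items()}  # bits/symbol

# Non-adaptive baseline: one common rate, dictated by the worst channel.
uniform_rate = min(rates.values())

adaptive_throughput = sum(rates.values())
uniform_throughput = uniform_rate * len(rates)
```

    The spread between `adaptive_throughput` and `uniform_throughput` is exactly the opportunistic gain the abstract describes.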

  2. Real-Coded Quantum-Inspired Genetic Algorithm-Based BP Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Jianyong Liu

    2015-01-01

    Full Text Available A method that uses the real-coded quantum-inspired genetic algorithm (RQGA) to optimize the weights and thresholds of a BP neural network is proposed, to overcome the defect that the gradient descent method makes the algorithm fall easily into a local optimum during the learning process. The quantum genetic algorithm (QGA) has good directional global optimization ability, but the conventional QGA is based on binary coding, and the speed of calculation is reduced by the coding and decoding processes. RQGA is therefore introduced to explore the search space, and an improved varied learning rate is adopted to train the BP neural network. Simulation tests show that the proposed algorithm rapidly converges to a solution that conforms to the constraint conditions.

  3. Classification of multispectral or hyperspectral satellite imagery using clustering of sparse approximations on sparse representations in learned dictionaries obtained using efficient convolutional sparse coding

    Science.gov (United States)

    Moody, Daniela; Wohlberg, Brendt

    2018-01-02

    An approach for land cover classification, seasonal and yearly change detection and monitoring, and identification of changes in man-made features may use a clustering of sparse approximations (CoSA) on sparse representations in learned dictionaries. The learned dictionaries may be derived using efficient convolutional sparse coding to build multispectral or hyperspectral, multiresolution dictionaries that are adapted to regional satellite image data. Sparse image representations of images over the learned dictionaries may be used to perform unsupervised k-means clustering into land cover categories. The clustering process behaves as a classifier in detecting real variability. This approach may combine spectral and spatial textural characteristics to detect geologic, vegetative, hydrologic, and man-made features, as well as changes in these features over time.
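
    The clustering stage can be sketched with a plain k-means over per-pixel feature vectors. The features below are simulated stand-ins drawn around two "land cover" centers; the real features come from sparse approximations over a learned convolutional dictionary, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated per-pixel feature vectors from two well-separated classes.
features = np.vstack([
    rng.normal(0.0, 0.3, size=(50, 8)),
    rng.normal(3.0, 0.3, size=(50, 8)),
])

def kmeans(X, k, iters=20):
    """Plain Lloyd's k-means with centers initialized from data points."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest center, then re-estimate.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.stack([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(features, k=2)
```

    With the two blobs this far apart, the unsupervised labels recover the class split exactly, which is the sense in which the abstract says the clustering "behaves as a classifier".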

  4. Color Coding of Circuit Quantities in Introductory Circuit Analysis Instruction

    Science.gov (United States)

    Reisslein, Jana; Johnson, Amy M.; Reisslein, Martin

    2015-01-01

    Learning the analysis of electrical circuits represented by circuit diagrams is often challenging for novice students. An open research question in electrical circuit analysis instruction is whether color coding of the mathematical symbols (variables) that denote electrical quantities can improve circuit analysis learning. The present study…

  5. One way quantum repeaters with quantum Reed-Solomon codes

    OpenAIRE

    Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang

    2018-01-01

    We show that quantum Reed-Solomon codes constructed from classical Reed-Solomon codes can approach the capacity on the quantum erasure channel of $d$-level systems for large dimension $d$. We study the performance of one-way quantum repeaters with these codes and obtain a significant improvement in key generation rate compared to previously investigated encoding schemes with quantum parity codes and quantum polynomial codes. We also compare the three generations of quantum repeaters using quan...

  6. Identification of coding and non-coding mutational hotspots in cancer genomes.

    Science.gov (United States)

    Piraino, Scott W; Furney, Simon J

    2017-01-05

    The identification of mutations that play a causal role in tumour development, so-called "driver" mutations, is of critical importance for understanding how cancers form and how they might be treated. Several large cancer sequencing projects have identified genes that are recurrently mutated in cancer patients, suggesting a role in tumourigenesis. While the landscape of coding drivers has been extensively studied and many of the most prominent driver genes are well characterised, comparatively less is known about the role of mutations in the non-coding regions of the genome in cancer development. The continuing fall in genome sequencing costs has resulted in a concomitant increase in the number of cancer whole genome sequences being produced, facilitating systematic interrogation of both the coding and non-coding regions of cancer genomes. To examine the mutational landscapes of tumour genomes, we have developed a novel method to identify mutational hotspots in tumour genomes using both mutational data and information on evolutionary conservation. We have applied our methodology to over 1300 whole cancer genomes and show that it identifies prominent coding and non-coding regions that are known or highly suspected to play a role in cancer. Importantly, we applied our method to the entire genome, rather than relying on predefined annotations (e.g. promoter regions), and we highlight recurrently mutated regions that may have resulted from increased exposure to mutational processes rather than selection, some of which have been identified previously as targets of selection. Finally, we implicate several pan-cancer and cancer-specific candidate non-coding regions, which could be involved in tumourigenesis. We have developed a framework to identify mutational hotspots in cancer genomes, which is applicable to the entire genome. This framework identifies known and novel coding and non-coding mutational hotspots and can be used to differentiate candidate driver regions from

  7. One-way quantum repeaters with quantum Reed-Solomon codes

    Science.gov (United States)

    Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang

    2018-05-01

    We show that quantum Reed-Solomon codes constructed from classical Reed-Solomon codes can approach the capacity on the quantum erasure channel of d -level systems for large dimension d . We study the performance of one-way quantum repeaters with these codes and obtain a significant improvement in key generation rate compared to previously investigated encoding schemes with quantum parity codes and quantum polynomial codes. We also compare the three generations of quantum repeaters using quantum Reed-Solomon codes and identify parameter regimes where each generation performs the best.
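
    The classical half of the construction, an [n, k] Reed-Solomon code recovering the message from any k surviving symbols (i.e. tolerating any n − k erasures), can be sketched over a small prime field. Real deployments typically use GF(2^m); this toy uses GF(13) purely so the arithmetic stays readable:

```python
P = 13  # small prime field GF(13); a sketch, not a production RS code

def poly_eval(coeffs, x):
    """Evaluate a polynomial (low-order coefficients first) at x mod P."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % P
    return y

def rs_encode(msg, n):
    """[n, k] RS codeword: evaluate the message polynomial at n points."""
    return [poly_eval(msg, x) for x in range(n)]

def rs_recover(points):
    """Rebuild the k message symbols from any k surviving (x, y) pairs
    by Lagrange interpolation over GF(P)."""
    k = len(points)
    msg = [0] * k
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if i == j:
                continue
            # Multiply the basis polynomial by (x - xj).
            basis = [((basis[t - 1] if t > 0 else 0)
                      - xj * (basis[t] if t < len(basis) else 0)) % P
                     for t in range(len(basis) + 1)]
            denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P  # Fermat modular inverse
        for t in range(k):
            msg[t] = (msg[t] + scale * basis[t]) % P
    return msg

msg = [5, 1, 7]                      # k = 3 message symbols
code = rs_encode(msg, 7)             # n = 7: survives any 4 erasures
survivors = [(x, code[x]) for x in (0, 3, 6)]   # 4 symbols erased
recovered = rs_recover(survivors)
```

    This maximum-distance-separable behavior of the classical code is what the quantum construction inherits when approaching the erasure-channel capacity.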

  8. An Outline of the New Norwegian Criminal Code

    Directory of Open Access Journals (Sweden)

    Jørn Jacobsen

    2015-12-01

    Full Text Available This article gives an overview of the new criminal code, its background and content. It maps out the code’s background, the legislative process and central ideas. Furthermore, the article gives an outline of the general criteria for criminal responsibility according to the code, the offences and forms of punishment and other reactions. The article emphasises the most important changes from the previous code of 1902. To some degree, strengths and weaknesses of the new code are addressed.

  9. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-12-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.

  10. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-04-11

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.

  11. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup; Swanson, Robin; Heide, Felix; Wetzstein, Gordon; Heidrich, Wolfgang

    2017-01-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.

  12. Greedy vs. L1 convex optimization in sparse coding

    DEFF Research Database (Denmark)

    Ren, Huamin; Pan, Hong; Olsen, Søren Ingvor

    2015-01-01

    Sparse representation has been applied successfully in many image analysis applications, including abnormal event detection, in which a baseline is to learn a dictionary from the training data and detect anomalies from its sparse codes. During this procedure, sparse codes which can be achieved...... solutions. Considering the property of abnormal event detection, i.e., only normal videos are used as training data due to practical reasons, effective codes in classification application may not perform well in abnormality detection. Therefore, we compare the sparse codes and comprehensively evaluate...... their performance from various aspects to better understand their applicability, including computation time, reconstruction error, sparsity, detection...
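    To make the greedy-versus-L1 contrast concrete, here is a minimal sketch of the two solver families the abstract compares. These are generic textbook versions (orthogonal matching pursuit and iterative soft-thresholding), not the authors' implementations:

```python
import numpy as np

# Greedy vs. L1 sparse coding for min ||x - D a||^2 with a sparse,
# given a dictionary D with unit-norm columns (illustrative sketch).
def omp(D, x, k):
    """Greedy: pick k atoms one at a time by residual correlation."""
    residual, support = x.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    a = np.zeros(D.shape[1])
    a[support] = coef
    return a

def ista(D, x, lam=0.1, iters=500):
    """Convex: iterative soft-thresholding for the L1 (lasso) objective."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        a = a - step * (D.T @ (D @ a - x))                       # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0) # shrinkage
    return a
```

    The greedy code is exactly k-sparse by construction, while the L1 code trades a small reconstruction bias for a convex problem; which behaviour is preferable is precisely the applicability question the paper evaluates.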

  13. Fast Sparse Coding for Range Data Denoising with Sparse Ridges Constraint

    Directory of Open Access Journals (Sweden)

    Zhi Gao

    2018-05-01

    Full Text Available Light detection and ranging (LiDAR) sensors have been widely deployed on intelligent systems such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) to perform localization, obstacle detection, and navigation tasks. Thus, research into range data processing with competitive performance in terms of both accuracy and efficiency has attracted increasing attention. Sparse coding has revolutionized signal processing and led to state-of-the-art performance in a variety of applications. However, dictionary learning, which plays the central role in sparse coding techniques, is computationally demanding, resulting in its limited applicability in real-time systems. In this study, we propose sparse coding algorithms with a fixed pre-learned ridge dictionary to realize range data denoising via leveraging the regularity of laser range measurements in man-made environments. Experiments on both synthesized data and real data demonstrate that our method obtains accuracy comparable to that of sophisticated sparse coding methods, but with much higher computational efficiency.

  14. Fast Sparse Coding for Range Data Denoising with Sparse Ridges Constraint.

    Science.gov (United States)

    Gao, Zhi; Lao, Mingjie; Sang, Yongsheng; Wen, Fei; Ramesh, Bharath; Zhai, Ruifang

    2018-05-06

    Light detection and ranging (LiDAR) sensors have been widely deployed on intelligent systems such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) to perform localization, obstacle detection, and navigation tasks. Thus, research into range data processing with competitive performance in terms of both accuracy and efficiency has attracted increasing attention. Sparse coding has revolutionized signal processing and led to state-of-the-art performance in a variety of applications. However, dictionary learning, which plays the central role in sparse coding techniques, is computationally demanding, resulting in its limited applicability in real-time systems. In this study, we propose sparse coding algorithms with a fixed pre-learned ridge dictionary to realize range data denoising via leveraging the regularity of laser range measurements in man-made environments. Experiments on both synthesized data and real data demonstrate that our method obtains accuracy comparable to that of sophisticated sparse coding methods, but with much higher computational efficiency.

  15. Performance analysis of multiple interference suppression over asynchronous/synchronous optical code-division multiple-access system based on complementary/prime/shifted coding scheme

    Science.gov (United States)

    Nieh, Ta-Chun; Yang, Chao-Chin; Huang, Jen-Fa

    2011-08-01

    A complete complementary/prime/shifted prime (CPS) code family for the optical code-division multiple-access (OCDMA) system is proposed. Based on the ability of complete complementary (CC) code, the multiple-access interference (MAI) can be suppressed and eliminated via spectral amplitude coding (SAC) OCDMA system under asynchronous/synchronous transmission. By utilizing the shifted prime (SP) code in the SAC scheme, the hardware implementation of encoder/decoder can be simplified with a reduced number of optical components, such as arrayed waveguide grating (AWG) and fiber Bragg grating (FBG). This system has superior performance compared to previous bipolar-bipolar coding OCDMA systems.

  16. Navigation towards a goal position: from reactive to generalised learned control

    Energy Technology Data Exchange (ETDEWEB)

    Freire da Silva, Valdinei [Laboratorio de Tecnicas Inteligentes - LTI, Escola Politecnica da Universidade de Sao Paulo, Av. Prof. Luciano Gualberto, trav.3, n.158, Cidade Universitaria Sao Paulo (Brazil); Selvatici, Antonio Henrique [Universidade Nove de Julho, Rua Vergueiro, 235, Sao Paulo (Brazil); Reali Costa, Anna Helena, E-mail: valdinei.freire@gmail.com, E-mail: antoniohps@uninove.br, E-mail: anna.reali@poli.usp.br [Laboratorio de Tecnicas Inteligentes - LTI, Escola Politecnica da Universidade de Sao Paulo, Av. Prof. Luciano Gualberto, trav.3, n.158, Cidade Universitaria Sao Paulo (Brazil)

    2011-03-01

    The task of navigating to a target position in space is a fairly common task for a mobile robot. It is desirable that this task is performed even in previously unknown environments. One reactive architecture explored before addresses this challenge by defining a hand-coded coordination of primitive behaviours, encoded by the Potential Fields method. Our first approach to improve the performance of this architecture adds a learning step to autonomously find the best way to coordinate primitive behaviours with respect to an arbitrary performance criterion. Because of the limitations presented by the Potential Fields method, especially in relation to non-convex obstacles, we are investigating the use of Relational Reinforcement Learning as a method to not only learn to act in the current environment, but also to generalise prior knowledge to the current environment in order to achieve the goal more quickly in a non-convex structured environment. We show the results of our previous efforts in reaching goal positions along with our current research on generalised approaches.

  17. Navigation towards a goal position: from reactive to generalised learned control

    International Nuclear Information System (INIS)

    Freire da Silva, Valdinei; Selvatici, Antonio Henrique; Reali Costa, Anna Helena

    2011-01-01

    The task of navigating to a target position in space is a fairly common task for a mobile robot. It is desirable that this task is performed even in previously unknown environments. One reactive architecture explored before addresses this challenge by defining a hand-coded coordination of primitive behaviours, encoded by the Potential Fields method. Our first approach to improve the performance of this architecture adds a learning step to autonomously find the best way to coordinate primitive behaviours with respect to an arbitrary performance criterion. Because of the limitations presented by the Potential Fields method, especially in relation to non-convex obstacles, we are investigating the use of Relational Reinforcement Learning as a method to not only learn to act in the current environment, but also to generalise prior knowledge to the current environment in order to achieve the goal more quickly in a non-convex structured environment. We show the results of our previous efforts in reaching goal positions along with our current research on generalised approaches.

  18. The development of automaticity in short-term memory search: Item-response learning and category learning.

    Science.gov (United States)

    Cao, Rui; Nosofsky, Robert M; Shiffrin, Richard M

    2017-05-01

    In short-term-memory (STM)-search tasks, observers judge whether a test probe was present in a short list of study items. Here we investigated the long-term learning mechanisms that lead to the highly efficient STM-search performance observed under conditions of consistent-mapping (CM) training, in which targets and foils never switch roles across trials. In item-response learning, subjects learn long-term mappings between individual items and target versus foil responses. In category learning, subjects learn high-level codes corresponding to separate sets of items and learn to attach old versus new responses to these category codes. To distinguish between these 2 forms of learning, we tested subjects in categorized varied mapping (CV) conditions: There were 2 distinct categories of items, but the assignment of categories to target versus foil responses varied across trials. In cases involving arbitrary categories, CV performance closely resembled standard varied-mapping performance without categories and departed dramatically from CM performance, supporting the item-response-learning hypothesis. In cases involving prelearned categories, CV performance resembled CM performance, as long as there was sufficient practice or steps taken to reduce trial-to-trial category-switching costs. This pattern of results supports the category-coding hypothesis for sufficiently well-learned categories. Thus, item-response learning occurs rapidly and is used early in CM training; category learning is much slower but is eventually adopted and is used to increase the efficiency of search beyond that available from item-response learning. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Role of Symbolic Coding and Rehearsal Processes in Observational Learning

    Science.gov (United States)

    Bandura, Albert; Jeffery, Robert W.

    1973-01-01

    Results were interpreted as supporting a social learning view of observational learning that emphasizes central processing of response information in the acquisition phase and motor reproduction and incentive processes in the overt enactment of what has been learned. (Author)

  20. Deep learning with Python

    CERN Document Server

    Chollet, Francois

    2018-01-01

    DESCRIPTION Deep learning is applicable to a widening range of artificial intelligence problems, such as image classification, speech recognition, text classification, question answering, text-to-speech, and optical character recognition. Deep Learning with Python is structured around a series of practical code examples that illustrate each new concept introduced and demonstrate best practices. By the time you reach the end of this book, you will have become a Keras expert and will be able to apply deep learning in your own projects. KEY FEATURES • Practical code examples • In-depth introduction to Keras • Teaches the difference between Deep Learning and AI ABOUT THE TECHNOLOGY Deep learning is the technology behind photo tagging systems at Facebook and Google, self-driving cars, speech recognition systems on your smartphone, and much more. AUTHOR BIO Francois Chollet is the author of Keras, one of the most widely used libraries for deep learning in Python. He has been working with deep neural ...

  1. A System for English Vocabulary Acquisition Based on Code-Switching

    Science.gov (United States)

    Mazur, Michal; Karolczak, Krzysztof; Rzepka, Rafal; Araki, Kenji

    2016-01-01

    Vocabulary plays an important part in second language learning and there are many existing techniques to facilitate word acquisition. One of these methods is code-switching, or mixing the vocabulary of two languages in one sentence. In this paper the authors propose an experimental system for computer-assisted English vocabulary learning in…

  2. Learning from a provisioning site: code of conduct compliance and behaviour of whale sharks in Oslob, Cebu, Philippines.

    Science.gov (United States)

    Schleimer, Anna; Araujo, Gonzalo; Penketh, Luke; Heath, Anna; McCoy, Emer; Labaja, Jessica; Lucey, Anna; Ponzo, Alessandro

    2015-01-01

    While shark-based tourism is a rapidly growing global industry, there is ongoing controversy about the effects of provisioning on the target species. This study investigated the effect of feeding on whale sharks (Rhincodon typus) at a provisioning site in Oslob, Cebu, in terms of arrival time, avoidance and feeding behaviour using photo-identification and focal follows. Additionally, compliance to the code of conduct in place was monitored to assess tourism pressure on the whale sharks. Newly identified sharks gradually arrived earlier to the provisioning site after their initial sighting, indicating that the animals learn to associate the site with food rewards. Whale sharks with a long resighting history showed anticipatory behaviour and were recorded at the site on average 5 min after the arrival of feeder boats. Results from a generalised linear mixed model indicated that animals with a longer resighting history were less likely to show avoidance behaviour to touches or boat contact. Similarly, sequential data on feeding behaviour was modelled using a generalised estimating equations approach, which suggested that experienced whale sharks were more likely to display vertical feeding behaviour. It was proposed that the continuous source of food provides a strong incentive for the modification of behaviours, i.e., learning, through conditioning. Whale sharks are large opportunistic filter feeders in a mainly oligotrophic environment, where the ability to use novel food sources by modifying their behaviour could be of great advantage. Non-compliance to the code of conduct in terms of minimum distance to the shark (2 m) increased from 79% in 2012 to 97% in 2014, suggesting a high tourism pressure on the whale sharks in Oslob. The long-term effects of the observed behavioural modifications along with the high tourism pressure remain unknown. However, management plans are traditionally based on the precautionary principle, which aims to take preventive actions even

  3. High Order Tensor Formulation for Convolutional Sparse Coding

    KAUST Repository

    Bibi, Adel Aamer; Ghanem, Bernard

    2017-01-01

    Convolutional sparse coding (CSC) has gained attention for its successful role as a reconstruction and a classification tool in the computer vision and machine learning community. Current CSC methods can only reconstruct single-feature 2D images

  4. On Coding Non-Contiguous Letter Combinations

    Directory of Open Access Journals (Sweden)

    Frédéric Dandurand

    2011-06-01

    Full Text Available Starting from the hypothesis that printed word identification initially involves the parallel mapping of visual features onto location-specific letter identities, we analyze the type of information that would be involved in optimally mapping this location-specific orthographic code onto a location-invariant lexical code. We assume that some intermediate level of coding exists between individual letters and whole words, and that this involves the representation of letter combinations. We then investigate the nature of this intermediate level of coding given the constraints of optimality. This intermediate level of coding is expected to compress data while retaining as much information as possible about word identity. Information conveyed by letters is a function of how much they constrain word identity and how visible they are. Optimization of this coding is a combination of minimizing resources (using the most compact representations and maximizing information. We show that in a large proportion of cases, non-contiguous letter sequences contain more information than contiguous sequences, while at the same time requiring less precise coding. Moreover, we found that the best predictor of human performance in orthographic priming experiments was within-word ranking of conditional probabilities, rather than average conditional probabilities. We conclude that from an optimality perspective, readers learn to select certain contiguous and non-contiguous letter combinations as information that provides the best cue to word identity.
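    The information argument above can be illustrated on a hypothetical mini-lexicon (ours, not the authors' corpus): counting how many words are consistent with a contiguous versus a non-contiguous letter pair gives the conditional probability of word identity given that pair.

```python
from collections import defaultdict

# Toy illustration: how much a letter pair constrains word identity,
# comparing contiguous pairs with non-contiguous ("open bigram") pairs.
lexicon = ["form", "from", "farm", "fort", "root"]

def pairs(word, max_gap):
    # ordered letter pairs separated by up to max_gap intervening letters
    return {(word[i], word[j])
            for i in range(len(word))
            for j in range(i + 1, min(len(word), i + 2 + max_gap))}

def word_counts(max_gap):
    counts = defaultdict(set)
    for w in lexicon:
        for p in pairs(w, max_gap):
            counts[p].add(w)
    return counts

contig = word_counts(max_gap=0)   # contiguous bigrams only
openbg = word_counts(max_gap=2)   # open bigrams, gaps of up to 2 letters

# P(word | pair) = 1 / (number of lexicon words containing the pair):
# the contiguous pair ('f','o') appears in "form" and "fort", so it
# identifies a word with probability 1/2, while a non-contiguous pair
# such as ('f','m') is shared by "form", "from" and "farm".
```

    In this tiny example the non-contiguous inventory carries pairs the contiguous one lacks entirely, which is the kind of trade-off between compactness and informativeness the paper quantifies at scale.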

  5. Vectorization of the KENO V.a criticality safety code

    International Nuclear Information System (INIS)

    Hollenbach, D.F.; Dodds, H.L.; Petrie, L.M.

    1991-01-01

    The development of the vector processor, which is used in the current generation of supercomputers and is beginning to be used in workstations, provides the potential for dramatic speed-up for codes that are able to process data as vectors. Unfortunately, the stochastic nature of Monte Carlo codes prevents the old scalar version of these codes from taking advantage of the vector processors. New Monte Carlo algorithms that process all the histories undergoing the same event as a batch are required. Recently, new vectorized Monte Carlo codes have been developed that show significant speed-ups when compared to the scalar versions of themselves or equivalent codes. This paper discusses the vectorization of an already existing and widely used criticality safety code, KENO V.a. All the changes made to KENO V.a are transparent to the user, making it possible to upgrade from the standard scalar version of KENO V.a to the vectorized version without learning a new code.
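    The batching idea (advancing all histories undergoing the same event together, rather than following one history at a time) can be sketched in a few lines. This is a generic array-based illustration, not KENO V.a code:

```python
import numpy as np

# Hedged sketch of event-batched Monte Carlo: every live history takes
# part in one vectorized "collision event" per loop iteration, which is
# the data layout a vector processor can exploit.
def batched_walk(n_histories, absorb_prob=0.3, max_steps=200, seed=0):
    rng = np.random.default_rng(seed)
    alive = np.ones(n_histories, dtype=bool)
    steps = np.zeros(n_histories, dtype=int)
    for _ in range(max_steps):
        if not alive.any():
            break
        steps[alive] += 1                              # count this collision
        absorbed = rng.random(n_histories) < absorb_prob
        alive &= ~absorbed                             # kill absorbed histories
    return steps

# The mean number of collisions before absorption should approach
# 1 / absorb_prob, the mean of a geometric distribution.
```

    A scalar code would run this loop once per history; the batched form replaces that with a handful of whole-array operations per event.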

  6. Ethical and educational considerations in coding hand surgeries.

    Science.gov (United States)

    Lifchez, Scott D; Leinberry, Charles F; Rivlin, Michael; Blazar, Philip E

    2014-07-01

    To assess treatment coding knowledge and practices among residents, fellows, and attending hand surgeons. Through the use of 6 hypothetical cases, we developed a coding survey to assess coding knowledge and practices. We e-mailed this survey to residents, fellows, and attending hand surgeons. Additionally, we asked 2 professional coders to code these cases. A total of 71 participants completed the survey out of 134 people to whom the survey was sent (response rate = 53%). We observed marked disparity in codes chosen among surgeons and among professional coders. Results of this study indicate that coding knowledge, not just its ethical application, had a major role in coding procedures accurately. Surgical coding is an essential part of a hand surgeon's practice and is not well learned during residency or fellowship. Whereas ethical issues such as deliberate unbundling and upcoding may have a role in inaccurate coding, lack of knowledge among surgeons and coders has a major role as well. Coding has a critical role in every hand surgery practice. Inconsistencies among those polled in this study reveal that an increase in education on coding during training and improvement in the clarity and consistency of the Current Procedural Terminology coding rules themselves are needed. Copyright © 2014 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  7. Recent developments in the CONTAIN-LMR code

    International Nuclear Information System (INIS)

    Murata, K.K.

    1990-01-01

    Through an international collaborative effort, a special version of the CONTAIN code is being developed for integrated mechanistic analysis of the conditions in liquid metal reactor (LMR) containments during severe accidents. The capabilities of the most recent code version, CONTAIN LMR/1B-Mod.1, are discussed. These include new models for the treatment of two condensables, sodium condensation on aerosols, chemical reactions, hygroscopic aerosols, and concrete outgassing. This code version also incorporates all of the previously released LMR model enhancements. The results of an integral demonstration calculation of a severe core-melt accident scenario are given to illustrate the features of this code version. 11 refs., 7 figs., 1 tab

  8. Women with learning disabilities and access to cervical screening: retrospective cohort study using case control methods

    Directory of Open Access Journals (Sweden)

    Stanistreet Debbi

    2008-01-01

    Full Text Available Abstract Background Several studies in the UK have suggested that women with learning disabilities may be less likely to receive cervical screening tests, and a previous local study had found that GPs considered screening unnecessary for women with learning disabilities. This study set out to ascertain whether women with learning disabilities are more likely to be ceased from a cervical screening programme than women without; and to examine the reasons given for ceasing women with learning disabilities. It was carried out in Bury, Heywood-and-Middleton and Rochdale. Methods Using retrospective cohort study methods, women with learning disabilities were identified by Read code, and their cervical screening records were compared with the Call-and-Recall records of women without learning disabilities in order to examine their screening histories. Analysis was carried out using case-control methods – 1:2 (women with learning disabilities : women without learning disabilities), calculating odds ratios. Results 267 women's records were compared with the records of 534 women without learning disabilities. Women with learning disabilities had an odds ratio (OR) of 0.48 (Confidence Interval (CI) 0.38–0.58; X²: 72.227; p-value …; X²: 24.236; p-value …; X²: 286.341; p-value …). Conclusion The reasons given for ceasing and/or not screening suggest that merely being coded as having a learning disability is not the sole reason for these actions. There are training needs among smear takers regarding appropriate reasons not to screen and providing screening for women with learning disabilities.

  9. Model-Driven Engineering of Machine Executable Code

    Science.gov (United States)

    Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira

    Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs doing static analyses. Further, we report on important lessons learned on the benefits and drawbacks while using the following technologies: using the Scala programming language as target of code generation, using XML-Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint like tool. Finally, we report on the use of Prolog for writing model transformations.

  10. Impacts of Model Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Athalye, Rahul A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sivaraman, Deepak [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elliott, Douglas B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Bing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bartlett, Rosemarie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-10-31

    The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states which have codes which are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.

  11. The Role of Code-Switching in Bilingual Creativity

    Science.gov (United States)

    Kharkhurin, Anatoliy V.; Wei, Li

    2015-01-01

    This study further explores the theme of bilingual creativity with the present focus on code-switching. Specifically, it investigates whether code-switching practice has an impact on creativity. In line with the previous research, selective attention was proposed as a potential cognitive mechanism, which on the one hand would benefit from…

  12. Secret Codes, Remainder Arithmetic, and Matrices.

    Science.gov (United States)

    Peck, Lyman C.

    This pamphlet is designed for use as enrichment material for able junior and senior high school students who are interested in mathematics. No more than a clear understanding of basic arithmetic is expected. Students are introduced to ideas from number theory and modern algebra by learning mathematical ways of coding and decoding secret messages.…
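    As a flavor of the pamphlet's topic, here is a small Hill-style cipher combining matrices with remainder arithmetic mod 26. The key matrix is a standard invertible-mod-26 textbook-style example chosen here for illustration, and messages are assumed to be uppercase with even length:

```python
import numpy as np

# A 2x2 key whose determinant (9) is coprime to 26, so it is invertible
# in remainder arithmetic mod 26; KEY_INV is its modular inverse.
KEY = np.array([[3, 3], [2, 5]])
KEY_INV = np.array([[15, 17], [20, 9]])   # KEY @ KEY_INV ≡ I (mod 26)

def encode(text, key):
    nums = [ord(c) - ord('A') for c in text]
    out = []
    for i in range(0, len(nums), 2):
        block = np.array(nums[i:i + 2])
        out.extend((key @ block) % 26)    # matrix step + remainder step
    return ''.join(chr(int(n) + ord('A')) for n in out)

def decode(text, key_inv):
    # decoding is just encoding with the inverse key
    return encode(text, key_inv)
```

    Decoding works because multiplying by KEY_INV undoes the multiplication by KEY modulo 26, which is exactly the number-theoretic idea the pamphlet builds up to.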

  13. Understanding infants' and children's social learning about foods: previous research and new prospects.

    Science.gov (United States)

    Shutts, Kristin; Kinzler, Katherine D; DeJesus, Jasmine M

    2013-03-01

    Developmental psychologists have devoted significant attention to investigating how children learn from others' actions, emotions, and testimony. Yet most of this research has examined children's socially guided learning about artifacts. The present article focuses on a domain that has received limited attention from those interested in the development of social cognition: food. We begin by reviewing the available literature on infants' and children's development in the food domain and identify situations in which children evidence both successes and failures in their interactions with foods. We focus specifically on the role that other people play in guiding what children eat and argue that understanding patterns of successes and failures in the food domain requires an appreciation of eating as a social phenomenon. We next propose a series of questions for future research and suggest that examining food selection as a social phenomenon can shed light on mechanisms underlying children's learning from others and provide ideas for promoting healthy social relationships and eating behaviors early in development.

  14. A model of self-directed learning in internal medicine residency: a qualitative study using grounded theory.

    Science.gov (United States)

    Sawatsky, Adam P; Ratelle, John T; Bonnes, Sara L; Egginton, Jason S; Beckman, Thomas J

    2017-02-02

    Existing theories of self-directed learning (SDL) have emphasized the importance of process, personal, and contextual factors. Previous medical education research has largely focused on the process of SDL. We explored the experience with and perception of SDL among internal medicine residents to gain understanding of the personal and contextual factors of SDL in graduate medical education. Using a constructivist grounded theory approach, we conducted 7 focus group interviews with 46 internal medicine residents at an academic medical center. We processed the data by using open coding and writing analytic memos. Team members organized open codes to create axial codes, which were applied to all transcripts. Guided by a previous model of SDL, we developed a theoretical model that was revised through constant comparison with new data as they were collected, and we refined the theory until it had adequate explanatory power and was appropriately grounded in the experiences of residents. We developed a theoretical model of SDL to explain the process, personal, and contextual factors affecting SDL during residency training. The process of SDL began with a trigger that uncovered a knowledge gap. Residents progressed to formulating learning objectives, using resources, applying knowledge, and evaluating learning. Personal factors included motivations, individual characteristics, and the change in approach to SDL over time. Contextual factors included the need for external guidance, the influence of residency program structure and culture, and the presence of contextual barriers. We developed a theoretical model of SDL in medical education that can be used to promote and assess resident SDL through understanding the process, person, and context of SDL.

  15. Learning scikit-learn machine learning in Python

    CERN Document Server

    Garreta, Raúl

    2013-01-01

    The book adopts a tutorial-based approach to introduce the user to Scikit-learn. If you are a programmer who wants to explore machine learning and data-based methods to build intelligent applications and enhance your programming skills, this is the book for you. No previous experience with machine-learning algorithms is required.
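    In the fit/predict tutorial spirit the book describes, here is a minimal scikit-learn example (a generic sketch on toy data of our own, not an example taken from the book):

```python
# Minimal scikit-learn usage: construct an estimator, fit it to labelled
# data, and predict labels for new points.
from sklearn.neighbors import KNeighborsClassifier

# toy 1-D dataset: class 0 below zero, class 1 above
X = [[-3.0], [-2.0], [-1.0], [1.0], [2.0], [3.0]]
y = [0, 0, 0, 1, 1, 1]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X, y)
pred = model.predict([[-2.5], [2.5]])   # classify two unseen points
```

    Nearly every Scikit-learn estimator follows this same fit/predict interface, which is what makes the library approachable for the tutorial format.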

  16. Code Modernization of VPIC

    Science.gov (United States)

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite the breakthroughs in the areas of mini-app development, portable performance, and cache-oblivious algorithms the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsics-based vectorisation with compiler-generated auto-vectorisation to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learnt. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  17. A new release of the S3M code

    International Nuclear Information System (INIS)

    Pavlovic, M.; Bokor, J.; Regodic, M.; Sagatova, A.

    2015-01-01

    This paper presents a new release of the code that contains some additional routines and advanced features of displaying the results. Special attention is paid to the processing of the SRIM range file, which was not included in the previous release of the code. Examples of distributions provided by the S3M code for implanted ions in thyroid and iron are presented. (authors)

  18. Writing robust C++ code for critical applications

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    **C++** is one of the most **complex**, expressive and powerful languages out there. However, its complexity makes it hard to write **robust** code. When using C++ to code **critical** applications, ensuring **reliability** is one of the key topics. Testing, debugging and profiling are all a major part of this kind of work. In the BE department we use C++ to write a big part of the controls system for beam operation, which implies putting a big focus on system stability and ensuring smooth operation. This talk will try to: - Highlight potential problems when writing C++ code, giving guidelines on writing defensive code that could have avoided such issues - Explain how to avoid common pitfalls (both in writing C++ code and at the debugging & profiling phase) - Showcase some tools and tricks useful to C++ development The attendees' proficiency in C++ should not be a concern. Anyone is free to join, even people that do not know C++, if only to learn the pitfalls a language may have. This may benefit f...

  19. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  20. PLASMOR: A laser-plasma simulation code. Pt. 2

    International Nuclear Information System (INIS)

    Salzman, D.; Krumbein, A.D.; Szichman, H.

    1987-06-01

    This report supplements a previous one which describes the PLASMOR hydrodynamics code. The present report documents the recent changes and additions made to the code. In particular, it describes two new subroutines for radiative preheat, a system of preprocessors that prepares the code before a run, a list of postprocessors that simulate experimental setups, and the basic data sets required to run PLASMOR. In the Appendix, a new computer-based manual which lists the main features of PLASMOR is reproduced.

  1. Overlaid Alice: a statistical model computer code including fission and preequilibrium models

    International Nuclear Information System (INIS)

    Blann, M.

    1976-01-01

    The most recent edition of an evaporation code, frequently updated and improved since it was originally written. This version replaces the version of Alice described previously. A brief summary is given of the types of calculations which can be done. A listing of the code and the results of several sample calculations are presented.

  2. Tensor Dictionary Learning for Positive Definite Matrices.

    Science.gov (United States)

    Sivalingam, Ravishankar; Boley, Daniel; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2015-11-01

    Sparse models have proven to be extremely successful in image processing and computer vision. However, a majority of the effort has been focused on sparse representation of vectors and low-rank models for general matrices. The success of sparse modeling, along with popularity of region covariances, has inspired the development of sparse coding approaches for these positive definite descriptors. While in earlier work, the dictionary was formed from all, or a random subset of, the training signals, it is clearly advantageous to learn a concise dictionary from the entire training set. In this paper, we propose a novel approach for dictionary learning over positive definite matrices. The dictionary is learned by alternating minimization between sparse coding and dictionary update stages, and different atom update methods are described. A discriminative version of the dictionary learning approach is also proposed, which simultaneously learns dictionaries for different classes in classification or clustering. Experimental results demonstrate the advantage of learning dictionaries from data both from reconstruction and classification viewpoints. Finally, a software library is presented comprising C++ binaries for all the positive definite sparse coding and dictionary learning approaches presented here.

  3. Expression of c-Fos in the rat retrosplenial cortex during instrumental re-learning of appetitive bar-pressing depends on the number of stages of previous training

    Science.gov (United States)

    Svarnik, Olga E.; Bulava, Alexandra I.; Alexandrov, Yuri I.

    2013-01-01

    Learning is known to be accompanied by induction of c-Fos expression in cortical neurons. However, not all neurons are involved in this process. What the c-Fos expression pattern depends on is still unknown. In the present work we studied whether and to what degree previous animal experience of Task 1 (the first phase of instrumental learning) influenced neuronal c-Fos expression in the retrosplenial cortex during acquisition of Task 2 (the second phase of instrumental learning). Animals were progressively shaped across days to bar-press for food at the left side of the experimental chamber (Task 1). This appetitive bar-pressing behavior was shaped by nine stages (“9 stages” group), five stages (“5 stages” group) or one intermediate stage (“1 stage” group). After all animals acquired the first skill and practiced it for five days, the bar and feeder on the left, familiar side of the chamber were inactivated, and the animals were allowed to learn a similar instrumental task at the opposite side of the chamber using another pair of a bar and a feeder (Task 2). The highest number of c-Fos positive neurons was found in the retrosplenial cortex of “1 stage” animals as compared to the other groups. The number of c-Fos positive neurons in “5 stages” group animals was significantly lower than in “1 stage” animals and significantly higher than in “9 stages” animals. The number of c-Fos positive neurons in the cortex of “9 stages” animals was significantly higher than in home caged control animals. At the same time, there were no significant differences between groups in such behavioral variables as the number of entries into the feeder or bar zones during Task 2 learning. Our results suggest that c-Fos expression in the retrosplenial cortex during Task 2 acquisition was influenced by the previous learning history. PMID:23847484

  4. Balanced and sparse Tamo-Barg codes

    KAUST Repository

    Halbawi, Wael; Duursma, Iwan; Dau, Hoang; Hassibi, Babak

    2017-01-01

    We construct balanced and sparse generator matrices for Tamo and Barg's Locally Recoverable Codes (LRCs). More specifically, for a cyclic Tamo-Barg code of length n, dimension k and locality r, we show how to deterministically construct a generator matrix where the number of nonzeros in any two columns differs by at most one, and where the weight of every row is d + r - 1, where d is the minimum distance of the code. Since LRCs are designed mainly for distributed storage systems, the results presented in this work provide a computationally balanced and efficient encoding scheme for these codes. The balanced property ensures that the computational effort exerted by any storage node is essentially the same, whilst the sparse property ensures that this effort is minimal. The work presented in this paper extends a similar result previously established for Reed-Solomon (RS) codes, where it is now known that any cyclic RS code possesses a generator matrix that is balanced as described, but is sparsest, meaning that each row has d nonzeros.
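The "balanced" and "sparse" properties described above are easy to state operationally: column weights differ pairwise by at most one, and every row has the same fixed weight. The sketch below checks these properties on a small made-up binary matrix (it is not an actual Tamo-Barg generator matrix; the matrix and helper names are illustrative only).

```python
# Hypothetical toy check of the "balanced" and "sparse" properties
# described in the abstract (not an actual Tamo-Barg code).
def column_weights(G):
    return [sum(1 for row in G if row[j] != 0) for j in range(len(G[0]))]

def row_weights(G):
    return [sum(1 for v in row if v != 0) for row in G]

def is_balanced(G):
    """Nonzeros in any two columns differ by at most one."""
    w = column_weights(G)
    return max(w) - min(w) <= 1

# Toy binary matrix: every row has weight 3; column weights are 1 or 2.
G = [[1, 1, 1, 0],
     [0, 1, 1, 1]]
print(is_balanced(G))     # prints True
print(row_weights(G))     # prints [3, 3]
```

For a true Tamo-Barg construction the constant row weight would equal d + r - 1, as stated in the abstract.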

  5. Balanced and sparse Tamo-Barg codes

    KAUST Repository

    Halbawi, Wael

    2017-08-29

    We construct balanced and sparse generator matrices for Tamo and Barg's Locally Recoverable Codes (LRCs). More specifically, for a cyclic Tamo-Barg code of length n, dimension k and locality r, we show how to deterministically construct a generator matrix where the number of nonzeros in any two columns differs by at most one, and where the weight of every row is d + r - 1, where d is the minimum distance of the code. Since LRCs are designed mainly for distributed storage systems, the results presented in this work provide a computationally balanced and efficient encoding scheme for these codes. The balanced property ensures that the computational effort exerted by any storage node is essentially the same, whilst the sparse property ensures that this effort is minimal. The work presented in this paper extends a similar result previously established for Reed-Solomon (RS) codes, where it is now known that any cyclic RS code possesses a generator matrix that is balanced as described, but is sparsest, meaning that each row has d nonzeros.

  6. Parallelization of Subchannel Analysis Code MATRA

    International Nuclear Information System (INIS)

    Kim, Seongjin; Hwang, Daehyun; Kwon, Hyouk

    2014-01-01

    A stand-alone calculation with the MATRA code takes an acceptable amount of computing time for thermal margin calculations, whereas a considerably longer time is needed to solve whole-core pin-by-pin problems. In addition, improving the computation speed of the MATRA code is strongly required to satisfy the overall performance needs of multi-physics coupling calculations. Therefore, a parallel approach to improve and optimize the computational performance of the MATRA code is proposed and verified in this study. The parallel algorithm is embodied in the MATRA code using the MPI communication method, and modification of the previous code structure was minimized. The improvement is confirmed by comparing the results between the single- and multiple-processor algorithms. The speedup and efficiency are also evaluated for increasing numbers of processors. The parallel algorithm was implemented in the subchannel code MATRA using MPI. The performance of the parallel algorithm was verified by comparing its results with those from MATRA run on a single processor. The performance of the MATRA code was greatly improved by implementing the parallel algorithm for the 1/8-core and whole-core problems.
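The speedup and efficiency evaluated in the abstract are the standard parallel-performance metrics S_p = T_1/T_p and E_p = S_p/p. The sketch below evaluates them on made-up timing numbers (these are not MATRA measurements):

```python
# Standard speedup/efficiency metrics referred to in the abstract,
# evaluated on hypothetical timings (not MATRA measurements).
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    return speedup(t_serial, t_parallel) / n_procs

t1 = 1000.0                                  # hypothetical 1-processor wall time
timings = {2: 520.0, 4: 280.0, 8: 160.0}     # hypothetical p-processor wall times
for p, tp in sorted(timings.items()):
    print(f"p={p}: speedup={speedup(t1, tp):.2f}, efficiency={efficiency(t1, tp, p):.2f}")
```

Efficiency below 1.0 reflects communication and load-imbalance overheads, which is why it typically falls as the processor count grows.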

  7. Distributed Learning, Recognition, and Prediction by ART and ARTMAP Neural Networks.

    Science.gov (United States)

    Carpenter, Gail A.

    1997-11-01

    A class of adaptive resonance theory (ART) models for learning, recognition, and prediction with arbitrarily distributed code representations is introduced. Distributed ART neural networks combine the stable fast learning capabilities of winner-take-all ART systems with the noise tolerance and code compression capabilities of multilayer perceptrons. With a winner-take-all code, the unsupervised model dART reduces to fuzzy ART and the supervised model dARTMAP reduces to fuzzy ARTMAP. With a distributed code, these networks automatically apportion learned changes according to the degree of activation of each coding node, which permits fast as well as slow learning without catastrophic forgetting. Distributed ART models replace the traditional neural network path weight with a dynamic weight equal to the rectified difference between coding node activation and an adaptive threshold. Thresholds increase monotonically during learning according to a principle of atrophy due to disuse. However, monotonic change at the synaptic level manifests itself as bidirectional change at the dynamic level, where the result of adaptation resembles long-term potentiation (LTP) for single-pulse or low frequency test inputs but can resemble long-term depression (LTD) for higher frequency test inputs. This paradoxical behavior is traced to dual computational properties of phasic and tonic coding signal components. A parallel distributed match-reset-search process also helps stabilize memory. Without the match-reset-search system, dART becomes a type of distributed competitive learning network.

  8. Speech and audio processing for coding, enhancement and recognition

    CERN Document Server

    Togneri, Roberto; Narasimha, Madihally

    2015-01-01

    This book describes the basic principles underlying the generation, coding, transmission and enhancement of speech and audio signals, including advanced statistical and machine learning techniques for speech and speaker recognition, with an overview of the key innovations in these areas. Key research undertaken in speech coding, speech enhancement, speech recognition, emotion recognition and speaker diarization is also presented, along with recent advances and new paradigms in these areas. · Offers readers a single-source reference on the significant applications of speech and audio processing to speech coding, speech enhancement and speech/speaker recognition; · Enables readers involved in algorithm development and implementation issues for speech coding to understand the historical development and future challenges in speech coding research; · Discusses speech coding methods yielding bit-streams that are multi-rate and scalable for Voice-over-IP (VoIP) networks; ...

  9. Automated searching for quantum subsystem codes

    International Nuclear Information System (INIS)

    Crosswhite, Gregory M.; Bacon, Dave

    2011-01-01

    Quantum error correction allows for faulty quantum systems to behave in an effectively error-free manner. One important class of techniques for quantum error correction is the class of quantum subsystem codes, which are relevant both to active quantum error-correcting schemes as well as to the design of self-correcting quantum memories. Previous approaches for investigating these codes have focused on applying theoretical analysis to look for interesting codes and to investigate their properties. In this paper we present an alternative approach that uses computational analysis to accomplish the same goals. Specifically, we present an algorithm that computes the optimal quantum subsystem code that can be implemented given an arbitrary set of measurement operators that are tensor products of Pauli operators. We then demonstrate the utility of this algorithm by performing a systematic investigation of the quantum subsystem codes that exist in the setting where the interactions are limited to two-body interactions between neighbors on lattices derived from the convex uniform tilings of the plane.
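Any computational search over codes built from tensor products of Pauli operators rests on a simple primitive: two Pauli strings commute if and only if they anticommute on an even number of tensor factors. The helper below is a hypothetical illustration of that primitive, not the paper's algorithm:

```python
# Basic ingredient of a computational search over Pauli operators
# (hypothetical helper, not the algorithm from the paper): two tensor
# products of Paulis commute iff they anticommute on an even number
# of tensor factors.
def commutes(p, q):
    """p, q: Pauli strings over 'IXYZ' of equal length."""
    anti = sum(1 for a, b in zip(p, q)
               if a != 'I' and b != 'I' and a != b)
    return anti % 2 == 0

print(commutes("XX", "ZZ"))   # prints True  (anticommute on both factors)
print(commutes("XI", "ZI"))   # prints False (anticommute on one factor)
```

Checks like this let a search algorithm classify a set of measurement operators into commuting and non-commuting pairs before extracting the gauge and logical structure of a subsystem code.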

  10. Influence of Action-Effect Associations Acquired by Ideomotor Learning on Imitation

    Science.gov (United States)

    Bunlon, Frédérique; Marshall, Peter J.; Quandt, Lorna C.; Bouquet, Cedric A.

    2015-01-01

    According to the ideomotor theory, actions are represented in terms of their perceptual effects, offering a solution for the correspondence problem of imitation (how to translate the observed action into a corresponding motor output). This effect-based coding of action is assumed to be acquired through action-effect learning. Accordingly, performing an action leads to the integration of the perceptual codes of the action effects with the motor commands that brought them about. While ideomotor theory is invoked to account for imitation, the influence of action-effect learning on imitative behavior remains unexplored. In two experiments, imitative performance was measured in a reaction time task following a phase of action-effect acquisition. During action-effect acquisition, participants freely executed a finger movement (index or little finger lifting), and then observed a similar (compatible learning) or a different (incompatible learning) movement. In Experiment 1, finger movements of left and right hands were presented as action-effects during acquisition. In Experiment 2, only right-hand finger movements were presented during action-effect acquisition and in the imitation task the observed hands were oriented orthogonally to participants’ hands in order to avoid spatial congruency effects. Experiments 1 and 2 showed that imitative performance was improved after compatible learning, compared to incompatible learning. In Experiment 2, although action-effect learning involved perception of finger movements of right hand only, imitative capabilities of right- and left-hand finger movements were equally affected. These results indicate that an observed movement stimulus processed as the effect of an action can later prime execution of that action, confirming the ideomotor approach to imitation. We further discuss these findings in relation to previous studies of action-effect learning and in the framework of current ideomotor approaches to imitation. PMID:25793755

  11. Multimedia distribution using network coding on the iphone platform

    DEFF Research Database (Denmark)

    Vingelmann, Peter; Pedersen, Morten Videbæk; Fitzek, Frank

    2010-01-01

    This paper looks into the implementation details of random linear network coding on the Apple iPhone and iPod Touch mobile platforms for multimedia distribution. Previous implementations of network coding on this platform failed to achieve a throughput which is sufficient to saturate the WLAN...
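In random linear network coding, a sender or relay transmits random linear combinations of the source packets, and any receiver that collects enough independent combinations recovers the originals by Gaussian elimination. The sketch below illustrates the principle over GF(2) with toy packets (the paper itself concerns the efficiency of such coding on iPhone/iPod Touch hardware, not this code):

```python
# Illustrative sketch of random linear network coding over GF(2)
# (toy data; not the paper's implementation).
def combine(coeffs, packets):
    """XOR together the packets selected by a binary coefficient vector."""
    mix = [0] * len(packets[0])
    for c, pkt in zip(coeffs, packets):
        if c:
            mix = [a ^ b for a, b in zip(mix, pkt)]
    return mix

def decode(coded, k):
    """Gaussian elimination over GF(2); recovers the k original packets."""
    rows = [(list(c), list(p)) for c, p in coded]
    for col in range(k):
        pivot = next(i for i in range(col, len(rows)) if rows[i][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           [a ^ b for a, b in zip(rows[i][1], rows[col][1])])
    return [rows[i][1] for i in range(k)]

packets = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]   # k = 3 source packets
# Coefficient vectors a relay might draw at random; any 3 independent ones suffice.
coeffs = [[1, 0, 0], [1, 1, 0], [1, 1, 1], [0, 1, 1]]
coded = [(c, combine(c, packets)) for c in coeffs]
print(decode(coded, 3) == packets)   # prints True
```

Practical implementations typically work over GF(2^8) and batch the XOR/multiply operations, which is where the throughput bottlenecks discussed in the abstract arise.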

  12. Evolving a Dynamic Predictive Coding Mechanism for Novelty Detection

    OpenAIRE

    Haggett, Simon J.; Chu, Dominique; Marshall, Ian W.

    2007-01-01

    Novelty detection is a machine learning technique which identifies new or unknown information in data sets. We present our current work on the construction of a new novelty detector based on a dynamical version of predictive coding. We compare three evolutionary algorithms, a simple genetic algorithm, NEAT and FS-NEAT, for the task of optimising the structure of an illustrative dynamic predictive coding neural network to improve its performance over stimuli from a number of artificially gener...

  13. Case studies in Gaussian process modelling of computer codes

    International Nuclear Information System (INIS)

    Kennedy, Marc C.; Anderson, Clive W.; Conti, Stefano; O'Hagan, Anthony

    2006-01-01

    In this paper we present a number of recent applications in which an emulator of a computer code is created using a Gaussian process model. Tools are then applied to the emulator to perform sensitivity analysis and uncertainty analysis. Sensitivity analysis is used both as an aid to model improvement and as a guide to how much the output uncertainty might be reduced by learning about specific inputs. Uncertainty analysis allows us to reflect output uncertainty due to unknown input parameters, when the finished code is used for prediction. The computer codes themselves are currently being developed within the UK Centre for Terrestrial Carbon Dynamics

  14. Symbol manipulation and rule learning in spiking neuronal networks.

    Science.gov (United States)

    Fernando, Chrisantha

    2011-04-21

    It has been claimed that the productivity, systematicity and compositionality of human language and thought necessitate the existence of a physical symbol system (PSS) in the brain. Recent discoveries about temporal coding suggest a novel type of neuronal implementation of a physical symbol system. Furthermore, learning classifier systems provide a plausible algorithmic basis by which symbol re-write rules could be trained to undertake behaviors exhibiting systematicity and compositionality, using a kind of natural selection of re-write rules in the brain. We show how the core operation of a learning classifier system, namely the replication with variation of symbol re-write rules, can be implemented using supervised learning based on spike-timing-dependent plasticity. As a whole, the aim of this paper is to integrate an algorithmic and an implementation level description of a neuronal symbol system capable of sustaining systematic and compositional behaviors. Previously proposed neuronal implementations of symbolic representations are compared with this new proposal. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. User Manual for the NASA Glenn Ice Accretion Code LEWICE: Version 2.0

    Science.gov (United States)

    Wright, William B.

    1999-01-01

    A research project is underway at NASA Glenn to produce a computer code which can accurately predict ice growth under a wide range of meteorological conditions for any aircraft surface. This report presents a description of the code inputs and outputs from version 2.0 of this code, which is called LEWICE. This version differs from previous releases in its robustness and its ability to reproduce results accurately for different spacing and time step criteria across computing platforms. It also differs in the extensive effort undertaken to compare the results against the database of ice shapes which have been generated in the NASA Glenn Icing Research Tunnel (IRT). This report describes only the features of the code related to the use of the program; it does not describe the inner workings of the code or the physical models used. That information is available in the form of several unpublished documents which will be collectively referred to as a Programmers Manual for LEWICE in this report. These reports are intended as an update/replacement for all previous user manuals of LEWICE. In addition to describing the changes and improvements made for this version, information from previous manuals may be duplicated so that the user will not need to consult previous manuals to use this code.

  16. Learning during Processing: Word Learning Doesn't Wait for Word Recognition to Finish

    Science.gov (United States)

    Apfelbaum, Keith S.; McMurray, Bob

    2017-01-01

    Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed…

  17. Use of ICD-10 codes to monitor uterine rupture

    DEFF Research Database (Denmark)

    Thisted, Dorthe L A; Mortensen, Laust Hvas; Hvidman, Lone

    2014-01-01

    OBJECTIVES: Uterine rupture is a rare but severe complication in pregnancies after a previous cesarean section. In Denmark, the monitoring of uterine rupture is based on reporting of relevant diagnostic codes to the Danish Medical Birth Registry (MBR). The aim of our study was to examine the validity of these codes. ... uterine ruptures, the sensitivity and specificity of the codes for uterine rupture were 83.8% and 99.1%, respectively. CONCLUSION: During the study period the monitoring of uterine rupture in the MBR was inadequate.
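Sensitivity and specificity are computed from the 2x2 classification of coded versus true ruptures: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP). The counts below are hypothetical, chosen only so the formulas reproduce the percentages reported in the abstract; they are not the actual Danish registry data.

```python
# Standard validity measures used in the abstract. The 2x2 counts are
# hypothetical, chosen only to reproduce the reported 83.8% and 99.1%
# (they are not the actual Danish registry data).
def sensitivity(tp, fn):
    return tp / (tp + fn)      # fraction of true ruptures correctly coded

def specificity(tn, fp):
    return tn / (tn + fp)      # fraction of non-ruptures correctly not coded

tp, fn = 83, 16                # coded / missed ruptures (hypothetical)
tn, fp = 991, 9                # correctly uncoded / falsely coded (hypothetical)
print(round(100 * sensitivity(tp, fn), 1))   # prints 83.8
print(round(100 * specificity(tn, fp), 1))   # prints 99.1
```

Note that with a rare outcome such as uterine rupture, even a high specificity can translate into many false-positive codes relative to true cases, which is one reason registry validation studies report both measures.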

  18. The evolution of the mitochondrial genetic code in arthropods revisited.

    Science.gov (United States)

    Abascal, Federico; Posada, David; Zardoya, Rafael

    2012-04-01

    A variant of the invertebrate mitochondrial genetic code was previously identified in arthropods (Abascal et al. 2006a, PLoS Biol 4:e127) in which, instead of translating the AGG codon as serine, as in other invertebrates, some arthropods translate AGG as lysine. Here, we revisit the evolution of the genetic code in arthropods taking into account that (1) the number of arthropod mitochondrial genomes sequenced has tripled since the original findings were published; (2) the phylogeny of arthropods has been recently resolved with confidence for many groups; and (3) sophisticated probabilistic methods can be applied to analyze the evolution of the genetic code in arthropod mitochondria. According to our analyses, evolutionary shifts in the genetic code have been more common than previously inferred, with many taxonomic groups displaying two alternative codes. Ancestral character-state reconstruction using probabilistic methods confirmed that the arthropod ancestor most likely translated AGG as lysine. Point mutations at tRNA-Lys and tRNA-Ser correlated with the meaning of the AGG codon. In addition, we identified three variables (GC content, number of AGG codons, and taxonomic information) that best explain the use of each of the two alternative genetic codes.

  19. Alternatively Constrained Dictionary Learning For Image Superresolution.

    Science.gov (United States)

    Lu, Xiaoqiang; Yuan, Yuan; Yan, Pingkun

    2014-03-01

    Dictionaries are crucial in sparse coding-based algorithms for image superresolution. Sparse coding is a typical unsupervised learning method to study the relationship between the patches of high- and low-resolution images. However, most of the sparse coding methods for image superresolution fail to simultaneously consider the geometrical structure of the dictionary and the corresponding coefficients, which may result in noticeable superresolution reconstruction artifacts. In other words, when a low-resolution image and its corresponding high-resolution image are represented in their feature spaces, the two sets of dictionaries and the obtained coefficients have intrinsic links, which has not yet been well studied. Motivated by the development on nonlocal self-similarity and manifold learning, a novel sparse coding method is reported to preserve the geometrical structure of the dictionary and the sparse coefficients of the data. Moreover, the proposed method can preserve the incoherence of dictionary entries and provide the sparse coefficients and learned dictionary from a new perspective, which have both reconstruction and discrimination properties to enhance the learning performance. Furthermore, to utilize the model of the proposed method more effectively for single-image superresolution, this paper also proposes a novel dictionary-pair learning method, which is named as two-stage dictionary training. Extensive experiments are carried out on a large set of images comparing with other popular algorithms for the same purpose, and the results clearly demonstrate the effectiveness of the proposed sparse representation model and the corresponding dictionary learning algorithm.

  20. ClinicalCodes: an online clinical codes repository to improve the validity and reproducibility of research using electronic medical records.

    Science.gov (United States)

    Springate, David A; Kontopantelis, Evangelos; Ashcroft, Darren M; Olier, Ivan; Parisi, Rosa; Chamapiwa, Edmore; Reeves, David

    2014-01-01

    Lists of clinical codes are the foundation for research undertaken using electronic medical records (EMRs). If clinical code lists are not available, reviewers are unable to determine the validity of research, full study replication is impossible, researchers are unable to make effective comparisons between studies, and the construction of new code lists is subject to much duplication of effort. Despite this, the publication of clinical codes is rarely if ever a requirement for obtaining grants, validating protocols, or publishing research. In a representative sample of 450 EMR primary research articles indexed on PubMed, we found that only 19 (5.1%) were accompanied by a full set of published clinical codes and 32 (8.6%) stated that code lists were available on request. To help address these problems, we have built an online repository where researchers using EMRs can upload and download lists of clinical codes. The repository will enable clinical researchers to better validate EMR studies, build on previous code lists and compare disease definitions across studies. It will also assist health informaticians in replicating database studies, tracking changes in disease definitions or clinical coding practice through time and sharing clinical code information across platforms and data sources as research objects.

  1. Efficient convolutional sparse coding

    Science.gov (United States)

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
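The efficiency gain rests on a single fact: circular convolution is diagonalized by the DFT, so the main ADMM linear system decouples into independent scalar equations per frequency. The toy sketch below (made-up signal and filter, single-filter case only; not the paper's full algorithm) solves (DᵀD + ρI)x = b in the frequency domain and verifies the result:

```python
import numpy as np

# Toy illustration of the key trick in the abstract (signal and filter are
# made up): circular convolution is diagonal in the DFT basis, so the main
# ADMM linear system is solved per frequency in O(N log N).
rng = np.random.default_rng(0)
N = 8
d = rng.standard_normal(N)   # one dictionary filter (zero-padded to length N)
b = rng.standard_normal(N)   # right-hand side of the ADMM subproblem
rho = 1.0

# Solve (D^T D + rho I) x = b where D applies circular convolution with d:
# in the frequency domain the operator is the diagonal |D_hat|^2 + rho.
Dh = np.fft.fft(d)
x = np.real(np.fft.ifft(np.fft.fft(b) / (np.abs(Dh) ** 2 + rho)))

# Verify: applying D^T D + rho I to x (again via FFTs) reproduces b.
DtDx = np.real(np.fft.ifft(np.abs(Dh) ** 2 * np.fft.fft(x)))
print(np.allclose(DtDx + rho * x, b))   # prints True
```

With M filters the frequency-domain system is no longer scalar per frequency, but it remains small and structured, which is what yields the O(MN log N) cost quoted in the abstract.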

  2. Seismic Signal Compression Using Nonparametric Bayesian Dictionary Learning via Clustering

    Directory of Open Access Journals (Sweden)

    Xin Tian

    2017-06-01

    We introduce a seismic signal compression method based on nonparametric Bayesian dictionary learning via clustering. The seismic data is compressed patch by patch, and the dictionary is learned online. Clustering is introduced for dictionary learning: a set of dictionaries is generated, and each dictionary is used for sparse coding within one cluster. In this way, the signals in one cluster can be well represented by their corresponding dictionary. A nonparametric Bayesian dictionary learning method is used to learn the dictionaries, which naturally infers an appropriate dictionary size for each cluster. A uniform quantizer and an adaptive arithmetic coding algorithm are adopted to code the sparse coefficients. In comparisons with other state-of-the-art approaches, the effectiveness of the proposed method is validated in the experiments.

  3. Optix: A Monte Carlo scintillation light transport code

    Energy Technology Data Exchange (ETDEWEB)

    Safari, M.J., E-mail: mjsafari@aut.ac.ir [Department of Energy Engineering and Physics, Amir Kabir University of Technology, PO Box 15875-4413, Tehran (Iran, Islamic Republic of); Afarideh, H. [Department of Energy Engineering and Physics, Amir Kabir University of Technology, PO Box 15875-4413, Tehran (Iran, Islamic Republic of); Ghal-Eh, N. [School of Physics, Damghan University, PO Box 36716-41167, Damghan (Iran, Islamic Republic of); Davani, F. Abbasi [Nuclear Engineering Department, Shahid Beheshti University, PO Box 1983963113, Tehran (Iran, Islamic Republic of)

    2014-02-11

    The paper reports on the capabilities of Monte Carlo scintillation light transport code Optix, which is an extended version of previously introduced code Optics. Optix provides the user a variety of both numerical and graphical outputs with a very simple and user-friendly input structure. A benchmarking strategy has been adopted based on the comparison with experimental results, semi-analytical solutions, and other Monte Carlo simulation codes to verify various aspects of the developed code. Besides, some extensive comparisons have been made against the tracking abilities of general-purpose MCNPX and FLUKA codes. The presented benchmark results for the Optix code exhibit promising agreements. -- Highlights: • Monte Carlo simulation of scintillation light transport in 3D geometry. • Evaluation of angular distribution of detected photons. • Benchmark studies to check the accuracy of Monte Carlo simulations.
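Scintillation light transport codes of this kind repeatedly sample two quantities per photon: an isotropic emission direction and an exponentially distributed free path between interactions. The sketch below is a generic illustration of those sampling steps (it is not code from Optix, and the parameter values are made up):

```python
import math
import random

# Generic sketch of the basic sampling steps in Monte Carlo optical-photon
# transport (illustrative only, not code from Optix).
def isotropic_direction(rng):
    mu = 2.0 * rng.random() - 1.0              # cos(theta), uniform on [-1, 1]
    phi = 2.0 * math.pi * rng.random()
    s = math.sqrt(1.0 - mu * mu)
    return (s * math.cos(phi), s * math.sin(phi), mu)

def free_path(mean_free_path, rng):
    # Inverse-CDF sampling of an exponential path-length distribution
    return -mean_free_path * math.log(1.0 - rng.random())

rng = random.Random(42)
paths = [free_path(10.0, rng) for _ in range(100000)]
print(round(sum(paths) / len(paths), 2))   # sample mean, close to 10.0
```

A full code such as Optix adds geometry tracking, boundary reflection/refraction, and detection at photosensitive surfaces on top of these primitives, which is what the benchmarking in the abstract exercises.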

  4. Tangent: Automatic Differentiation Using Source Code Transformation in Python

    OpenAIRE

    van Merriënboer, Bart; Wiltschko, Alexander B.; Moldovan, Dan

    2017-01-01

    Automatic differentiation (AD) is an essential primitive for machine learning programming systems. Tangent is a new library that performs AD using source code transformation (SCT) in Python. It takes numeric functions written in a syntactic subset of Python and NumPy as input, and generates new Python functions which calculate a derivative. This approach to automatic differentiation is different from existing packages popular in machine learning, such as TensorFlow and Autograd. Advantages ar...
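Tangent's distinguishing feature is that it emits new Python source for the derivative. As a minimal flavor of the AD primitive it automates, the sketch below implements forward-mode AD with dual numbers; this is an illustration of automatic differentiation in general, not of Tangent's source-code-transformation approach, and the `Dual` class is a hypothetical helper.

```python
# Minimal forward-mode automatic differentiation via dual numbers
# (illustrative only; NOT how Tangent works internally).
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot   # value and derivative carried together
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f at a dual number seeded with derivative 1 to get f'(x)."""
    return f(Dual(x, 1.0)).dot

f = lambda x: 3 * x * x + 2 * x + 1     # f'(x) = 6x + 2
print(derivative(f, 4.0))               # prints 26.0
```

Source-code transformation instead produces a separate derivative function ahead of time, which avoids the per-operation overhead of operator overloading; that trade-off is the motivation the abstract contrasts with packages like TensorFlow and Autograd.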

  5. Aids to Computer-Based Multimedia Learning.

    Science.gov (United States)

    Mayer, Richard E.; Moreno, Roxana

    2002-01-01

    Presents a cognitive theory of multimedia learning that draws on dual coding theory, cognitive load theory, and constructivist learning theory and derives some principles of instructional design for fostering multimedia learning. These include principles of multiple representation, contiguity, coherence, modality, and redundancy. (SLD)

  6. Temporal-pattern learning in neural models

    CERN Document Server

    Genís, Carme Torras

    1985-01-01

    While the ability of animals to learn rhythms is an unquestionable fact, the underlying neurophysiological mechanisms are still no more than conjectures. This monograph explores the requirements of such mechanisms, reviews those previously proposed and postulates a new one based on a direct electric coding of stimulation frequencies. Experimental support for the option taken is provided both at the single neuron and neural network levels. More specifically, the material presented divides naturally into four parts: a description of the experimental and theoretical framework where this work becomes meaningful (Chapter 2), a detailed specification of the pacemaker neuron model proposed together with its validation through simulation (Chapter 3), an analytic study of the behavior of this model when submitted to rhythmic stimulation (Chapter 4) and a description of the neural network model proposed for learning, together with an analysis of the simulation results obtained when varying several factors r...

  7. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes

    International Nuclear Information System (INIS)

    Etienne, Zachariah B; Paschalidis, Vasileios; Haas, Roland; Mösta, Philipp; Shapiro, Stuart L

    2015-01-01

    In the extreme violence of merger and mass accretion, compact objects like black holes and neutron stars are thought to launch some of the most luminous outbursts of electromagnetic and gravitational wave energy in the Universe. Modeling these systems realistically is a central problem in theoretical astrophysics, but has proven extremely challenging, requiring the development of numerical relativity codes that solve Einstein's equations for the spacetime, coupled to the equations of general relativistic (ideal) magnetohydrodynamics (GRMHD) for the magnetized fluids. Over the past decade, the Illinois numerical relativity (ILNR) group's dynamical spacetime GRMHD code has proven itself as a robust and reliable tool for theoretical modeling of such GRMHD phenomena. However, the code was written ‘by experts and for experts’ of the code, with a steep learning curve that would severely hinder community adoption if it were open-sourced. Here we present IllinoisGRMHD, which is an open-source, highly extensible rewrite of the original closed-source GRMHD code of the ILNR group. Reducing the learning curve was the primary focus of this rewrite, with the goal of facilitating community involvement in the code's use and development, as well as the minimization of human effort in generating new science. IllinoisGRMHD also saves computer time, generating roundoff-precision identical output to the original code on adaptive-mesh grids, but nearly twice as fast at scales of hundreds to thousands of cores. (paper)

  8. Cavitation Modeling in Euler and Navier-Stokes Codes

    Science.gov (United States)

    Deshpande, Manish; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    Many previous researchers have modeled sheet cavitation by means of a constant-pressure solution in the cavity region coupled with a velocity potential formulation for the outer flow. The present paper discusses the issues involved in extending these cavitation models to Euler or Navier-Stokes codes. The approach taken is to start from a velocity potential model, to ensure our results are compatible with those of previous researchers and available experimental data, and then to implement this model in both Euler and Navier-Stokes codes. The model is then augmented in the Navier-Stokes code by the inclusion of the energy equation, which allows the effect of subcooling in the vicinity of the cavity interface to be modeled, accounting for the experimentally observed reduction in cavity pressures that occurs in cryogenic fluids such as liquid hydrogen. Although our goal is to assess the practicality of implementing these cavitation models in existing three-dimensional turbomachinery codes, the emphasis in the present paper centers on two-dimensional computations, most specifically isolated airfoils and cascades. Comparisons between the velocity potential, Euler and Navier-Stokes implementations indicate that they all produce consistent predictions. Comparisons with experimental results also indicate that the predictions are qualitatively correct and give a reasonable first estimate of sheet cavitation effects in both cryogenic and non-cryogenic fluids. The impact on CPU time and the code modifications required suggest that these models are appropriate for incorporation in current-generation turbomachinery codes.

  9. MELCOR Accident Consequence Code System (MACCS)

    International Nuclear Information System (INIS)

    Rollstin, J.A.; Chanin, D.I.; Jow, H.N.

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previously used CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projections, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. Volume I, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems. Volume II, the Model Description, describes the underlying models that are implemented in the code, and Volume III, the Programmer's Reference Manual, describes the code's structure and database management

  10. MELCOR Accident Consequence Code System (MACCS)

    Energy Technology Data Exchange (ETDEWEB)

    Jow, H.N.; Sprung, J.L.; Ritchie, L.T. (Sandia National Labs., Albuquerque, NM (USA)); Rollstin, J.A. (GRAM, Inc., Albuquerque, NM (USA)); Chanin, D.I. (Technadyne Engineering Consultants, Inc., Albuquerque, NM (USA))

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previously used CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. Volume I, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems. Volume II, the Model Description, describes the underlying models that are implemented in the code, and Volume III, the Programmer's Reference Manual, describes the code's structure and database management. 59 refs., 14 figs., 15 tabs.

  11. MELCOR Accident Consequence Code System (MACCS)

    International Nuclear Information System (INIS)

    Jow, H.N.; Sprung, J.L.; Ritchie, L.T.; Rollstin, J.A.; Chanin, D.I.

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previously used CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. Volume I, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems. Volume II, the Model Description, describes the underlying models that are implemented in the code, and Volume III, the Programmer's Reference Manual, describes the code's structure and database management. 59 refs., 14 figs., 15 tabs

  12. Double-digit coding of examination math problems

    Directory of Open Access Journals (Sweden)

    Agnieszka Sułowska

    2013-09-01

    Full Text Available Various methods are used worldwide to evaluate student solutions to examination tasks. Usually the results simply provide information about student competency and, after aggregation, are also used as a tool for making comparisons between schools. In particular, the standard evaluation methods do not allow conclusions to be drawn about possible improvements to teaching methods. There are, however, task assessment methods which allow description not only of student achievement but also of possible causes of failure. One such method, which can be applied to extended-response tasks, is double-digit coding, which has been used in some international educational research. This paper presents the first Polish experiences of applying this method to examination tasks in mathematics, using a special coding key to carry out the evaluation. Lessons learned during the coding key construction and its application in the assessment process are described.

  13. Adaptable recursive binary entropy coding technique

    Science.gov (United States)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2002-07-01

    We present a novel data compression technique, called recursive interleaved entropy coding, that is based on recursive interleaving of variable-to-variable-length binary source codes. A compression module implementing this technique has the same functionality as arithmetic coding and can be used as the engine in various data compression algorithms. The encoder compresses a bit sequence by recursively encoding groups of bits that have similar estimated statistics, ordering the output in a way that is suited to the decoder. As a result, the decoder has low complexity. The encoding process for our technique is adaptable in that each bit to be encoded has an associated probability-of-zero estimate that may depend on previously encoded bits; this adaptability allows more effective compression. Recursive interleaved entropy coding may have advantages over arithmetic coding, including most notably the admission of a simple and fast decoder. Much variation is possible in the choice of component codes and in the interleaving structure, yielding coder designs of varying complexity and compression efficiency; coder designs that achieve arbitrarily small redundancy can be produced. We discuss coder design and performance estimation methods. We present practical encoding and decoding algorithms, as well as measured performance results.
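
    The adaptive modelling step described above (a per-bit probability-of-zero estimate that depends on previously encoded bits) can be sketched independently of the interleaving machinery. The example below is a hypothetical illustration, not the paper's coder: it uses a Laplace-smoothed counter as the estimator and sums the ideal code length -log2(p) that an entropy coder could approach:

```python
import math

# Adaptive probability modelling sketch: each bit's probability-of-zero
# estimate is computed from the bits seen so far (Laplace smoothing),
# and we accumulate the ideal information cost -log2(p) per bit.

def ideal_code_length(bits):
    zeros, total, length = 0, 0, 0.0
    for b in bits:
        p0 = (zeros + 1) / (total + 2)   # estimate depends on earlier bits
        p = p0 if b == 0 else 1.0 - p0
        length += -math.log2(p)          # ideal cost of coding this bit
        zeros += (b == 0)
        total += 1
    return length

bits = [0] * 90 + [1] * 10               # skewed source: mostly zeros
print(ideal_code_length(bits))           # well under 100 bits
```

    On the skewed source the adaptive estimate drives the ideal cost close to the source entropy, which is the compression a well-designed coder of this family approaches.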

  14. Improving coding accuracy in an academic practice.

    Science.gov (United States)

    Nguyen, Dana; O'Mara, Heather; Powell, Robert

    2017-01-01

    Practice management has become an increasingly important component of graduate medical education. This applies to every practice environment: private, academic, and military. One of the most critical aspects of practice management is documentation and coding for physician services, as they directly affect the financial success of any practice. Our quality improvement project aimed to implement a new and innovative method for teaching billing and coding in a longitudinal fashion in a family medicine residency. We hypothesized that implementation of a new teaching strategy would increase coding accuracy rates among residents and faculty. Design: single group, pretest-posttest. Setting: military family medicine residency clinic. Study population: 7 faculty physicians and 18 resident physicians participated as learners in the project. Educational intervention: monthly structured coding learning sessions in the academic curriculum that involved learner-presented cases, small-group case review, and large-group discussion. Outcome measures: overall coding accuracy (compliance) percentage and coding accuracy per year group for the subjects who were able to participate longitudinally. Statistical tests used: average coding accuracy for the population; paired t test to assess improvement between the 2 intervention periods, both aggregate and by year group. Overall coding accuracy rates remained stable over the course of time regardless of the modality of the educational intervention. A paired t test was conducted to compare coding accuracy rates at baseline (mean (M) = 26.4%, SD = 10%) to accuracy rates after all educational interventions were complete (M = 26.8%, SD = 12%); t(24) = -0.127, P = .90. Didactic teaching and small-group discussion sessions did not improve overall coding accuracy in a residency practice. Future interventions could focus on educating providers at the individual level.

  15. Two-Point Codes for the Generalised GK curve

    DEFF Research Database (Denmark)

    Barelli, Élise; Beelen, Peter; Datta, Mrinmoy

    2017-01-01

    We improve previously known lower bounds for the minimum distance of certain two-point AG codes constructed using a Generalized Giulietti–Korchmaros curve (GGK). Castellanos and Tizziotti recently described such bounds for two-point codes coming from the Giulietti–Korchmaros curve (GK). Our results completely cover and in many cases improve on their results, using different techniques, while also supporting any GGK curve. Our method builds on the order bound for AG codes: to enable this, we study certain Weierstrass semigroups. This allows an efficient algorithm for computing our improved bounds.

  16. On Identifying which Intermediate Nodes Should Code in Multicast Networks

    DEFF Research Database (Denmark)

    Pinto, Tiago; Roetter, Daniel Enrique Lucani; Médard, Muriel

    2013-01-01

    ...the data packets. Previous work has shown that in lossless wireline networks, the performance of tree-packing mechanisms is comparable to network coding, albeit with added complexity at the time of computing the trees. This means that most nodes in the network need not code. Thus, mechanisms that identify intermediate nodes that do require coding are instrumental for the efficient operation of coded networks and can have a significant impact on overall energy consumption. We present a distributed, low-complexity algorithm that allows every node to identify if it should code and, if so, through what output link...

  17. Interactive QR code beautification with full background image embedding

    Science.gov (United States)

    Lin, Lijian; Wu, Song; Liu, Sijiang; Jiang, Bo

    2017-06-01

    QR (Quick Response) code is a kind of two-dimensional barcode that was first developed in the automotive industry. Nowadays, QR codes are widely used in commercial applications such as product promotion, mobile payment, and product information management. Traditional QR codes that follow the international standard are reliable and fast to decode, but they lack the aesthetic appeal needed to present visual information to customers. In this work, we present a novel interactive method to generate aesthetic QR codes. Given the information to be encoded and an image to be used as the full QR code background, our method accepts a user's interactive strokes as hints to remove undesired parts of the QR code modules, relying on the QR code error-correction mechanism and on background color thresholds. Compared to previous approaches, our method follows the intention of the QR code designer and thus achieves a more pleasing result while keeping high machine readability.

  18. Student perception of travel service learning experience in Morocco.

    Science.gov (United States)

    Puri, Aditi; Kaddoura, Mahmoud; Dominick, Christine

    2013-08-01

    This study explores the perceptions of health profession students participating in academic service learning in Morocco with respect to adapting health care practices to cultural diversity. Authors utilized semi-structured, open-ended interviews to explore the perceptions of health profession students. Nine dental hygiene and nursing students who traveled to Morocco to provide oral and general health services were interviewed. After interviews were recorded, they were transcribed verbatim to ascertain descriptive validity and to generate inductive and deductive codes that constitute the major themes of the data analysis. Thereafter, NVIVO 8 was used to rapidly determine the frequency of applied codes. The authors compared the codes and themes to establish interpretive validity. Codes and themes were initially determined independently by co-authors and applied to the data subsequently. The authors compared the applied codes to establish intra-rater reliability. International service learning experiences led to perceptions of growth as a health care provider among students. The application of knowledge and skills learned in academic programs and service learning settings were found to help in bridging the theory-practice gap. The specific experience enabled students to gain an understanding of diverse health care and cultural practices in Morocco. Students perceived that the experience gained in international service learning can heighten awareness of diverse cultural and health care practices to foster professional growth of health professionals.

  19. MELCOR Accident Consequence Code System (MACCS)

    International Nuclear Information System (INIS)

    Chanin, D.I.; Sprung, J.L.; Ritchie, L.T.; Jow, Hong-Nian

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previous CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. This document, Volume 1, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems.

  20. MELCOR Accident Consequence Code System (MACCS)

    Energy Technology Data Exchange (ETDEWEB)

    Chanin, D.I. (Technadyne Engineering Consultants, Inc., Albuquerque, NM (USA)); Sprung, J.L.; Ritchie, L.T.; Jow, Hong-Nian (Sandia National Labs., Albuquerque, NM (USA))

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previous CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. This document, Volume 1, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems.

  1. Professionals learning together with patients: An exploratory study of a collaborative learning Fellowship programme for healthcare improvement.

    Science.gov (United States)

    Myron, Rowan; French, Catherine; Sullivan, Paul; Sathyamoorthy, Ganesh; Barlow, James; Pomeroy, Linda

    2018-05-01

    Improving the quality of healthcare involves collaboration between many different stakeholders. Collaborative learning theory suggests that teaching different professional groups alongside each other may enable them to develop skills in how to collaborate effectively, but there is little literature on how this works in practice. Further, though it is recognised that patients play a fundamental role in quality improvement, there are few examples of patients learning together with professionals. To contribute to addressing this gap, we review a collaborative fellowship in Northwest London, designed to build capacity to improve healthcare, which enabled patients and professionals to learn together. Using the lens of collaborative learning, we conducted an exploratory study of six cohorts of the year-long programme (71 participants). Data were collected using open-text responses from an online survey (n = 31) and semi-structured interviews (n = 34) and analysed using an inductive open coding approach. The collaborative design of the Fellowship, which included bringing multiple perspectives to discussions of real-world problems, was valued by participants, who reflected on the safe, egalitarian space created by the programme. Participants (healthcare professionals and patients) found this way of learning initially challenging yet ultimately productive. Despite the pedagogical and practical challenges of developing a collaborative programme, this study indicates that opening up previously restricted learning opportunities as widely as possible, to include patients and carers, is an effective mechanism to develop collaborative skills for quality improvement.

  2. Beyond blended learning! Undiscovered potentials for e-learning in organizational learning

    DEFF Research Database (Denmark)

    Bang, Jørgen; Dalsgaard, Christian; Kjær, Arne

    2007-01-01

    The basic question raised in this article is: Is pure e-learning able to support learning in organizations better today than 4-5 years ago? Based on two case studies on blended learning courses for company training, the article discusses whether use of new Web 2.0 and social software tools may help overcome previous limitations of e-learning.

  3. A numerical similarity approach for using retired Current Procedural Terminology (CPT) codes for electronic phenotyping in the Scalable Collaborative Infrastructure for a Learning Health System (SCILHS).

    Science.gov (United States)

    Klann, Jeffrey G; Phillips, Lori C; Turchin, Alexander; Weiler, Sarah; Mandl, Kenneth D; Murphy, Shawn N

    2015-12-11

    Interoperable phenotyping algorithms, needed to identify patient cohorts meeting eligibility criteria for observational studies or clinical trials, require medical data in a consistent structured, coded format. Data heterogeneity limits such algorithms' applicability. Existing approaches are often: not widely interoperable; or, have low sensitivity due to reliance on the lowest common denominator (ICD-9 diagnoses). In the Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS) we endeavor to use the widely-available Current Procedural Terminology (CPT) procedure codes with ICD-9. Unfortunately, CPT changes drastically year-to-year - codes are retired/replaced. Longitudinal analysis requires grouping retired and current codes. BioPortal provides a navigable CPT hierarchy, which we imported into the Informatics for Integrating Biology and the Bedside (i2b2) data warehouse and analytics platform. However, this hierarchy does not include retired codes. We compared BioPortal's 2014AA CPT hierarchy with Partners Healthcare's SCILHS datamart, comprising three-million patients' data over 15 years. 573 CPT codes were not present in 2014AA (6.5 million occurrences). No existing terminology provided hierarchical linkages for these missing codes, so we developed a method that automatically places missing codes in the most specific "grouper" category, using the numerical similarity of CPT codes. Two informaticians reviewed the results. We incorporated the final table into our i2b2 SCILHS/PCORnet ontology, deployed it at seven sites, and performed a gap analysis and an evaluation against several phenotyping algorithms. The reviewers found the method placed the code correctly with 97 % precision when considering only miscategorizations ("correctness precision") and 52 % precision using a gold-standard of optimal placement ("optimality precision"). High correctness precision meant that codes were placed in a reasonable hierarchal position that a reviewer
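
    The placement method described above relies on the numerical structure of CPT codes: a retired code is assigned to the most specific grouper whose numeric range lies closest. The sketch below illustrates that idea with invented grouper names and ranges (not real CPT data or the study's actual algorithm):

```python
# Hypothetical sketch of numerical-similarity placement: assign a retired
# code to the grouper whose numeric code range is closest. Groupers and
# ranges below are invented for illustration.

groupers = {
    "surgery-musculoskeletal": (20000, 29999),
    "radiology-diagnostic":    (70000, 76499),
    "pathology-lab":           (80000, 89398),
}

def place_retired_code(code, groupers):
    def distance(rng):
        lo, hi = rng
        if lo <= code <= hi:
            return 0                      # code falls inside the range
        return min(abs(code - lo), abs(code - hi))
    # pick the grouper whose range is numerically nearest to the code
    return min(groupers, key=lambda g: distance(groupers[g]))

print(place_retired_code(76501, groupers))  # → "radiology-diagnostic"
```

    A retired code just outside a range (76501 here) lands in the nearest grouper, which matches the paper's intuition that numerically adjacent CPT codes tend to describe related procedures.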

  4. Administrative database code accuracy did not vary notably with changes in disease prevalence.

    Science.gov (United States)

    van Walraven, Carl; English, Shane; Austin, Peter C

    2016-11-01

    Previous mathematical analyses of diagnostic tests based on the categorization of a continuous measure have found that test sensitivity and specificity vary significantly with disease prevalence. This study determined whether the accuracy of diagnostic codes varied by disease prevalence. We used data from two previous studies in which the true status of renal disease and primary subarachnoid hemorrhage, respectively, had been determined. In multiple stratified random samples from the two previous studies having varying disease prevalence, we measured the accuracy of diagnostic codes for each disease using sensitivity, specificity, and positive and negative predictive value. Diagnostic code sensitivity and specificity did not change notably within clinically sensible ranges of disease prevalence. In contrast, positive and negative predictive values changed significantly with disease prevalence. Disease prevalence had no important influence on the sensitivity and specificity of diagnostic codes in administrative databases. Copyright © 2016 Elsevier Inc. All rights reserved.
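
    The contrast the study reports follows directly from Bayes' theorem: sensitivity and specificity are properties of the test, while predictive values also depend on prevalence. A short worked example (illustrative numbers, not the study's data) makes this concrete:

```python
# With sensitivity and specificity held fixed, positive predictive value
# (PPV) shifts strongly with disease prevalence, by Bayes' theorem:
#   PPV = sens * prev / (sens * prev + (1 - spec) * (1 - prev))

def ppv(sens, spec, prev):
    true_pos = sens * prev                  # P(test+, disease+)
    false_pos = (1.0 - spec) * (1.0 - prev) # P(test+, disease-)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.90, 0.95
for prev in (0.01, 0.10, 0.50):
    print(f"prevalence {prev:.0%}: PPV = {ppv(sens, spec, prev):.2f}")
```

    With a 90%-sensitive, 95%-specific code, PPV climbs from roughly 0.15 at 1% prevalence to above 0.9 at 50% prevalence, exactly the prevalence dependence the study observed for predictive values but not for sensitivity and specificity.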

  5. Machine-Learning Algorithms to Code Public Health Spending Accounts.

    Science.gov (United States)

    Brady, Eoghan S; Leider, Jonathon P; Resnick, Beth A; Alfonso, Y Natalia; Bishai, David

    Government public health expenditure data sets require time- and labor-intensive manipulation to summarize results that public health policy makers can use. Our objective was to compare the performances of machine-learning algorithms with manual classification of public health expenditures to determine if machines could provide a faster, cheaper alternative to manual classification. We used machine-learning algorithms to replicate the process of manually classifying state public health expenditures, using the standardized public health spending categories from the Foundational Public Health Services model and a large data set from the US Census Bureau. We obtained a data set of 1.9 million individual expenditure items from 2000 to 2013. We collapsed these data into 147 280 summary expenditure records, and we followed a standardized method of manually classifying each expenditure record as public health, maybe public health, or not public health. We then trained 9 machine-learning algorithms to replicate the manual process. We calculated recall, precision, and coverage rates to measure the performance of individual and ensembled algorithms. Compared with manual classification, the machine-learning random forests algorithm produced 84% recall and 91% precision. With algorithm ensembling, we achieved our target criterion of 90% recall by using a consensus ensemble of ≥6 algorithms while still retaining 93% coverage, leaving only 7% of the summary expenditure records unclassified. Machine learning can be a time- and cost-saving tool for estimating public health spending in the United States. It can be used with standardized public health spending categories based on the Foundational Public Health Services model to help parse public health expenditure information from other types of health-related spending, provide data that are more comparable across public health organizations, and evaluate the impact of evidence-based public health resource allocation.
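
    The ensembling rule described above assigns a label only when enough individual algorithms agree, leaving the rest unclassified. A minimal sketch of that consensus rule (votes below are invented placeholders, not the study's classifier outputs):

```python
from collections import Counter

# Consensus-ensemble rule: a record receives a label only when at least
# `quorum` of the individual classifiers agree; otherwise it stays
# unclassified (None), trading coverage for precision.

def consensus_label(votes, quorum=6):
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= quorum else None

votes = ["public_health"] * 7 + ["not_public_health"] * 2
print(consensus_label(votes))          # → "public_health"

votes = ["public_health"] * 4 + ["maybe"] * 3 + ["not_public_health"] * 2
print(consensus_label(votes))          # → None (no quorum: unclassified)
```

    Raising the quorum is how the authors reached their 90% recall target while accepting that a small fraction of records (7% in their data) remains unclassified.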

  6. Verification of gyrokinetic microstability codes with an LHD configuration

    Energy Technology Data Exchange (ETDEWEB)

    Mikkelsen, D. R. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Nunami, M. [National Inst. for Fusion Science (Japan); Watanabe, T. -H. [Nagoya Univ. (Japan); Sugama, H. [National Inst. for Fusion Science (Japan); Tanaka, K. [National Inst. for Fusion Science (Japan)

    2014-11-01

    We extend previous benchmarks of the GS2 and GKV-X codes to verify their algorithms for solving the gyrokinetic Vlasov-Poisson equations for plasma microturbulence. Code benchmarks are the most complete way of verifying the correctness of implementations for the solution of mathematical models for complex physical processes such as those studied here. The linear stability calculations reported here are based on the plasma conditions of an ion-ITB plasma in the LHD configuration. The plasma parameters and the magnetic geometry differ from previous benchmarks involving these codes. We find excellent agreement between the independently written pre-processors that calculate the geometrical coefficients used in the gyrokinetic equations. Grid convergence tests are used to establish the resolution and domain size needed to obtain converged linear stability results. The agreement of the frequencies, growth rates and eigenfunctions in the benchmarks reported here provides additional verification that the algorithms used by the GS2 and GKV-X codes are correctly finding the linear eigenvalues and eigenfunctions of the gyrokinetic Vlasov-Poisson equations.

  7. Monte-Carlo code calculation of 3D reactor core model with usage of burnt fuel isotopic compositions, obtained by engineering codes

    Energy Technology Data Exchange (ETDEWEB)

    Aleshin, Sergey S.; Gorodkov, Sergey S.; Shcherenko, Anna I. [National Research Centre ' Kurchatov Institute' , Moscow (Russian Federation)

    2016-09-15

    A burn-up calculation of large systems by a Monte-Carlo code (MCU) is a complex process and requires large computational costs. Previously prepared isotopic compositions are proposed to be used for Monte-Carlo code calculations of different system states with burnt fuel. The isotopic compositions are calculated by an approximation method based on a spectral functional and on reference isotopic compositions calculated by the engineering codes (TVS-M, BIPR-7A and PERMAK-A). The multiplication factors and power distributions of FAs from a 3-D reactor core are calculated in this work by the Monte-Carlo code MCU using the previously prepared isotopic compositions. Separate states of the burnt core are considered. The results of the MCU calculations were compared with those obtained by the engineering codes.

  8. Cognitive Architectures for Multimedia Learning

    Science.gov (United States)

    Reed, Stephen K.

    2006-01-01

    This article provides a tutorial overview of cognitive architectures that can form a theoretical foundation for designing multimedia instruction. Cognitive architectures include a description of memory stores, memory codes, and cognitive operations. Architectures that are relevant to multimedia learning include Paivio's dual coding theory,…

  9. Learning to Act Like a Lawyer: A Model Code of Professional Responsibility for Law Students

    Directory of Open Access Journals (Sweden)

    David M. Tanovich

    2009-02-01

Full Text Available Law students are the future of the legal profession. How well prepared are they when they leave law school to assume the professional and ethical obligations that they owe themselves, the profession and the public? This question has led to a growing interest in Canada in the teaching of legal ethics. It has also led to a greater emphasis on the development of clinical and experiential learning as exemplified in the scholarship and teaching of Professor Rose Voyvodic. Less attention, however, has been placed on identifying the general ethical responsibilities of law students when not working in a clinic or other legal context. This can be seen in the presence of very few Canadian articles exploring the issue, and more significantly, in the paucity of law school discipline policies or codes of conduct that set out the professional obligations owed by law students. This article develops an idea that Professor Voyvodic and I talked about on a number of occasions. It argues that all law schools should have a code of conduct which is separate and distinct from their general University code and which resembles, with appropriate modifications, the relevant set of rules of professional responsibility law students will be bound by when called to the Bar. A student code of conduct which educates law students about their professional obligations is an important step in deterring unprofessional conduct while in law school and preparing students for ethical practice. The idea of a law school code of professional responsibility raises a number of questions. Why is it necessary for law schools to have their own student code of conduct? The article provides a threefold response. First, law students are members of the legal profession and a code of conduct should reflect this. Second, it must be relevant and comprehensive in order to ensure that it can inspire students to be ethical lawyers. And, third, as a practical matter, the last few years have witnessed a number of

  10. Unsupervised clustering with spiking neurons by sparse temporal coding and multi-layer RBF networks

    NARCIS (Netherlands)

    S.M. Bohte (Sander); J.A. La Poutré (Han); J.N. Kok (Joost)

    2000-01-01

    textabstractWe demonstrate that spiking neural networks encoding information in spike times are capable of computing and learning clusters from realistic data. We show how a spiking neural network based on spike-time coding and Hebbian learning can successfully perform unsupervised clustering on

  11. Partial Safety Factors and Target Reliability Level in Danish Structural Codes

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Hansen, J. O.; Nielsen, T. A.

    2001-01-01

    The partial safety factors in the newly revised Danish structural codes have been derived using a reliability-based calibration. The calibrated partial safety factors result in the same average reliability level as in the previous codes, but a much more uniform reliability level has been obtained....... The paper describes the code format, the stochastic models and the resulting optimised partial safety factors....

  12. Using Quick Response Codes in the Classroom: Quality Outcomes.

    Science.gov (United States)

    Zurmehly, Joyce; Adams, Kellie

    2017-10-01

    With smart device technology emerging, educators are challenged with redesigning teaching strategies using technology to allow students to participate dynamically and provide immediate answers. To facilitate integration of technology and to actively engage students, quick response codes were included in a medical surgical lecture. Quick response codes are two-dimensional square patterns that enable the coding or storage of more than 7000 characters that can be accessed via a quick response code scanning application. The aim of this quasi-experimental study was to explore quick response code use in a lecture and measure students' satisfaction (met expectations, increased interest, helped understand, and provided practice and prompt feedback) and engagement (liked most, liked least, wanted changed, and kept involved), assessed using an investigator-developed instrument. Although there was no statistically significant correlation of quick response use to examination scores, satisfaction scores were high, and there was a small yet positive association between how students perceived their learning with quick response codes and overall examination scores. Furthermore, on open-ended survey questions, students responded that they were satisfied with the use of quick response codes, appreciated the immediate feedback, and planned to use them in the clinical setting. Quick response codes offer a way to integrate technology into the classroom to provide students with instant positive feedback.

  13. Decoding of interleaved Reed-Solomon codes using improved power decoding

    DEFF Research Database (Denmark)

    Puchinger, Sven; Rosenkilde ne Nielsen, Johan

    2017-01-01

We propose a new partial decoding algorithm for m-interleaved Reed-Solomon (IRS) codes that can decode, with high probability, a random error of relative weight 1 − R^(m/(m+1)) at all code rates R, in time polynomial in the code length n. For m > 2, this is an asymptotic improvement over the previous state-of-the-art for all rates, and the first improvement for R > 1/3 in the last 20 years. The method combines collaborative decoding of IRS codes with power decoding up to the Johnson radius.

  14. Pre-Service Teachers' Perception of Quick Response (QR) Code Integration in Classroom Activities

    Science.gov (United States)

    Ali, Nagla; Santos, Ieda M.; Areepattamannil, Shaljan

    2017-01-01

    Quick Response (QR) codes have been discussed in the literature as adding value to teaching and learning. Despite their potential in education, more research is needed to inform practice and advance knowledge in this field. This paper investigated the integration of the QR code in classroom activities and the perceptions of the integration by…

  15. Development of EASYQAD version β: A Visualization Code System for QAD-CGGP-A Gamma and Neutron Shielding Calculation Code

    International Nuclear Information System (INIS)

    Kim, Jae Cheon; Lee, Hwan Soo; Ha, Pham Nhu Viet; Kim, Soon Young; Shin, Chang Ho; Kim, Jong Kyung

    2007-01-01

EASYQAD had previously been developed using a MATLAB GUI (Graphical User Interface) at Hanyang University in order to perform gamma and neutron shielding calculations conveniently. It had been completed as version α of the radiation shielding analysis code. In this study, EASYQAD was upgraded to version β with many additional functions and more user-friendly graphical interfaces. So that general users can run it on a Windows XP environment without any MATLAB installation, this version was developed as a standalone code system.

  16. Local stabilizer codes in three dimensions without string logical operators

    International Nuclear Information System (INIS)

    Haah, Jeongwan

    2011-01-01

We suggest concrete models for self-correcting quantum memory by reporting examples of local stabilizer codes in 3D that have no string logical operators. Previously known local stabilizer codes in 3D all have stringlike logical operators, which make the codes non-self-correcting. We introduce a notion of "logical string segments" to avoid difficulties in defining one-dimensional objects in discrete lattices. We prove that every stringlike logical operator of our code can be deformed to a disjoint union of short segments, each of which is in the stabilizer group. The code has surfacelike logical operators whose partial implementation has unsatisfied stabilizers along its boundary.

  17. A proto-code of ethics and conduct for European nurse directors.

    Science.gov (United States)

    Stievano, Alessandro; De Marinis, Maria Grazia; Kelly, Denise; Filkins, Jacqueline; Meyenburg-Altwarg, Iris; Petrangeli, Mauro; Tschudin, Verena

    2012-03-01

    The proto-code of ethics and conduct for European nurse directors was developed as a strategic and dynamic document for nurse managers in Europe. It invites critical dialogue, reflective thinking about different situations, and the development of specific codes of ethics and conduct by nursing associations in different countries. The term proto-code is used for this document so that specifically country-orientated or organization-based and practical codes can be developed from it to guide professionals in more particular or situation-explicit reflection and values. The proto-code of ethics and conduct for European nurse directors was designed and developed by the European Nurse Directors Association's (ENDA) advisory team. This article gives short explanations of the code' s preamble and two main parts: Nurse directors' ethical basis, and Principles of professional practice, which is divided into six specific points: competence, care, safety, staff, life-long learning and multi-sectorial working.

  18. Analyzing stakeholders' workshop dialogue for evidence of social learning

    Directory of Open Access Journals (Sweden)

    Amanda L. Bentley Brymer

    2018-03-01

Full Text Available After much debate and synthesis, social learning scholarship is entering an era of empirical research. Given the range across individual-, network-, and systems-level perspectives and scales, clear documentation of social learning processes is critical for making claims about social learning outcomes and their impacts. Past studies have relied on participant recall and concept maps to document perceptions of social learning process and outcome. Using an individual-centric perspective and importing ideas from communication and psychology on question-answer learning through conversational agents, we contribute an expanded conceptual framework and qualitative analytical strategy for assessing stakeholder dialogue for evidence of social learning. We observed stakeholder dialogue across five workshops coordinated for the Bruneau-Owyhee Sage-Grouse Habitat Project (BOSH) in Owyhee County, Idaho, USA. Participants' dialogue was audio recorded, transcribed, and analyzed for cross-case patterns. Deductive and inductive coding techniques were applied to illuminate cognitive, relational, and epistemic dimensions of learning and topics of learning. A key finding supports our inclusion of the epistemic dimension and highlights a need for future research: although some participants articulated epistemic positions, they did not challenge each other to share sources or justify factual claims. These findings align with previous research suggesting that, in addition to considering diversity and representation (who is at the table), we should pay more attention to how participants talk, perhaps prompting specific patterns of speech as we endeavor to draw causal connections between social learning processes and outcomes.

  19. A database of linear codes over F_13 with minimum distance bounds and new quasi-twisted codes from a heuristic search algorithm

    Directory of Open Access Journals (Sweden)

    Eric Z. Chen

    2015-01-01

Full Text Available Error control codes have been widely used in data communications and storage systems. One central problem in coding theory is to optimize the parameters of a linear code and construct codes with best possible parameters. There are tables of best-known linear codes over finite fields of sizes up to 9. Recently, there has been a growing interest in codes over $\mathbb{F}_{13}$ and other fields of size greater than 9. The main purpose of this work is to present a database of best-known linear codes over the field $\mathbb{F}_{13}$ together with upper bounds on the minimum distances. To find good linear codes to establish lower bounds on minimum distances, an iterative heuristic computer search algorithm is employed to construct quasi-twisted (QT) codes over the field $\mathbb{F}_{13}$ with high minimum distances. A large number of new linear codes have been found, improving previously best-known results. Tables of $[pm, m]$ QT codes over $\mathbb{F}_{13}$ with best-known minimum distances as well as a table of lower and upper bounds on the minimum distances for linear codes of length up to 150 and dimension up to 6 are presented.
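Finding true minimum distances is what makes such tables hard to build: exhaustive search over all q^k codewords is feasible only for very small dimensions, which is why heuristic searches are needed at lengths up to 150. As a minimal sketch of the brute-force baseline over F_13, the generator matrix below is an invented [4, 2] example, not one from the database:

```python
from itertools import product

q = 13  # field size for F_13

def min_distance(G):
    """Brute-force minimum Hamming distance of the linear code generated
    by the rows of G over F_q (only feasible for small dimension k)."""
    k = len(G)       # dimension
    n = len(G[0])    # length
    best = n
    # Enumerate all q^k message vectors except the all-zero vector.
    for msg in product(range(q), repeat=k):
        if not any(msg):
            continue
        codeword = [sum(m * g[j] for m, g in zip(msg, G)) % q for j in range(n)]
        weight = sum(1 for c in codeword if c != 0)
        best = min(best, weight)
    return best

# Hypothetical [4, 2] code over F_13 in systematic form [I | A].
G = [[1, 0, 1, 1],
     [0, 1, 1, 2]]
print(min_distance(G))  # → 3 (meets the Singleton bound n - k + 1)
```

The cost grows as 13^k, so the heuristic QT search in the paper replaces this enumeration for anything beyond toy dimensions.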

  20. Teachers’ Learning Design Practice for Students as Learning Designers

    DEFF Research Database (Denmark)

    Levinsen, Karin Tweddell; Sørensen, Birgitte Holm

    2018-01-01

    This paper contributes with elements of an emerging learning design methodology. The paper takes as its starting point the theory of Students as Learning Designers, which was developed by Sørensen and Levinsen and based on more than a decade of research-and-development projects in Danish primary...... schools (first to 10th grade). The research focussed on information and communication technology (ICT) within the Scandinavian tradition of Problem Oriented Project Pedagogy (POPP), Problem Based Learning (PBL) and students’ production. In recent years, the projects that provide the grounding...... for the theory have focussed specifically on learning designs that constitute students as learning designers of digital productions (both multimodal and coded productions). This includes learning designs that contribute to students’ empowerment, involvement and autonomy within the teacher-designed frameworks...

  1. A comprehensive study of sparse codes on abnormality detection

    DEFF Research Database (Denmark)

    Ren, Huamin; Pan, Hong; Olsen, Søren Ingvor

    2017-01-01

Sparse representation has been applied successfully in abnormal event detection, in which the baseline is to learn a dictionary accompanied by sparse codes. While much emphasis is put on discriminative dictionary construction, there are no comparative studies of sparse codes regarding abnormality detection. We comprehensively study two types of sparse-code solutions - greedy algorithms and convex L1-norm solutions - and their impact on abnormality detection performance. We also propose our framework of combining sparse codes with different detection methods. Our comparative experiments are carried...

  2. Game-Coding Workshops in New Zealand Public Libraries: Evaluation of a Pilot Project

    Science.gov (United States)

    Bolstad, Rachel

    2016-01-01

    This report evaluates a game coding workshop offered to young people and adults in seven public libraries round New Zealand. Participants were taken step by step through the process of creating their own simple 2D videogame, learning the basics of coding, computational thinking, and digital game design. The workshops were free and drew 426 people…

  3. Final Technical Report: Hydrogen Codes and Standards Outreach

    Energy Technology Data Exchange (ETDEWEB)

    Hall, Karen I.

    2007-05-12

    This project contributed significantly to the development of new codes and standards, both domestically and internationally. The NHA collaborated with codes and standards development organizations to identify technical areas of expertise that would be required to produce the codes and standards that industry and DOE felt were required to facilitate commercialization of hydrogen and fuel cell technologies and infrastructure. NHA staff participated directly in technical committees and working groups where issues could be discussed with the appropriate industry groups. In other cases, the NHA recommended specific industry experts to serve on technical committees and working groups where the need for this specific industry expertise would be on-going, and where this approach was likely to contribute to timely completion of the effort. The project also facilitated dialog between codes and standards development organizations, hydrogen and fuel cell experts, the government and national labs, researchers, code officials, industry associations, as well as the public regarding the timeframes for needed codes and standards, industry consensus on technical issues, procedures for implementing changes, and general principles of hydrogen safety. The project facilitated hands-on learning, as participants in several NHA workshops and technical meetings were able to experience hydrogen vehicles, witness hydrogen refueling demonstrations, see metal hydride storage cartridges in operation, and view other hydrogen energy products.

  4. Discovery of Proteomic Code with mRNA Assisted Protein Folding

    Directory of Open Access Journals (Sweden)

    Jan C. Biro

    2008-12-01

Full Text Available The 3x redundancy of the Genetic Code is usually explained as a necessity to increase the mutation-resistance of the genetic information. However, recent bioinformatical observations indicate that the redundant Genetic Code contains more biological information than previously known, which is additional to the 64/20 definition of amino acids. It might define the physico-chemical and structural properties of amino acids, the codon boundaries, the amino acid co-locations (interactions) in the coded proteins and the free folding energy of mRNAs. This additional information, which seems to be necessary to determine the 3D structure of coding nucleic acids as well as the coded proteins, is known as the Proteomic Code and mRNA Assisted Protein Folding.

  5. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  6. The "periodic table" of the genetic code: A new way to look at the code and the decoding process.

    Science.gov (United States)

    Komar, Anton A

    2016-01-01

    Henri Grosjean and Eric Westhof recently presented an information-rich, alternative view of the genetic code, which takes into account current knowledge of the decoding process, including the complex nature of interactions between mRNA, tRNA and rRNA that take place during protein synthesis on the ribosome, and it also better reflects the evolution of the code. The new asymmetrical circular genetic code has a number of advantages over the traditional codon table and the previous circular diagrams (with a symmetrical/clockwise arrangement of the U, C, A, G bases). Most importantly, all sequence co-variances can be visualized and explained based on the internal logic of the thermodynamics of codon-anticodon interactions.
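The redundancy that both the traditional table and the circular diagrams organize can be shown with a toy translation routine. The codon-to-amino-acid entries below are a small subset of the standard table, chosen only to illustrate synonymous codons:

```python
# A few entries of the standard genetic code, illustrating its redundancy:
# synonymous codons (often differing at the third, "wobble" position)
# encode the same amino acid. Only a subset of the 64 codons is listed.
CODON_TABLE = {
    "UUU": "Phe", "UUC": "Phe",
    "CUU": "Leu", "CUC": "Leu", "CUA": "Leu", "CUG": "Leu",
    "GCU": "Ala", "GCC": "Ala", "GCA": "Ala", "GCG": "Ala",
    "AUG": "Met",
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
}

def translate(mrna):
    """Translate an mRNA string codon by codon until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "Stop":
            break
        peptide.append(aa)
    return "-".join(peptide)

print(translate("AUGGCUCUGUAA"))  # → Met-Ala-Leu
```

A lookup table of course captures none of the codon-anticodon thermodynamics that motivates the new circular arrangement; it only makes the 64-to-20 mapping concrete.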

  7. A Case for Dynamic Reverse-code Generation to Debug Non-deterministic Programs

    Directory of Open Access Journals (Sweden)

    Jooyong Yi

    2013-09-01

Full Text Available Backtracking (i.e., reverse execution) helps the user of a debugger to naturally think backwards along the execution path of a program, and thinking backwards makes it easy to locate the origin of a bug. So far backtracking has been implemented mostly by state saving or by checkpointing. These implementations, however, inherently do not scale. Meanwhile, a more recent backtracking method based on reverse-code generation seems promising because executing reverse code can restore the previous states of a program without state saving. In the literature, there can be found two methods that generate reverse code: (a) static reverse-code generation that pre-generates reverse code through static analysis before starting a debugging session, and (b) dynamic reverse-code generation that generates reverse code by applying dynamic analysis on the fly during a debugging session. In particular, we espoused the latter one in our previous work to accommodate non-determinism of a program caused by e.g., multi-threading. To demonstrate the usefulness of our dynamic reverse-code generation, this article presents a case study of various backtracking methods including ours. We compare the memory usage of various backtracking methods in a simple but nontrivial example, a bounded-buffer program. In the case of non-deterministic programs such as this bounded-buffer program, our dynamic reverse-code generation outperforms the existing backtracking methods in terms of memory efficiency.
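As a rough illustration of the idea (not the authors' algorithm), reverse code for a constructive update such as `x += 5` can be generated as `x -= 5`, while a destructive assignment still forces saving the old value; the class and method names below are invented for the sketch:

```python
# Toy sketch of reverse-code generation for backtracking: each forward
# update records a closure that undoes it, so earlier program states can
# be restored without full snapshots. Constructive updates (add) need no
# saved state; destructive updates (assign) must save the old value.

class ReversibleStore:
    def __init__(self):
        self.vars = {}
        self.reverse_log = []   # stack of undo closures (the "reverse code")

    def add(self, name, delta):
        # Constructive update: the inverse is simply subtracting delta.
        self.vars[name] = self.vars.get(name, 0) + delta
        self.reverse_log.append(
            lambda: self.vars.__setitem__(name, self.vars[name] - delta))

    def assign(self, name, value):
        # Destructive update: the old value must be saved to invert it.
        old = self.vars.get(name)
        self.vars[name] = value
        def undo():
            if old is None:
                del self.vars[name]
            else:
                self.vars[name] = old
        self.reverse_log.append(undo)

    def backtrack(self, steps=1):
        # Execute reverse code in LIFO order to restore earlier states.
        for _ in range(steps):
            self.reverse_log.pop()()

s = ReversibleStore()
s.assign("x", 10)
s.add("x", 5)        # x == 15
s.backtrack()        # undo the add
print(s.vars["x"])   # → 10
```

The memory argument in the abstract corresponds to the `add` case: invertible operations cost only a closure, not a copy of the state.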

  8. Python for probability, statistics, and machine learning

    CERN Document Server

    Unpingco, José

    2016-01-01

    This book covers the key ideas that link probability, statistics, and machine learning illustrated using Python modules in these areas. The entire text, including all the figures and numerical results, is reproducible using the Python codes and their associated Jupyter/IPython notebooks, which are provided as supplementary downloads. The author develops key intuitions in machine learning by working meaningful examples using multiple analytical methods and Python codes, thereby connecting theoretical concepts to concrete implementations. Modern Python modules like Pandas, Sympy, and Scikit-learn are applied to simulate and visualize important machine learning concepts like the bias/variance trade-off, cross-validation, and regularization. Many abstract mathematical ideas, such as convergence in probability theory, are developed and illustrated with numerical examples. This book is suitable for anyone with an undergraduate-level exposure to probability, statistics, or machine learning and with rudimentary knowl...
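One of the concepts the book demonstrates with scikit-learn, k-fold cross-validation, can be sketched in dependency-free Python; the mean predictor and the data below are placeholders, not examples from the book:

```python
import random

def k_fold_cv(data, k=5, seed=0):
    """Estimate out-of-sample mean-squared error of a mean predictor
    with k-fold cross-validation (pure-Python sketch of the idea the
    book illustrates with scikit-learn)."""
    rng = random.Random(seed)
    data = data[:]
    rng.shuffle(data)
    folds = [data[i::k] for i in range(k)]   # k roughly equal folds
    errors = []
    for i in range(k):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        prediction = sum(train) / len(train)  # "fit": the sample mean
        errors.append(sum((x - prediction) ** 2 for x in test) / len(test))
    return sum(errors) / k                    # average held-out error

data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
print(round(k_fold_cv(data), 3))
```

Each observation is held out exactly once, so the returned error estimates generalization rather than fit to the training sample.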

  9. BAR-MOM code and its application

    International Nuclear Information System (INIS)

    Wang Shunuan

    2002-01-01

    BAR-MOM code for calculating the height of the fission barrier Bf , the energy of the ground state is presented; the compound nucleus stability by limit with respect to fission, i.e., the angular momentum (the spin value) L max at which the fission barrier disappears, the three principal axis moments of inertia at saddle point for a certain nucleus with atomic number Z, atomic mass number A and angular momentum L in units of ℎ for 19< Z<102, and the model used are introduced briefly. The generalized BAR-MOM code to include the results for Z ≥ 102 by using more recent parameterization of the Thomas Fermi fission barrier is also introduced briefly. We have learned the models used in Code BAR-MOM, and run it successfully and correctly for a certain nucleus with atomic mass number A, atomic number Z, and angular momentum L on PC by Fortran-90. The testing calculation values to check the implementation of the program show that the results of the present work are in good agreement with the original one

  10. Continuous Non-malleable Codes

    DEFF Research Database (Denmark)

    Faust, Sebastian; Mukherjee, Pratyay; Nielsen, Jesper Buus

    2014-01-01

    or modify it to the encoding of a completely unrelated value. This paper introduces an extension of the standard non-malleability security notion - so-called continuous non-malleability - where we allow the adversary to tamper continuously with an encoding. This is in contrast to the standard notion of non...... is necessary to achieve continuous non-malleability in the split-state model. Moreover, we illustrate that none of the existing constructions satisfies our uniqueness property and hence is not secure in the continuous setting. We construct a split-state code satisfying continuous non-malleability. Our scheme...... is based on the inner product function, collision-resistant hashing and non-interactive zero-knowledge proofs of knowledge and requires an untamperable common reference string. We apply continuous non-malleable codes to protect arbitrary cryptographic primitives against tampering attacks. Previous...

  11. Interval Coded Scoring: a toolbox for interpretable scoring systems

    Directory of Open Access Journals (Sweden)

    Lieven Billiet

    2018-04-01

Full Text Available Over the last decades, clinical decision support systems have been gaining importance. They help clinicians to make effective use of the overload of available information to obtain correct diagnoses and appropriate treatments. However, their power often comes at the cost of a black box model which cannot be interpreted easily. This interpretability is of paramount importance in a medical setting with regard to trust and (legal) responsibility. In contrast, existing medical scoring systems are easy to understand and use, but they are often a simplified rule-of-thumb summary of previous medical experience rather than a well-founded system based on available data. Interval Coded Scoring (ICS) connects these two approaches, exploiting the power of sparse optimization to derive scoring systems from training data. The presented toolbox interface makes this theory easily applicable to both small and large datasets. It contains two possible problem formulations based on linear programming or elastic net. Both allow to construct a model for a binary classification problem and establish risk profiles that can be used for future diagnosis. All of this requires only a few lines of code. ICS differs from standard machine learning through its model consisting of interpretable main effects and interactions. Furthermore, insertion of expert knowledge is possible because the training can be semi-automatic. This allows end users to make a trade-off between complexity and performance based on cross-validation results and expert knowledge. Additionally, the toolbox offers an accessible way to assess classification performance via accuracy and the ROC curve, whereas the calibration of the risk profile can be evaluated via a calibration curve. Finally, the colour-coded model visualization has particular appeal if one wants to apply ICS manually on new observations, as well as for validation by experts in the specific application domains.
The validity and applicability
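A hand-made miniature of such an interval-coded scoring system may help fix ideas. The features, intervals, point values and threshold below are invented for illustration; ICS itself learns them from data via sparse optimization:

```python
# Toy point-based scoring system with interval-coded features
# (illustrative only; ICS derives intervals and points from data).
# Each feature value falls into one interval, which contributes points.
SCORE_TABLE = {
    "age":            [((0, 40), 0), ((40, 60), 1), ((60, 200), 2)],
    "blood_pressure": [((0, 120), 0), ((120, 140), 1), ((140, 400), 3)],
}

def score(patient):
    """Sum the points of the interval each feature value falls into."""
    total = 0
    for feature, intervals in SCORE_TABLE.items():
        value = patient[feature]
        for (lo, hi), points in intervals:
            if lo <= value < hi:
                total += points
                break
    return total

def classify(patient, threshold=3):
    """Binary decision: flag as high risk once the score reaches the threshold."""
    return score(patient) >= threshold

print(score({"age": 65, "blood_pressure": 150}))  # → 5
```

The appeal described in the abstract is visible even here: the model can be applied by hand, and each point total can be read back as a risk profile.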

  12. Association of Amine-Receptor DNA Sequence Variants with Associative Learning in the Honeybee.

    Science.gov (United States)

    Lagisz, Malgorzata; Mercer, Alison R; de Mouzon, Charlotte; Santos, Luana L S; Nakagawa, Shinichi

    2016-03-01

Octopamine- and dopamine-based neuromodulatory systems play a critical role in learning and learning-related behaviour in insects. To further our understanding of these systems and resulting phenotypes, we quantified DNA sequence variations at six loci coding octopamine- and dopamine-receptors and their association with aversive and appetitive learning traits in a population of honeybees. We identified 79 polymorphic sequence markers (mostly SNPs and a few insertions/deletions) located within or close to six candidate genes. Intriguingly, we found that levels of sequence variation in the protein-coding regions studied were low, indicating that sequence variation in the coding regions of receptor genes critical to learning and memory is strongly selected against. Non-coding and upstream regions of the same genes, however, were less conserved, and sequence variations in these regions were weakly associated with between-individual differences in learning-related traits. While these associations do not directly imply a specific molecular mechanism, they suggest that the cross-talk between dopamine and octopamine signalling pathways may influence olfactory learning and memory in the honeybee.

  13. GAPCON-THERMAL-3 code description

    International Nuclear Information System (INIS)

    Lanning, D.D.; Mohr, C.L.; Panisko, F.E.; Stewart, K.B.

    1978-01-01

GAPCON-3 is a computer program that predicts the thermal and mechanical behavior of an operating fuel rod during its normal lifetime. The code calculates temperatures, dimensions, stresses, and strains for the fuel and the cladding in both the radial and axial directions for each step of the user-specified power history. The method of weighted residuals is used for the steady-state temperature calculation, and is combined with a finite difference approximation of the time derivative for transient conditions. The stress-strain analysis employs an iterative axisymmetric finite element procedure that includes plasticity and creep for normal and pellet-clad mechanical interaction loads. GAPCON-3 can solve steady state and operational transient problems. Comparisons of GAPCON-3 predictions to both closed form analytical solutions and actual inpile instrumented fuel rod data have demonstrated the ability of the code to calculate fuel rod behavior. GAPCON-3 features a restart capability and an associated plot package unavailable in previous GAPCON series codes

  14. GAPCON-THERMAL-3 code description

    Energy Technology Data Exchange (ETDEWEB)

    Lanning, D.D.; Mohr, C.L.; Panisko, F.E.; Stewart, K.B.

    1978-01-01

GAPCON-3 is a computer program that predicts the thermal and mechanical behavior of an operating fuel rod during its normal lifetime. The code calculates temperatures, dimensions, stresses, and strains for the fuel and the cladding in both the radial and axial directions for each step of the user-specified power history. The method of weighted residuals is used for the steady-state temperature calculation, and is combined with a finite difference approximation of the time derivative for transient conditions. The stress-strain analysis employs an iterative axisymmetric finite element procedure that includes plasticity and creep for normal and pellet-clad mechanical interaction loads. GAPCON-3 can solve steady state and operational transient problems. Comparisons of GAPCON-3 predictions to both closed form analytical solutions and actual inpile instrumented fuel rod data have demonstrated the ability of the code to calculate fuel rod behavior. GAPCON-3 features a restart capability and an associated plot package unavailable in previous GAPCON series codes.
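The temperature step in such fuel-performance codes can be caricatured in one dimension: a slab with uniform heat generation, discretized by finite differences and solved with the Thomas algorithm. This is only a toy analogue (GAPCON-3 itself uses weighted residuals and cylindrical fuel-rod geometry), and all numbers below are illustrative:

```python
def fuel_slab_temperature(n=49, L=0.01, k=3.0, q=2.0e8, T0=600.0):
    """Steady 1-D conduction with uniform heat generation q [W/m^3],
    conductivity k [W/m/K], boundary temperature T0 [K]:
        (T[i-1] - 2*T[i] + T[i+1]) / h^2 = -q / k
    Solved for n interior nodes with the Thomas (tridiagonal) algorithm."""
    h = L / (n + 1)
    rhs = [q * h * h / k] * n
    rhs[0] += T0                 # Dirichlet boundary terms moved to the RHS
    rhs[-1] += T0
    a, b, c = -1.0, 2.0, -1.0    # sub-, main and super-diagonal entries
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c / b
    dp[0] = rhs[0] / b
    for i in range(1, n):        # forward elimination
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (rhs[i] - a * dp[i - 1]) / m
    T = [0.0] * n
    T[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        T[i] = dp[i] - cp[i] * T[i + 1]
    return T

T = fuel_slab_temperature()
midpoint = T[24]                             # node at x = L/2 for n = 49
analytic = 600.0 + 2.0e8 * 0.01**2 / (8 * 3.0)  # T0 + q*L^2/(8k)
print(round(midpoint, 1), round(analytic, 1))
```

Because the exact solution here is quadratic, the central difference reproduces it at the nodes; in a real fuel rod, temperature-dependent conductivity and the pellet-clad gap make iteration necessary.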

  15. Hello Ruby adventures in coding

    CERN Document Server

    Liukas, Linda

    2015-01-01

"Code is the 21st century literacy and the need for people to speak the ABCs of Programming is imminent." --Linda Liukas Meet Ruby--a small girl with a huge imagination. In Ruby's world anything is possible if you put your mind to it. When her dad asks her to find five hidden gems, Ruby is determined to solve the puzzle with the help of her new friends, including the Wise Snow Leopard, the Friendly Foxes, and the Messy Robots. As Ruby stomps around her world, kids will be introduced to the basic concepts behind coding and programming through storytelling. Learn how to break big problems into small problems, repeat tasks, look for patterns, create step-by-step plans, and think outside the box. With hands-on activities included in every chapter, future coders will be thrilled to put their own imaginations to work.

  16. Dopamine reward prediction error coding.

    Science.gov (United States)

    Schultz, Wolfram

    2016-03-01

Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
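The scalar form of this signal is easy to state: delta = r − V, the received minus the predicted reward, with the prediction V nudged by a learning rate alpha. A minimal Rescorla-Wagner-style sketch (the parameter values are illustrative):

```python
# Reward-prediction-error learning in its simplest form: the value
# estimate V is updated by delta = r - V, mirroring the dopamine signal
# described above (positive when reward exceeds prediction, zero when
# fully predicted, negative when it falls short).

def learn_value(rewards, alpha=0.1):
    V = 0.0
    errors = []
    for r in rewards:
        delta = r - V          # reward prediction error
        errors.append(delta)
        V += alpha * delta     # move the prediction toward the reward
    return V, errors

V, errors = learn_value([1.0] * 50)
print(round(V, 3))                 # prediction approaches the delivered reward
print(round(errors[0], 3))         # first RPE is large (reward unpredicted)
print(round(errors[-1], 4))        # RPE shrinks as the reward becomes predicted
```

The shrinking error trace is the computational counterpart of the abstract's observation that fully predicted rewards leave dopamine activity at baseline.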

  17. Computer access security code system

    Science.gov (United States)

    Collins, Earl R., Jr. (Inventor)

    1990-01-01

    A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by first groups of code. Once used, subsets are not used again to absolutely defeat unauthorized access by eavesdropping, and the like.
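
    The challenge-response geometry described in this record can be sketched as follows. This is an illustrative toy only (the matrix contents, subset labels, and function names are our own inventions, not the patented system):

```python
import random

# Toy sketch of the rectangle-completion scheme: the challenge is two
# character subsets sharing neither a row nor a column; the correct
# response is the two subsets at the opposite corners of that rectangle.

matrix = [["A1", "B2", "C3", "D4"],
          ["E5", "F6", "G7", "H8"],
          ["J9", "K0", "L1", "M2"],
          ["N3", "P4", "Q5", "R6"]]

def challenge(matrix):
    """Pick two positions that share neither a row nor a column."""
    rows = random.sample(range(len(matrix)), 2)
    cols = random.sample(range(len(matrix[0])), 2)
    return (rows[0], cols[0]), (rows[1], cols[1])

def correct_response(matrix, corner_a, corner_b):
    """Return the subsets completing the rectangle defined by the challenge."""
    (r1, c1), (r2, c2) = corner_a, corner_b
    return {matrix[r1][c2], matrix[r2][c1]}

a, b = challenge(matrix)
print(a, b, "->", correct_response(matrix, a, b))
```

    In the described system, once a pair of subsets has been used it is retired, so an eavesdropper who records a challenge-response exchange cannot replay it.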

  18. Investigation of coupling scheme for neutronic and thermal-hydraulic codes

    International Nuclear Information System (INIS)

    Wang Guoli; Yu Jianfeng; Pen Muzhang; Zhang Yuman.

    1988-01-01

    Recently, a number of coupled neutronics/thermal-hydraulics codes have been used in reactor design and safety analysis; they have been obtained by coupling previous neutronic and thermal-hydraulic codes. The different coupling schemes affect the computer time and the accuracy of the calculation results. Numerical experiments with several different coupling schemes and some heuristic results are described

  19. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.

  20. Politicas de uniformes y codigos de vestuario (Uniforms and Dress-Code Policies). ERIC Digest.

    Science.gov (United States)

    Lumsden, Linda

    This digest in Spanish examines schools' dress-code policies and discusses the legal considerations and research findings about the effects of such changes. Most revisions to dress codes involve the use of uniforms, typically as a way to curb school violence and create a positive learning environment. A recent survey of secondary school principals…

  1. Teaching Billing and Coding to Medical Students: A Pilot Study

    Directory of Open Access Journals (Sweden)

    Jiaxin Tran

    2013-08-01

    Full Text Available Complex billing practices cost the US healthcare system billions of dollars annually. Coding for outpatient office visits [known as Evaluation & Management (E&M) services] is particularly fraught with errors. The best way to ensure proper billing and coding by practicing physicians is to teach this as part of the medical school curriculum. Here, in a pilot study, we show that medical students can learn the basic principles well from lectures. This approach is easy to implement into a medical school curriculum.

  2. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. The automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the foxbase language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist Hospital, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself among the upper- and lower-level codes of the selected one, which were simultaneously displayed on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. By a similar fashion of organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by this same program, and incorporation of this program into another data processing program was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adjustment for another program. Therefore, this program can be used for automation of routine work in the department of radiology
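
    The two-step lookup described above can be sketched as follows (a hypothetical reconstruction: the dictionary contents and names are invented for illustration; only the code format '131.3661' comes from the record):

```python
# Hypothetical sketch of the two-step ACR lookup: the organ code is found
# first, and its first digit selects the matching pathology code file.

organ_codes = {"chest": "131"}                    # organ name -> organ code
pathology_files = {"1": {"pneumonia": "3661"}}    # keyed by organ code's first digit

def acr_code(organ_name, pathology_name):
    organ = organ_codes[organ_name]
    pathology = pathology_files[organ[0]][pathology_name]
    return f"{organ}.{pathology}"

print(acr_code("chest", "pneumonia"))   # -> 131.3661
```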

  3. Libre knowledge, libre learning and global development

    CSIR Research Space (South Africa)

    Tucker, KC

    2006-10-01

    Full Text Available Findings on formal learning: in comparison with formal ICT courses, FLOSS provides a better, practical learning environment for many technical skills, including writing re-usable code and debugging; working with, running and maintaining complex software systems; basic/introductory programming skills; looking for and fixing bugs; becoming familiar with different programming languages; writing code in a way that it can be re-used; designing modular code; documenting code; and creating new algorithms.

  4. Conceptual aspects: analyses law, ethical, human, technical, social factors of development ICT, e-learning and intercultural development in different countries setting out the previous new theoretical model and preliminary findings

    NARCIS (Netherlands)

    Kommers, Petrus A.M.; Smyrnova-Trybulska, Eugenia; Morze, Natalia; Issa, Tomayess; Issa, Theodora

    2015-01-01

    This paper, prepared by an international team of authors focuses on the conceptual aspects: analyses law, ethical, human, technical, social factors of ICT development, e-learning and intercultural development in different countries, setting out the previous and new theoretical model and preliminary

  5. High-Fidelity Coding with Correlated Neurons

    Science.gov (United States)

    da Silveira, Rava Azeredo; Berry, Michael J.

    2014-01-01

    Positive correlations in the activity of neurons are widely observed in the brain. Previous studies have shown these correlations to be detrimental to the fidelity of population codes, or at best marginally favorable compared to independent codes. Here, we show that positive correlations can enhance coding performance by astronomical factors. Specifically, the probability of discrimination error can be suppressed by many orders of magnitude. Likewise, the number of stimuli encoded—the capacity—can be enhanced more than tenfold. These effects do not necessitate unrealistic correlation values, and can occur for populations with a few tens of neurons. We further show that both effects benefit from heterogeneity commonly seen in population activity. Error suppression and capacity enhancement rest upon a pattern of correlation. Tuning of one or several effective parameters can yield a limit of perfect coding: the corresponding pattern of positive correlation leads to a ‘lock-in’ of response probabilities that eliminates variability in the subspace relevant for stimulus discrimination. We discuss the nature of this pattern and we suggest experimental tests to identify it. PMID:25412463

  6. Mathematics and Science Learning Opportunities in Preschool Classrooms

    Science.gov (United States)

    Piasta, Shayne B.; Pelatti, Christina Yeager; Miller, Heather Lynnine

    2014-01-01

    Research findings The present study observed and coded instruction in 65 preschool classrooms to examine (a) overall amounts and (b) types of mathematics and science learning opportunities experienced by preschool children as well as (c) the extent to which these opportunities were associated with classroom and program characteristics. Results indicated that children were afforded an average of 24 and 26 minutes of mathematics and science learning opportunities, respectively, corresponding to spending approximately 25% of total instructional time in each domain. Considerable variability existed, however, in the amounts and types of mathematics and science opportunities provided to children in their classrooms; to some extent, this variability was associated with teachers’ years of experience, teachers’ levels of education, and the socioeconomic status of children served in the program. Practice/policy Although results suggest greater integration of mathematics and science in preschool classrooms than previously established, there was considerable diversity in the amounts and types of learning opportunities provided in preschool classrooms. Affording mathematics and science experiences to all preschool children, as outlined in professional and state standards, may require additional professional development aimed at increasing preschool teachers’ understanding and implementation of learning opportunities in these two domains in their classrooms. PMID:25489205

  7. Optimal Codes for the Burst Erasure Channel

    Science.gov (United States)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. 
The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure
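
    The block-interleaving idea behind this near-optimal burst protection can be sketched as follows (an illustrative toy, not the reported design; the parameters are ours):

```python
# Sketch of block interleaving against burst erasures: codewords are sent
# column-by-column, so a contiguous burst of up to d erased symbols hits
# each of the d interleaved codewords at most once. A code correcting one
# erasure per codeword (e.g., a single parity check code) then recovers all.

def interleave(codewords):
    """Transmit the i-th symbol of every codeword before the (i+1)-th."""
    return [cw[i] for i in range(len(codewords[0])) for cw in codewords]

def erasures_per_codeword(depth, burst_start, burst_len):
    """Erasures each codeword sees when a burst hits the interleaved stream."""
    hits = [0] * depth
    for pos in range(burst_start, burst_start + burst_len):
        hits[pos % depth] += 1
    return hits

print(interleave([[1, 2, 3], [4, 5, 6]]))     # -> [1, 4, 2, 5, 3, 6]
print(erasures_per_codeword(4, 5, 4))         # -> [1, 1, 1, 1]
```

    The efficiency measure defined in the record then compares the longest burst a scheme like this can absorb with the theoretical upper limit at the same length and rate.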

  8. Using narrative-based design scaffolds within a mobile learning environment to support learning outdoors with young children

    Science.gov (United States)

    Seely, Brian J.

    This study aims to advance learning outdoors with mobile devices. As part of the ongoing Tree Investigators design-based research study, this research investigated a mobile application to support observation, identification, and explanation of the tree life cycle within an authentic, outdoor setting. Recognizing the scientific and conceptual complexity of this topic for young children, the design incorporated technological and design scaffolds within a narrative-based learning environment. In an effort to support learning, 14 participants (aged 5-9) were guided through the mobile app on tree life cycles by a comic-strip pedagogical agent, "Nutty the Squirrel", as they looked to explore and understand through guided observational practices and artifact creation tasks. In comparison to previous iterations of this DBR study, the overall patterns of talk found in this study were similar, with perceptual and conceptual talk being the first and second most frequently coded categories, respectively. However, this study coded considerably more instances of affective talk. This finding of the higher frequency of affective talk could possibly be explained by the relatively younger age of this iteration's participants, in conjunction with the introduced pedagogical agent, who elicited playfulness and delight from the children. The results also indicated a significant improvement when comparing the pretest results (mean score of .86) with the posttest results (mean score of 4.07, out of 5). Learners were not only able to recall the phases of a tree life cycle, but list them in the correct order. The comparison reports a significant increase, showing evidence of increased knowledge and appropriation of scientific vocabulary. The finding suggests the narrative was effective in structuring the complex material into a story for sense making. 
Future research with narratives should consider a design to promote learner agency through more interactions with the pedagogical agent and a

  9. Phonological, visual, and semantic coding strategies and children's short-term picture memory span.

    Science.gov (United States)

    Henry, Lucy A; Messer, David; Luger-Klein, Scarlett; Crane, Laura

    2012-01-01

    Three experiments addressed controversies in the previous literature on the development of phonological and other forms of short-term memory coding in children, using assessments of picture memory span that ruled out potentially confounding effects of verbal input and output. Picture materials were varied in terms of phonological similarity, visual similarity, semantic similarity, and word length. Older children (6/8-year-olds), but not younger children (4/5-year-olds), demonstrated robust and consistent phonological similarity and word length effects, indicating that they were using phonological coding strategies. This confirmed findings initially reported by Conrad (1971), but subsequently questioned by other authors. However, in contrast to some previous research, little evidence was found for a distinct visual coding stage at 4 years, casting doubt on assumptions that this is a developmental stage that consistently precedes phonological coding. There was some evidence for a dual visual and phonological coding stage prior to exclusive use of phonological coding at around 5-6 years. Evidence for semantic similarity effects was limited, suggesting that semantic coding is not a key method by which young children recall lists of pictures.

  10. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in a temporal discrimination task, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, the multiple-coding and the single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5s, the geometric mean of 2s and 6s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.
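
    The two hypotheses can be contrasted as explicit response rules (our own formalization; the assignment of red to the 2-s sample and green to the longer samples is an arbitrary assumption, since the record does not say which hue went with which duration):

```python
# Two candidate coding strategies for the matching-to-sample task.

def multiple_coding(sample_s):
    """One learned rule per sample duration; no rule fires otherwise."""
    rules = {2: "red", 6: "green", 18: "green"}
    return rules.get(sample_s)

def single_code_default(sample_s):
    """One rule for the 2-s sample, a default rule for everything else."""
    return "red" if sample_s == 2 else "green"

# The hypotheses agree on trained durations but diverge on novel ones,
# which is what the generalization and no-sample tests exploit.
print(multiple_coding(3.5), single_code_default(3.5))   # -> None green
```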

  11. nRC: non-coding RNA Classifier based on structural features.

    Science.gov (United States)

    Fiannaca, Antonino; La Rosa, Massimo; La Paglia, Laura; Rizzo, Riccardo; Urso, Alfonso

    2017-01-01

    Non-coding RNAs (ncRNAs) are small non-coding sequences involved in gene expression regulation of many biological processes and diseases. The recent discovery of a large set of different ncRNAs with biologically relevant roles has opened the way to develop methods able to discriminate between the different ncRNA classes. Moreover, the lack of knowledge about the complete mechanisms in regulative processes, together with the development of high-throughput technologies, has required the help of bioinformatics tools in providing biologists and clinicians with a deeper comprehension of the functional roles of ncRNAs. In this work, we introduce a new ncRNA classification tool, nRC (non-coding RNA Classifier). Our approach is based on feature extraction from the ncRNA secondary structure together with a supervised classification algorithm implementing a deep learning architecture based on convolutional neural networks. We tested our approach for the classification of 13 different ncRNA classes. We obtained classification scores, using the most common statistical measures. In particular, we reach an accuracy and sensitivity score of about 74%. The proposed method outperforms other similar classification methods based on secondary structure features and machine learning algorithms, including the RNAcon tool that, to date, is the reference classifier. The nRC tool is freely available as a docker image at https://hub.docker.com/r/tblab/nrc/. The source code of the nRC tool is also available at https://github.com/IcarPA-TBlab/nrc.

  12. RunJumpCode: An Educational Game for Educating Programming

    Science.gov (United States)

    Hinds, Matthew; Baghaei, Nilufar; Ragon, Pedrito; Lambert, Jonathon; Rajakaruna, Tharindu; Houghton, Travers; Dacey, Simon

    2017-01-01

    Programming promotes critical thinking, problem solving and analytic skills through creating solutions that can solve everyday problems. However, learning programming can be a daunting experience for a lot of students. "RunJumpCode" is an educational 2D platformer video game, designed and developed in Unity, to teach players the…

  13. Code Cactus; Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: -1) Analysis of thermal experiments on a water loop at high or low pressure; steady state or transient behavior; -2) Analysis of thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates: - Flowrate in parallel channels coupled or not by conduction across plates, with conditions of pressure drops or flowrate, variable or not with respect to time is given; the power can be coupled to reactor kinetics calculation or supplied by the code user. The code, containing a schematic representation of safety rod behavior, is a one dimensional, multi-channel code, and has as its complement (FLID), a one-channel, two-dimensional code. (authors) [French] Ce code permet de traiter les problemes ci-dessous: 1. Depouillement d'essais thermiques sur boucle a eau, haute ou basse pression, en regime permanent ou transitoire; 2. Etudes thermiques et hydrauliques de reacteurs a eau, a plaques, a haute ou basse pression, ebullition permise: - repartition entre canaux paralleles, couples on non par conduction a travers plaques, pour des conditions de debit ou de pertes de charge imposees, variables ou non dans le temps; - la puissance peut etre couplee a la neutronique et une representation schematique des actions de securite est prevue. Ce code (Cactus) a une dimension d'espace et plusieurs canaux, a pour complement Flid qui traite l'etude d'un seul canal a deux dimensions. (auteurs)

  14. Game-Based Learning Theory

    Science.gov (United States)

    Laughlin, Daniel

    2008-01-01

    Persistent Immersive Synthetic Environments (PISE) are not just connection points, they are meeting places. They are the new public squares, village centers, malt shops, malls and pubs all rolled into one. They come with a sense of "thereness" that engages the mind like a real place does. Learning starts as real code. The code defines "objects." The objects exist in computer space, known as the "grid." The objects and space combine to create a "place." A "world" is created. Before long, the grid and code become obscure, and the "world" maintains focus.

  15. CACTI: free, open-source software for the sequential coding of behavioral interactions.

    Science.gov (United States)

    Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B

    2012-01-01

    The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.

  16. Game E-Learning Code Master Dengan Konsep Mmorpg Menggunakan Adobe Flex 3

    OpenAIRE

    Fredy Purnomo; Monika Leslivania; Daniel Daniel; Lisye Mareta Cahya

    2010-01-01

    The research objective is to design a web-based e-learning game that could be used as a learning facility for C language programming and as an online game, so that it can easily be enjoyed by everybody on the internet. Flex is used in this online game to implement the RIA (Rich Internet Application) concept, so that the e-learning process is more interesting and interactive. The e-learning game is also designed on the MMORPG (Massively Multiplayer Online Role Playing Game) concept. The research ...

  17. It takes two—coincidence coding within the dual olfactory pathway of the honeybee

    OpenAIRE

    Brill, Martin F.; Meyer, Anneke; Rössler, Wolfgang

    2015-01-01

    To rapidly process biologically relevant stimuli, sensory systems have developed a broad variety of coding mechanisms like parallel processing and coincidence detection. Parallel processing (e.g., in the visual system), increases both computational capacity and processing speed by simultaneously coding different aspects of the same stimulus. Coincidence detection is an efficient way to integrate information from different sources. Coincidence has been shown to promote associative learning and...

  18. Greater physician involvement improves coding outcomes in endobronchial ultrasound-guided transbronchial needle aspiration procedures.

    Science.gov (United States)

    Pillai, Anilkumar; Medford, Andrew R L

    2013-01-01

    Correct coding is essential for accurate reimbursement for clinical activity. Published data confirm that significant aberrations in coding occur, leading to considerable financial inaccuracies especially in interventional procedures such as endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). Previous data reported a 15% coding error for EBUS-TBNA in a U.K. service. We hypothesised that greater physician involvement with coders would reduce EBUS-TBNA coding errors and financial disparity. The study was done as a prospective cohort study in the tertiary EBUS-TBNA service in Bristol. 165 consecutive patients between October 2009 and March 2012 underwent EBUS-TBNA for evaluation of unexplained mediastinal adenopathy on computed tomography. The chief coder was prospectively electronically informed of all procedures and cross-checked on a prospective database and by Trust Informatics. Cost and coding analysis was performed using the 2010-2011 tariffs. All 165 procedures (100%) were coded correctly as verified by Trust Informatics. This compares favourably with the 14.4% coding inaccuracy rate for EBUS-TBNA in a previous U.K. prospective cohort study [odds ratio 201.1 (1.1-357.5), p = 0.006]. Projected income loss was GBP 40,000 per year in the previous study, compared to a GBP 492,195 income here with no coding-attributable loss in revenue. Greater physician engagement with coders prevents coding errors and financial losses which can be significant especially in interventional specialties. The intervention can be as cheap, quick and simple as a prospective email to the coding team with cross-checks by Trust Informatics and against a procedural database. We suggest that all specialties should engage more with their coders using such a simple intervention to prevent revenue losses. Copyright © 2013 S. Karger AG, Basel.

  19. Probabilistic Amplitude Shaping With Hard Decision Decoding and Staircase Codes

    Science.gov (United States)

    Sheikh, Alireza; Amat, Alexandre Graell i.; Liva, Gianluigi; Steiner, Fabian

    2018-05-01

    We consider probabilistic amplitude shaping (PAS) as a means of increasing the spectral efficiency of fiber-optic communication systems. In contrast to previous works in the literature, we consider probabilistic shaping with hard decision decoding (HDD). In particular, we apply the PAS recently introduced by Böcherer et al. to a coded modulation (CM) scheme with bit-wise HDD that uses a staircase code as the forward error correction code. We show that the CM scheme with PAS and staircase codes yields significant gains in spectral efficiency with respect to the baseline scheme using a staircase code and a standard constellation with uniformly distributed signal points. Using a single staircase code, the proposed scheme achieves performance within 0.57-1.44 dB of the corresponding achievable information rate for a wide range of spectral efficiencies.

  20. Particle tracing code for multispecies gas

    International Nuclear Information System (INIS)

    Eaton, R.R.; Fox, R.L.; Vandevender, W.H.

    1979-06-01

    Details are presented for the development of a computer code designed to calculate the flow of a multispecies gas mixture using particle tracing techniques. The current technique eliminates the need for a full simulation by utilizing local time averaged velocity distribution functions to obtain the dynamic properties for probable collision partners. The development of this concept reduces statistical scatter experienced in conventional Monte Carlo simulations. The technique is applicable to flow problems involving gas mixtures with disparate masses and trace constituents in the Knudsen number, Kn, range from 1.0 to less than 0.01. The resulting code has previously been used to analyze several aerodynamic isotope enrichment devices

  1. Blind Recognition of Binary BCH Codes for Cognitive Radios

    Directory of Open Access Journals (Sweden)

    Jing Zhou

    2016-01-01

    Full Text Available A novel algorithm for blind recognition of Bose-Chaudhuri-Hocquenghem (BCH) codes is proposed to solve the problem of Adaptive Coding and Modulation (ACM) in cognitive radio systems. The recognition algorithm is based on soft-decision situations. The code length is first estimated by comparing the Log-Likelihood Ratios (LLRs) of the syndromes, which are obtained according to the minimum binary parity-check matrices of different primitive polynomials. After that, by comparing the LLRs of different minimal polynomials, the code roots and generator polynomial are reconstructed. Compared with previous approaches, our algorithm yields better performance even at very low Signal-to-Noise Ratios (SNRs), with lower calculation complexity. Simulation results show the efficiency of the proposed algorithm.

  2. DLVM: A modern compiler infrastructure for deep learning systems

    OpenAIRE

    Wei, Richard; Schwartz, Lane; Adve, Vikram

    2017-01-01

    Deep learning software demands reliability and performance. However, many of the existing deep learning frameworks are software libraries that act as an unsafe DSL in Python and a computation graph interpreter. We present DLVM, a design and implementation of a compiler infrastructure with a linear algebra intermediate representation, algorithmic differentiation by adjoint code generation, domain-specific optimizations and a code generator targeting GPU via LLVM. Designed as a modern compiler ...

  3. Edge-preserving Intra Depth Coding based on Context-coding and H.264/AVC

    DEFF Research Database (Denmark)

    Zamarin, Marco; Salmistraro, Matteo; Forchhammer, Søren

    2013-01-01

    Depth map coding plays a crucial role in 3D Video communication systems based on the “Multi-view Video plus Depth” representation, as view synthesis performance is strongly affected by the accuracy of depth information, especially at edges in the depth map image. In this paper an efficient algorithm for edge-preserving intra depth compression based on H.264/AVC is presented. The proposed method introduces a new Intra mode specifically targeted to depth macroblocks with arbitrarily shaped edges, which are typically not efficiently represented by the DCT. Edge macroblocks are partitioned into two regions, each approximated by a flat surface. Edge information is encoded by means of context-coding with an adaptive template. As a novel element, the proposed method allows exploiting the edge structure of previously encoded edge macroblocks during the context-coding step to further increase compression.

  4. An Efficient Platform for the Automatic Extraction of Patterns in Native Code

    Directory of Open Access Journals (Sweden)

    Javier Escalada

    2017-01-01

    Full Text Available Different software tools, such as decompilers, code quality analyzers, recognizers of packed executable files, authorship analyzers, and malware detectors, search for patterns in binary code. The use of machine learning algorithms, trained with programs taken from the huge number of applications in the existing open source code repositories, allows finding patterns not detected with the manual approach. To this end, we have created a versatile platform for the automatic extraction of patterns from native code, capable of processing big binary files. Its implementation has been parallelized, providing important runtime performance benefits for multicore architectures. Compared to single-processor execution, the best configuration obtains an average performance improvement of a factor of 3.5, out of a maximum theoretical gain of a factor of 4.

  5. Variability in Dopamine Genes Dissociates Model-Based and Model-Free Reinforcement Learning.

    Science.gov (United States)

    Doll, Bradley B; Bath, Kevin G; Daw, Nathaniel D; Frank, Michael J

    2016-01-27

    Considerable evidence suggests that multiple learning systems can drive behavior. Choice can proceed reflexively from previous actions and their associated outcomes, as captured by "model-free" learning algorithms, or flexibly from prospective consideration of outcomes that might occur, as captured by "model-based" learning algorithms. However, differential contributions of dopamine to these systems are poorly understood. Dopamine is widely thought to support model-free learning by modulating plasticity in striatum. Model-based learning may also be affected by these striatal effects, or by other dopaminergic effects elsewhere, notably on prefrontal working memory function. Indeed, prominent demonstrations linking striatal dopamine to putatively model-free learning did not rule out model-based effects, whereas other studies have reported dopaminergic modulation of verifiably model-based learning, but without distinguishing a prefrontal versus striatal locus. To clarify the relationships between dopamine, neural systems, and learning strategies, we combine a genetic association approach in humans with two well-studied reinforcement learning tasks: one isolating model-based from model-free behavior and the other sensitive to key aspects of striatal plasticity. Prefrontal function was indexed by a polymorphism in the COMT gene, differences of which reflect dopamine levels in the prefrontal cortex. This polymorphism has been associated with differences in prefrontal activity and working memory. Striatal function was indexed by a gene coding for DARPP-32, which is densely expressed in the striatum where it is necessary for synaptic plasticity. We found evidence for our hypothesis that variations in prefrontal dopamine relate to model-based learning, whereas variations in striatal dopamine function relate to model-free learning. 

  6. Conflict free network coding for distributed storage networks

    KAUST Repository

    Al-Habob, Ahmed A.; Sorour, Sameh; Aboutorab, Neda; Sadeghi, Parastoo

    2015-01-01

    © 2015 IEEE. In this paper, we design a conflict free instantly decodable network coding (IDNC) solution for file download from distributed storage servers. Considering previously downloaded files at the clients from these servers as side

  7. Learning "While" Working: Success Stories on Workplace Learning in Europe

    Science.gov (United States)

    Lardinois, Rocio

    2011-01-01

    Cedefop's report "Learning while working: success stories on workplace learning in Europe" presents an overview of key trends in adult learning in the workplace. It takes stock of previous research carried out by Cedefop between 2003 and 2010 on key topics for adult learning: governance and the learning regions; social partner roles in…

  8. Lessons learned in the verification, validation and application of a coupled heat and fluid flow code

    International Nuclear Information System (INIS)

    Tsang, C.F.

    1986-01-01

    A summary is given of the author's recent studies in the verification, validation and application of a coupled heat and fluid flow code. Verification has been done against eight analytic and semi-analytic solutions. These solutions include those involving thermal buoyancy flow and fracture flow. Comprehensive field validation studies over a period of four years are discussed. The studies are divided into three stages: (1) history matching, (2) double-blind prediction and confirmation, (3) design optimization. At each stage, parameter sensitivity studies are performed. To study the applications of mathematical models, a problem proposed by the International Energy Agency (IEA) is solved using this verified and validated numerical model as well as two simpler models. One of the simpler models is a semi-analytic method assuming the uncoupling of the heat and fluid flow processes. The other is a graphical method based on a large number of approximations. Variations are added to the basic IEA problem to point out the limits of ranges of applications of each model. A number of lessons are learned from the above investigations. These are listed and discussed

  9. Who Deserves My Trust? Cue-Elicited Feedback Negativity Tracks Reputation Learning in Repeated Social Interactions.

    Science.gov (United States)

    Li, Diandian; Meng, Liang; Ma, Qingguo

    2017-01-01

    Trust and trustworthiness contribute to reciprocal behavior and social relationship development. To make better decisions, people need to evaluate others' trustworthiness. They often assess this kind of reputation by learning through repeated social interactions. The present event-related potential (ERP) study explored the reputation learning process in a repeated trust game where subjects made multi-round decisions of investment to different partners. We found that subjects gradually learned to discriminate trustworthy partners from untrustworthy ones based on how often their partners reciprocated the investment, which was indicated by their own investment decisions. Besides, electrophysiological data showed that the faces of the untrustworthy partners induced larger feedback negativity (FN) amplitude than those of the trustworthy partners, but only in the late phase of the game. The ERP results corresponded with the behavioral pattern and revealed that the learned trustworthiness differentiation was coded by the cue-elicited FN component. Consistent with previous research, our findings suggest that the anterior cue-elicited FN reflects the reputation appraisal and tracks the reputation learning process in social interactions.

  10. Who Deserves My Trust? Cue-Elicited Feedback Negativity Tracks Reputation Learning in Repeated Social Interactions

    Directory of Open Access Journals (Sweden)

    Diandian Li

    2017-06-01

    Full Text Available Trust and trustworthiness contribute to reciprocal behavior and social relationship development. To make better decisions, people need to evaluate others’ trustworthiness. They often assess this kind of reputation by learning through repeated social interactions. The present event-related potential (ERP) study explored the reputation learning process in a repeated trust game where subjects made multi-round decisions of investment to different partners. We found that subjects gradually learned to discriminate trustworthy partners from untrustworthy ones based on how often their partners reciprocated the investment, which was indicated by their own investment decisions. Besides, electrophysiological data showed that the faces of the untrustworthy partners induced larger feedback negativity (FN) amplitude than those of the trustworthy partners, but only in the late phase of the game. The ERP results corresponded with the behavioral pattern and revealed that the learned trustworthiness differentiation was coded by the cue-elicited FN component. Consistent with previous research, our findings suggest that the anterior cue-elicited FN reflects the reputation appraisal and tracks the reputation learning process in social interactions.

  11. Evaluation Codes from an Affine Variety Code Perspective

    DEFF Research Database (Denmark)

    Geil, Hans Olav

    2008-01-01

    Evaluation codes (also called order domain codes) are traditionally introduced as generalized one-point geometric Goppa codes. In the present paper we will give a new point of view on evaluation codes by introducing them instead as particularly nice examples of affine variety codes. Our study...... includes a reformulation of the usual methods to estimate the minimum distances of evaluation codes into the setting of affine variety codes. Finally we describe the connection to the theory of one-point geometric Goppa codes. Contents 4.1 Introduction...... . . . . . . . . . . . . . . . . . . . . . . . 171 4.9 Codes from order domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173 4.10 One-point geometric Goppa codes . . . . . . . . . . . . . . . . . . . . . . . . 176 4.11 Bibliographical Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178 References...

  12. TERRESTRIAL LASER SCANNER DATA DENOISING BY DICTIONARY LEARNING OF SPARSE CODING

    Directory of Open Access Journals (Sweden)

    E. Smigiel

    2013-07-01

    Full Text Available Point cloud processing is basically a signal processing issue. The huge amount of data collected with Terrestrial Laser Scanners or photogrammetry techniques faces the classical questions of signal and image processing. Among others, denoising and compression have to be addressed in this context. One therefore has to turn to signal theory, which can guide good practice or inspire new ideas from the latest developments of the field. The literature has shown for decades how strong and dynamic the theoretical field is and how efficient the derived algorithms have become. For about ten years, a new technique has been available: known as compressive sensing or compressive sampling, it is based first on sparsity, an interesting characteristic of many natural signals. Based on this concept, many denoising and compression techniques have demonstrated their efficiency. Sparsity can also be seen as redundancy removal from natural signals. Combined with incoherent measurements, compressive sensing uses the idea that redundancy can be removed at the very early stage of sampling: instead of sampling the signal at a high rate and removing redundancy in a second stage, the acquisition stage itself may be run with redundancy removal. This paper first gives some theoretical aspects of these ideas with simple mathematics. Then, the idea of compressive sensing for a Terrestrial Laser Scanner is examined as a potential research question and, finally, a denoising scheme based on dictionary learning of sparse coding is evaluated. Both the theoretical discussion and the obtained results show that it is worth staying close to signal processing theory and its community to benefit from its latest developments.
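
    The core sparsity-based denoising step can be illustrated in a few lines. This sketch is a deliberate simplification of the abstract's method: instead of learning a dictionary from the point cloud (e.g. with K-SVD), it assumes a fixed orthonormal "dictionary" and denoises by soft-thresholding the coefficients; the signal, noise level, and threshold are invented for the example.

```python
import numpy as np

def soft_threshold(x, t):
    # Shrink coefficients toward zero; small, noise-dominated ones vanish.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(signal, dictionary, t):
    # Analysis with an orthonormal dictionary, shrinkage, then synthesis.
    coeffs = dictionary.T @ signal
    return dictionary @ soft_threshold(coeffs, t)

rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.normal(size=(64, 64)))  # fixed orthonormal "dictionary"
clean = D[:, :3] @ np.array([5.0, -4.0, 3.0])   # sparse in D by construction
noisy = clean + 0.1 * rng.normal(size=64)
restored = denoise(noisy, D, t=0.25)
```

    Because the clean signal concentrates in three coefficients while the noise spreads thinly over all 64, thresholding suppresses mostly noise, which is exactly the redundancy-removal intuition the abstract appeals to.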

  13. 1995 building energy codes and standards workshops: Summary and documentation

    Energy Technology Data Exchange (ETDEWEB)

    Sandahl, L.J.; Shankle, D.L.

    1996-02-01

    During the spring of 1995, Pacific Northwest National Laboratory (PNNL) conducted four two-day Regional Building Energy Codes and Standards workshops across the US. Workshops were held in Chicago, Denver, Rhode Island, and Atlanta. The workshops were designed to benefit state-level officials including staff of building code commissions, energy offices, public utility commissions, and others involved with adopting/updating, implementing, and enforcing building energy codes in their states. The workshops provided an opportunity for state and other officials to learn more about residential and commercial building energy codes and standards, the role of the US Department of Energy and the Building Standards and Guidelines Program at Pacific Northwest National Laboratory, Home Energy Rating Systems (HERS), Energy Efficient Mortgages (EEM), training issues, and other topics related to the development, adoption, implementation, and enforcement of building energy codes. Participants heard success stories, got tips on enforcement training, and received technical support materials. In addition to receiving information on the above topics, workshop participants had an opportunity to provide input on code adoption issues, building industry training issues, building design issues, and exemplary programs across the US. This paper documents the workshop planning, findings, and follow-up processes.

  14. RITA, a promising Monte Carlo code for recoil implantation

    International Nuclear Information System (INIS)

    Desalvo, A.; Rosa, R.

    1982-01-01

    A computer code previously set up to simulate ion penetration in amorphous solids has been extended to handle recoil phenomena. Preliminary results are compared with existing experimental data. (author)
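
    The general flavor of such a simulation can be conveyed with a toy one-dimensional Monte Carlo: an ion advances by random free flights and loses a random amount of energy per collision until it stops. This is only an illustration of the Monte Carlo transport idea; the physics (exponential free paths, uniform energy loss, all parameter values) is an assumption for the example and is not taken from the RITA code.

```python
import random

def ion_range(energy, stopping=1.0, mean_free_path=0.1,
              rng=random.Random(42)):  # shared seeded RNG, deliberate for reproducibility
    """Toy 1D Monte Carlo: exponentially distributed free flights, a
    random fraction of `stopping` lost per collision, until the energy
    is spent. Returns the penetration depth."""
    depth = 0.0
    while energy > 0:
        depth += rng.expovariate(1.0 / mean_free_path)
        energy -= stopping * rng.random()
    return depth

depths = [ion_range(10.0) for _ in range(1000)]
mean_depth = sum(depths) / len(depths)   # expected near 2.0 for these parameters
```

    Averaging many such histories gives range distributions; a recoil extension would additionally spawn secondary trajectories at each collision.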

  15. Exploring the factors influencing clinical students' self-regulated learning.

    Science.gov (United States)

    Berkhout, Joris J; Helmich, Esther; Teunissen, Pim W; van den Berg, Joost W; van der Vleuten, Cees P M; Jaarsma, A Debbie C

    2015-06-01

    The importance of self-regulated learning (SRL) has been broadly recognised by medical education institutions and regulatory bodies. Supporting the development of SRL skills has proven difficult because self-regulation is a complex interactive process and we know relatively little about the factors influencing this process in real practice settings. The aim of our study was therefore to identify factors that support or hamper medical students' SRL in a clinical context. We conducted a constructivist grounded theory study using semi-structured interviews with 17 medical students from two universities enrolled in clerkships. Participants were purposively sampled to ensure variety in age, gender, experience and current clerkship. The Day Reconstruction Method was used to help participants remember their activities of the previous day. The interviews were transcribed verbatim and analysed iteratively using constant comparison and open, axial and interpretive coding. Self-regulated learning by students in the clinical environment was influenced by the specific goals perceived by students, the autonomy they experienced, the learning opportunities they were given or created themselves, and the anticipated outcomes of an activity. All of these factors were affected by personal, contextual and social attributes. Self-regulated learning of medical students in the clinical environment is different for every individual. The factors influencing this process are affected by personal, social and contextual attributes. Some of these are similar to those known from previous research in classroom settings, but others are unique to the clinical environment and include the facilities available, the role of patients, and social relationships pertaining to peers and other hospital staff. To better support students' SRL, we believe it is important to increase students' metacognitive awareness and to offer students more tailored learning opportunities. © 2015 John Wiley & Sons Ltd.

  16. Learning Illustrated: An Exploratory Cross-Sectional Drawing Analysis of Students' Conceptions of Learning

    Science.gov (United States)

    Hsieh, Wen-Min; Tsai, Chin-Chung

    2018-01-01

    Using the draw-a-picture technique, the authors explored the learning conceptions held by students across grade levels. A total of 1,067 Taiwanese students in Grades 2, 4, 6, 8, 10, and 12 participated in this study. Participants were asked to use drawing to illustrate how they conceptualize learning. A coding checklist was developed to analyze…

  17. Individual Learning Accounts: A Strategy for Lifelong Learning?

    Science.gov (United States)

    Renkema, Albert

    2006-01-01

    Purpose: Since the end of the previous century social partners in different branches of industry have laid down measures to stimulate individual learning and competence development of workers in collective labour agreements. Special attention is given to stimulating learning demand among traditional non-participants to lifelong learning, such as…

  18. Effect of Inter-sentential vs Intra-sentential Code-Switching: With a Focus on Past Tense

    Directory of Open Access Journals (Sweden)

    Hanieh Kashi

    2018-03-01

    Full Text Available The current study aimed to compare the effects of inter-sentential and intra-sentential code-switching on learning the past tense. Initially, through non-random convenience sampling, the researcher chose 90 female EFL learners at the elementary level. Next, the Key English Test (KET) was administered to the 90 learners and the results were used to select 60 participants for the purpose of this study. The participants were then divided into two groups, each consisting of 30 learners. Afterwards, a grammar pretest of 30 items focusing on the past simple tense was given to both groups. Following that, grammatical explanations were provided to the two groups for ten sessions using code-switching. The first experimental group received inter-sentential code-switching in line with Reyes (2004): a switch between two languages, where a sentence in one of the languages is completed and the next sentence starts in the other language (Reyes, 2004). In the second experimental group, in line with Reyes (2004), the switching occurred within a sentence. The results of the statistical analysis indicated that inter-sentential code-switching proved more effective than intra-sentential code-switching for the learning of the past tense by EFL learners. Based on the findings of the present study, EFL teachers are encouraged to use inter-sentential rather than intra-sentential code-switching when teaching grammar.

  19. An Optimal Linear Coding for Index Coding Problem

    OpenAIRE

    Pezeshkpour, Pouya

    2015-01-01

    An optimal linear coding solution for the index coding problem is established. Instead of the network coding approach, which focuses on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can easily be applied to any number of messages. The importance of this work lies mostly in the use of the presented coding in the groupcast index coding ...
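
    The basic intuition behind index coding, which the linear program above generalizes, is that one coded transmission can serve several clients with complementary side information. A minimal sketch over GF(2), with a two-client setup invented for illustration (the paper's actual coding program is not reproduced here):

```python
def xor_bytes(a, b):
    # Bytewise XOR, i.e. addition over GF(2).
    return bytes(x ^ y for x, y in zip(a, b))

# Client 1 wants m2 and already holds m1 as side information;
# client 2 wants m1 and holds m2. A single broadcast of m1 XOR m2
# satisfies both, instead of two separate transmissions.
m1, m2 = b"\x0f", b"\xf0"
coded = xor_bytes(m1, m2)
decoded_by_1 = xor_bytes(coded, m1)  # client 1 recovers m2
decoded_by_2 = xor_bytes(coded, m2)  # client 2 recovers m1
```

    Linear index codes choose such combinations systematically, so that every client's side information cancels all but its wanted message.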

  20. Probabilistic hypergraph based hash codes for social image search

    Institute of Scientific and Technical Information of China (English)

    Yi XIE; Hui-min YU; Roland HU

    2014-01-01

    With the rapid development of the Internet, recent years have seen the explosive growth of social media. This brings great challenges in performing efficient and accurate image retrieval on a large scale. Recent work shows that using hashing methods to embed high-dimensional image features and tag information into Hamming space provides a powerful way to index large collections of social images. By learning hash codes through a spectral graph partitioning algorithm, spectral hashing (SH) has shown promising performance among various hashing approaches. However, it is incomplete to model the relations among images only by pairwise simple graphs which ignore the relationship in a higher order. In this paper, we utilize a probabilistic hypergraph model to learn hash codes for social image retrieval. A probabilistic hypergraph model offers a higher order representation among social images by connecting more than two images in one hyperedge. Unlike a normal hypergraph model, a probabilistic hypergraph model considers not only the grouping information, but also the similarities between vertices in hyperedges. Experiments on Flickr image datasets verify the performance of our proposed approach.
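
    The underlying Hamming-space indexing idea can be shown concretely. As a hedged sketch, this uses simple random-projection (LSH-style) binary codes rather than spectral or hypergraph hashing, and the toy "image features" are invented: the point is only that similar features map to codes at small Hamming distance.

```python
import numpy as np

def binary_codes(features, projections):
    # Sign of random projections gives a compact binary code (LSH-style).
    return (features @ projections > 0).astype(np.uint8)

def hamming(a, b):
    # Hamming distance between two binary codes.
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(1)
P = rng.normal(size=(8, 32))              # 8-D features -> 32-bit codes
query = rng.normal(size=8)
near = query + 0.01 * rng.normal(size=8)  # nearly identical feature vector
far = rng.normal(size=8)                  # unrelated feature vector
cq, cn, cf = (binary_codes(v, P) for v in (query, near, far))
```

    Ranking a collection by Hamming distance to the query code is then a cheap bitwise operation, which is what makes hashing attractive at social-media scale; learned codes such as SH or the hypergraph variant replace the random projections with data-dependent ones.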

  1. The Code of the Street and Violent Versus Property Crime Victimization.

    Science.gov (United States)

    McNeeley, Susan; Wilcox, Pamela

    2015-01-01

    Previous research has shown that individuals who adopt values in line with the code of the street are more likely to experience violent victimization (e.g., Stewart, Schreck, & Simons, 2006). This study extends this literature by examining the relationship between the street code and multiple types of violent and property victimization. This research investigates the relationship between street code-related values and 4 types of victimization (assault, breaking and entering, theft, and vandalism) using Poisson-based multilevel regression models. Belief in the street code was associated with higher risk of experiencing assault, breaking and entering, and vandalism, whereas theft victimization was not related to the street code. The results suggest that the code of the street influences victimization broadly--beyond violence--by increasing behavior that provokes retaliation from others in various forms.

  2. A Theoretical Analysis of Learning with Graphics--Implications for Computer Graphics Design.

    Science.gov (United States)

    ChanLin, Lih-Juan

    This paper reviews the literature pertinent to learning with graphics. Dual coding theory explains how graphics are stored and processed in semantic memory. The level of processing theory suggests how graphics can be employed in learning to encourage deeper processing. In addition to dual coding theory and level of processing…

  3. A program for undergraduate research into the mechanisms of sensory coding and memory decay

    Energy Technology Data Exchange (ETDEWEB)

    Calin-Jageman, R J

    2010-09-28

    This is the final technical report for this DOE project, entitled "A program for undergraduate research into the mechanisms of sensory coding and memory decay". The report summarizes progress on the three research aims: 1) to identify physiological and genetic correlates of long-term habituation, 2) to understand mechanisms of olfactory coding, and 3) to foster a world-class undergraduate neuroscience program. Progress on the first aim has enabled comparison of learning-regulated transcripts across closely related learning paradigms and species, and results suggest that only a small core of transcripts serve truly general roles in long-term memory. Progress on the second aim has enabled testing of several mutant phenotypes for olfactory behaviors, and results show that responses are not fully consistent with the combinatorial coding hypothesis. Finally, 14 undergraduate students participated in this research, the neuroscience program attracted extramural funding, and we completed a successful summer program to enhance transitions for community-college students into 4-year colleges to pursue STEM fields.

  4. An Efficient Method for Verifying Gyrokinetic Microstability Codes

    Science.gov (United States)

    Bravenec, R.; Candy, J.; Dorland, W.; Holland, C.

    2009-11-01

    Benchmarks for gyrokinetic microstability codes can be developed through successful "apples-to-apples" comparisons among them. Unlike previous efforts, we perform the comparisons for actual discharges, rendering the verification efforts relevant to existing experiments and future devices (ITER). The process requires i) assembling the experimental analyses at multiple times, radii, discharges, and devices, ii) creating the input files ensuring that the input parameters are faithfully translated code-to-code, iii) running the codes, and iv) comparing the results, all in an organized fashion. The purpose of this work is to automate this process as much as possible: At present, a python routine is used to generate and organize GYRO input files from TRANSP or ONETWO analyses. Another routine translates the GYRO input files into GS2 input files. (Translation software for other codes has not yet been written.) Other python codes submit the multiple GYRO and GS2 jobs, organize the results, and collect them into a table suitable for plotting. (These separate python routines could easily be consolidated.) An example of the process -- a linear comparison between GYRO and GS2 for a DIII-D discharge at multiple radii -- will be presented.

  5. Machine learning of the reactor core loading pattern critical parameters

    International Nuclear Information System (INIS)

    Trontl, K.; Pevec, D.; Smuc, T.

    2007-01-01

    The usual approach to loading pattern optimization involves high degree of engineering judgment, a set of heuristic rules, an optimization algorithm and a computer code used for evaluating proposed loading patterns. The speed of the optimization process is highly dependent on the computer code used for the evaluation. In this paper we investigate the applicability of a machine learning model which could be used for fast loading pattern evaluation. We employed a recently introduced machine learning technique, Support Vector Regression (SVR), which has a strong theoretical background in statistical learning theory. Superior empirical performance of the method has been reported on difficult regression problems in different fields of science and technology. SVR is a data driven, kernel based, nonlinear modelling paradigm, in which model parameters are automatically determined by solving a quadratic optimization problem. The main objective of the work reported in this paper was to evaluate the possibility of applying SVR method for reactor core loading pattern modelling. The starting set of experimental data for training and testing of the machine learning algorithm was obtained using a two-dimensional diffusion theory reactor physics computer code. We illustrate the performance of the solution and discuss its applicability, i.e., complexity, speed and accuracy, with a projection to a more realistic scenario involving machine learning from the results of more accurate and time consuming three-dimensional core modelling code. (author)
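
    The surrogate-modelling idea described above can be sketched compactly. As a hedged illustration, this uses closed-form kernel ridge regression in place of SVR (SVR solves a quadratic program with an epsilon-insensitive loss, not shown here), and the "loading pattern" features and "critical parameter" target are synthetic stand-ins, not reactor physics data.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=2.0):
    # Gaussian (RBF) kernel matrix between row-vector sets X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_krr(X, y, lam=1e-4, gamma=2.0):
    # Closed-form kernel ridge regression: alpha = (K + lam*I)^-1 y.
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(Xtrain, alpha, Xnew, gamma=2.0):
    return rbf_kernel(Xnew, Xtrain, gamma) @ alpha

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 2))    # stand-in "loading pattern" features
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2   # stand-in "critical parameter"
alpha = fit_krr(X, y)
Xtest = rng.uniform(-1, 1, size=(50, 2))
pred = predict(X, alpha, Xtest)
ytrue = np.sin(3 * Xtest[:, 0]) + Xtest[:, 1] ** 2
```

    Once trained on results of the expensive physics code, such a kernel surrogate evaluates candidate loading patterns in microseconds, which is the speedup the paper is after.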

  6. Exploring the underlying factors influencing e-learning adoption in nurse education.

    Science.gov (United States)

    Petit dit Dariel, Odessa; Wharrad, Heather; Windle, Richard

    2013-06-01

    To report a study undertaken to explore the underlying factors influencing e-learning adoption in nurse education. Despite e-learning's high profile it has not been readily integrated into teaching practice in nurse education. Previous research has identified generic, cross-disciplinary factors but has left out 'soft' factors. The study adopted an exploratory descriptive design. Q-methodology was used to explore e-learning adoption in a Division of Nursing located in an institution of Higher Education in the UK. Between September-December 2009, 38 participants were recruited to participate in Q-sorts and post-sort interviews. The Q-sort data were factor analysed and the interviews were coded to their respective factors to develop in-depth narratives. Four factors were identified: 'E-learning advocates' saw e-learning's potential to improve nurse education and prepare future nurses for their evolving role; the 'Humanists' had avoided e-learning because they valued human interaction; the 'Sceptics' doubted that technology could improve learning outcomes; and the 'Pragmatics,' only used e-learning as a tool to post lecture notes online to supplement what they covered in class. The findings point to the variety of responses existing among nurse academics faced with integrating e-learning into their teaching. Moving beyond the binary labels commonly attributed to those considered either 'early adopters' or 'laggards,' the findings contribute to the literature by revealing a wider breadth of views and responses towards technology. Acknowledging these views can inform future e-learning strategies and lead to improvement in e-learning use in nurse education. © 2012 Blackwell Publishing Ltd.

  7. Deciphering Neural Codes of Memory during Sleep

    Science.gov (United States)

    Chen, Zhe; Wilson, Matthew A.

    2017-01-01

    Memories of experiences are stored in the cerebral cortex. Sleep is critical for consolidating hippocampal memory of wake experiences into the neocortex. Understanding representations of neural codes of hippocampal-neocortical networks during sleep would reveal important circuit mechanisms on memory consolidation, and provide novel insights into memory and dreams. Although sleep-associated ensemble spike activity has been investigated, identifying the content of memory in sleep remains challenging. Here, we revisit important experimental findings on sleep-associated memory (i.e., neural activity patterns in sleep that reflect memory processing) and review computational approaches for analyzing sleep-associated neural codes (SANC). We focus on two analysis paradigms for sleep-associated memory, and propose a new unsupervised learning framework (“memory first, meaning later”) for unbiased assessment of SANC. PMID:28390699

  8. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau

    2013-03-01

    In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However, it has never been practically implemented using a commercial 4G system. This paper demonstrates our prototype achieving SCM using a standard 802.16-based testbed for scalable video transmissions. In particular, to implement superposition coded (SPC) modulation, we take advantage of a novel software approach, namely logical SPC (L-SPC), which aims to mimic physical-layer superposition coded modulation. The emulation results show improved throughput compared with the generic multicast method.
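
    The superposition principle behind SCM can be illustrated with a noiseless baseband toy: a power-heavy base layer that weak receivers can decode, plus a low-power enhancement layer that strong receivers recover by successive cancellation. The BPSK mapping, the 90/10 power split, and the bit sequences are assumptions for this sketch, not the paper's L-SPC scheme.

```python
import numpy as np

# Two scalable-video layers as bit streams (invented for illustration).
base_bits = np.array([1, 0, 1, 1, 0])
enh_bits  = np.array([0, 0, 1, 0, 1])
p_base, p_enh = 0.9, 0.1                       # assumed power split
sym = (np.sqrt(p_base) * (2 * base_bits - 1)   # BPSK, high power
       + np.sqrt(p_enh) * (2 * enh_bits - 1))  # BPSK, low power

# Strong receiver: decode the base layer first (it dominates the sign),
# subtract its contribution, then decode the enhancement layer.
base_hat = (sym > 0).astype(int)
residual = sym - np.sqrt(p_base) * (2 * base_hat - 1)
enh_hat = (residual > 0).astype(int)
```

    A weak receiver simply stops after `base_hat`, which is how SCM delivers different service granularities from a single multicast transmission.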

  9. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau; Shihada, Basem; Pin-Han Ho

    2013-01-01

    In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However

  10. Game E-Learning Code Master Dengan Konsep Mmorpg Menggunakan Adobe Flex 3

    Directory of Open Access Journals (Sweden)

    Fredy Purnomo

    2010-12-01

    Full Text Available The research objective is to design a web-based e-learning game that can serve as a learning facility for C language programming and, as an online game, be enjoyed by everybody easily over the internet. Flex is used in this online game to implement the RIA (Rich Internet Application) concept, so the e-learning process is expected to be more interesting and interactive. The e-learning game is also designed around the MMORPG (Massively Multiplayer Online Role Playing Game) concept. The research method used is analysis and design. The analysis was done through literature study, user analysis, and analysis of similar games. Meanwhile, the design covers screen display, gameplay, and system design. The conclusion of this research is that this game provides an interesting learning medium for the C language, in line with the subject material covered in class, and is easy to use through the website.

  11. How to review 4 million lines of ATLAS code

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00226135; The ATLAS collaboration; Lampl, Walter

    2017-01-01

    As the ATLAS Experiment prepares to move to a multi-threaded framework (AthenaMT) for Run3, we are faced with the problem of how to migrate 4 million lines of C++ source code. This code has been written over the past 15 years and has often been adapted, re-written or extended to the changing requirements and circumstances of LHC data taking. The code was developed by different authors, many of whom are no longer active, and under the deep assumption that processing ATLAS data would be done in a serial fashion. In order to understand the scale of the problem faced by the ATLAS software community, and to plan appropriately the significant efforts posed by the new AthenaMT framework, ATLAS embarked on a wide ranging review of our offline code, covering all areas of activity: event generation, simulation, trigger, reconstruction. We discuss the difficulties in even logistically organising such reviews in an already busy community, how to examine areas in sufficient depth to learn key areas in need of upgrade, yet...

  12. How To Review 4 Million Lines of ATLAS Code

    CERN Document Server

    Stewart, Graeme; The ATLAS collaboration

    2016-01-01

    As the ATLAS Experiment prepares to move to a multi-threaded framework (AthenaMT) for Run3, we are faced with the problem of how to migrate 4 million lines of C++ source code. This code has been written over the past 15 years and has often been adapted, re-written or extended to the changing requirements and circumstances of LHC data taking. The code was developed by different authors, many of whom are no longer active, and under the deep assumption that processing ATLAS data would be done in a serial fashion. In order to understand the scale of the problem faced by the ATLAS software community, and to plan appropriately the significant efforts posed by the new AthenaMT framework, ATLAS embarked on a wide ranging review of our offline code, covering all areas of activity: event generation, simulation, trigger, reconstruction. We discuss the difficulties in even logistically organising such reviews in an already busy community, how to examine areas in sufficient depth to learn key areas in need of upgrade, yet...

  13. Computational Approaches Reveal New Insights into Regulation and Function of Non-coding RNAs and their Targets

    KAUST Repository

    Alam, Tanvir

    2016-11-28

    Regulation and function of protein-coding genes are increasingly well-understood, but no comparable evidence exists for non-coding RNA (ncRNA) genes, which appear to be more numerous than protein-coding genes. We developed a novel machine-learning model to distinguish promoters of long ncRNA (lncRNA) genes from those of protein-coding genes. This represents the first attempt to make this distinction based on properties of the associated gene promoters. From our analyses, several transcription factors (TFs), which are known to be regulated by lncRNAs, also emerged as potential global regulators of lncRNAs, suggesting that lncRNAs and TFs may participate in a bidirectional feedback regulatory network. Our results also raise the possibility that, due to the historical dependence on protein-coding genes in defining the chromatin states of active promoters, an adjustment of these chromatin signature profiles to incorporate lncRNAs is warranted in the future. Secondly, we developed a novel method to infer functions for lncRNA and microRNA (miRNA) transcripts based on their transcriptional regulatory networks in 119 tissues and 177 primary cells of human. This method for the first time combines information on cell/tissue-specific expression of a transcript and the TFs and transcription co-factors (TcoFs) that control activation of that transcript. Transcripts were annotated using statistically enriched GO terms, pathways and diseases across cells/tissues, and an associated knowledgebase (FARNA) was developed. FARNA, having the most comprehensive function annotation of the considered ncRNAs across the widest spectrum of cells/tissues, has the potential to contribute to our understanding of ncRNA roles and their regulatory mechanisms in human. Thirdly, we developed a novel machine-learning model to identify the LD motif (a protein interaction motif) of paxillin, a ncRNA target that is involved in cell motility and cancer metastasis. Our recognition model identified new proteins not

  14. Derivation of the physical equations solved in the inertial confinement stability code DOC. Informal report

    International Nuclear Information System (INIS)

    Scannapieco, A.J.; Cranfill, C.W.

    1978-11-01

    There now exists an inertial confinement stability code called DOC, which runs as a postprocessor. DOC (a code that has evolved from a previous code, PANSY) is a spherical harmonic linear stability code that integrates, in time, a set of Lagrangian perturbation equations. Effects due to real equations of state, asymmetric energy deposition, thermal conduction, shock propagation, and a time-dependent zeroth-order state are handled in the code. We present here a detailed derivation of the physical equations that are solved in the code.

  15. Derivation of the physical equations solved in the inertial confinement stability code DOC. Informal report

    Energy Technology Data Exchange (ETDEWEB)

    Scannapieco, A.J.; Cranfill, C.W.

    1978-11-01

    There now exists an inertial confinement stability code called DOC, which runs as a postprocessor. DOC (a code that has evolved from a previous code, PANSY) is a spherical harmonic linear stability code that integrates, in time, a set of Lagrangian perturbation equations. Effects due to real equations of state, asymmetric energy deposition, thermal conduction, shock propagation, and a time-dependent zeroth-order state are handled in the code. We present here a detailed derivation of the physical equations that are solved in the code.

  16. VIPRE-01: A thermal-hydraulic code for reactor cores

    International Nuclear Information System (INIS)

    Cuta, J.M.; Koontz, A.S.; Stewart, C.W.; Montgomery, S.D.; Nomura, K.K.

    1989-08-01

    The VIPRE-01 thermal hydraulics code for PWR and BWR analysis has undergone significant modifications and error correction. This manual for the updated code, designated as VIPRE-01 Mod-02, describes improvements that eliminate problems of slow convergence with the drift flux model in transient simulation. To update the VIPRE-01 code and its documentation, the drift flux model of two-phase flow was implemented and error corrections developed during VIPRE-01 application were included. The project team modified the existing VIPRE-01 equations into drift flux model equations by developing additional terms. They also developed and implemented corrections for the errors identified during the last four years. They then validated the modified code against standard test data using selected test cases. The project team prepared documentation revisions reflecting code improvements and corrections to replace the corresponding sections in the original VIPRE documents. The revised VIPRE code, designated VIPRE-01 Mod-02, incorporates improvements that eliminate many shortcomings of the previous version. During the validation, the code produced satisfactory output compared with test data. The revised documentation is in the form of binder pages to replace existing pages in three of the original manuals.

  17. Improvement of Parallel Algorithm for MATRA Code

    International Nuclear Information System (INIS)

    Kim, Seong-Jin; Seo, Kyong-Won; Kwon, Hyouk; Hwang, Dae-Hyun

    2014-01-01

    A feasibility study on parallelizing the MATRA code was conducted at KAERI early this year. As a result, a parallel algorithm for the MATRA code has been developed to reduce the considerable computing time required to solve a big-size problem, such as a whole-core pin-by-pin problem of a general PWR reactor, and to improve the overall performance of multi-physics coupling calculations. It was shown that the performance of the MATRA code was greatly improved by implementing the parallel algorithm using MPI communication. For 1/8-core and whole-core problems of the SMART reactor, a speedup of about 10 was evaluated when 25 processors were used. However, it was also shown that the performance deteriorated as the axial node number increased. In this paper, the communication procedure between processors is optimized to improve the previous parallel algorithm. To remedy the performance deterioration of the parallelized MATRA code, a new communication algorithm between processors is presented. It was shown that the speedup was improved and remained stable regardless of the axial node number.

  18. Fast H.264/AVC FRExt intra coding using belief propagation.

    Science.gov (United States)

    Milani, Simone

    2011-01-01

    In the H.264/AVC FRExt coder, the performance of Intra coding significantly surpasses that of previous still-image coding standards, like JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. The paper presents a complexity-reduction strategy that aims at reducing the computational load of Intra coding with a small loss in compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated using a belief-propagation procedure. Experimental results show that the proposed method permits saving up to 60% of the coding time required by an exhaustive rate-distortion optimization method with a negligible loss in performance. Moreover, it permits accurate control of the computational complexity, unlike other methods where the computational complexity depends upon the coded sequence.
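    The core idea of the abstract above — restrict the expensive rate-distortion search to the most probable prediction modes — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the mode names and probabilities are invented, and the belief-propagation step that would produce the probabilities is not shown.

```python
# Hedged sketch: pre-select a reduced set of intra prediction modes by
# estimated probability; only the selected modes would then be passed to
# the full rate-distortion search. Mode names/probabilities are illustrative.

def select_candidate_modes(mode_probs, coverage=0.9):
    """Keep the most probable modes until their cumulative probability
    reaches `coverage`."""
    ranked = sorted(mode_probs.items(), key=lambda kv: kv[1], reverse=True)
    selected, cum = [], 0.0
    for mode, p in ranked:
        selected.append(mode)
        cum += p
        if cum >= coverage:
            break
    return selected

probs = {"DC": 0.35, "vertical": 0.25, "horizontal": 0.20,
         "diag_down_left": 0.12, "diag_down_right": 0.08}
print(select_candidate_modes(probs, coverage=0.9))
# → ['DC', 'vertical', 'horizontal', 'diag_down_left']
```

    Raising `coverage` trades coding time for compression performance, which is how such a scheme gives the complexity control the abstract mentions.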

  19. A Mouse Model of Visual Perceptual Learning Reveals Alterations in Neuronal Coding and Dendritic Spine Density in the Visual Cortex

    OpenAIRE

    Wang, Yan; Wu, Wei; Zhang, Xian; Hu, Xu; Li, Yue; Lou, Shihao; Ma, Xiao; An, Xu; Liu, Hui; Peng, Jing; Ma, Danyi; Zhou, Yifeng; Yang, Yupeng

    2016-01-01

    Visual perceptual learning (VPL) can improve spatial vision in normally sighted and visually impaired individuals. Although previous studies of humans and large animals have explored the neural basis of VPL, elucidation of the underlying cellular and molecular mechanisms remains a challenge. Owing to the advantages of molecular genetic and optogenetic manipulations, the mouse is a promising model for providing a mechanistic understanding of VPL. Here, we thoroughly evaluated the effects and p...

  20. Communicating pictures a course in image and video coding

    CERN Document Server

    Bull, David R

    2014-01-01

    Communicating Pictures starts with a unique historical perspective of the role of images in communications and then builds on this to explain the applications and requirements of a modern video coding system. It draws on the author's extensive academic and professional experience of signal processing and video coding to deliver a text that is algorithmically rigorous, yet accessible, relevant to modern standards, and practical. It offers a thorough grounding in visual perception, and demonstrates how modern image and video compression methods can be designed in order to meet the rate-quality performance levels demanded by today's applications, networks and users. With this book you will learn: Practical issues when implementing a codec, such as picture boundary extension and complexity reduction, with particular emphasis on efficient algorithms for transforms, motion estimators and error resilience Conflicts between conventional video compression, based on variable length coding and spatiotemporal prediction,...

  1. A robust fusion method for multiview distributed video coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Ascenso, Joao; Brites, Catarina

    2014-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the redundancy of the source (video) at the decoder side, as opposed to predictive coding, where the encoder leverages the redundancy. To exploit the correlation between views, multiview predictive video codecs require the encoder...... with a robust fusion system able to improve the quality of the fused SI along the decoding process through a learning process using already decoded data. We shall here take the approach to fuse the estimated distributions of the SIs as opposed to a conventional fusion algorithm based on the fusion of pixel...... values. The proposed solution is able to achieve gains up to 0.9 dB in Bjøntegaard difference when compared with the best-performing (in a RD sense) single SI DVC decoder, chosen as the best of an inter-view and a temporal SI-based decoder one....

  2. Learning during processing: Word learning doesn’t wait for word recognition to finish

    Science.gov (United States)

    Apfelbaum, Keith S.; McMurray, Bob

    2017-01-01

    Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete. PMID:27471082

  3. Genome-wide prediction of cis-regulatory regions using supervised deep learning methods.

    Science.gov (United States)

    Li, Yifeng; Shi, Wenqiang; Wasserman, Wyeth W

    2018-05-31

    In the human genome, 98% of DNA sequences are non-protein-coding regions that were previously disregarded as junk DNA. In fact, non-coding regions host a variety of cis-regulatory regions which precisely control the expression of genes. Thus, identifying active cis-regulatory regions in the human genome is critical for understanding gene regulation and assessing the impact of genetic variation on phenotype. The developments of high-throughput sequencing and machine learning technologies make it possible to predict cis-regulatory regions genome wide. Based on rich data resources such as the Encyclopedia of DNA Elements (ENCODE) and the Functional Annotation of the Mammalian Genome (FANTOM) projects, we introduce DECRES, based on supervised deep learning approaches, for the identification of enhancer and promoter regions in the human genome. Due to their ability to discover patterns in large and complex data, the introduction of deep learning methods enables a significant advance in our knowledge of the genomic locations of cis-regulatory regions. Using models for well-characterized cell lines, we identify key experimental features that contribute to the predictive performance. Applying DECRES, we delineate locations of 300,000 candidate enhancers genome wide (6.8% of the genome, of which 40,000 are supported by bidirectional transcription data), and 26,000 candidate promoters (0.6% of the genome). The predicted annotations of cis-regulatory regions will provide broad utility for genome interpretation from functional genomics to clinical applications. The DECRES model demonstrates the potential of deep learning technologies when combined with high-throughput sequencing data, and inspires the development of other advanced neural network models for further improvement of genome annotations.
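    The general shape of the task described above — classify a DNA sequence as regulatory or background from learned sequence features — can be illustrated in miniature. DECRES itself is a deep network trained on ENCODE/FANTOM data; the stand-in below uses k-mer frequency features and a from-scratch logistic regression on invented toy sequences, purely to show the featurize-then-classify pattern.

```python
# Hedged sketch: sequence classification with k-mer features + logistic
# regression. All sequences, labels and hyperparameters here are invented;
# this is not the DECRES model.
import itertools, math

def kmer_features(seq, k=2):
    """Normalized k-mer frequency vector over the ACGT alphabet."""
    kmers = ["".join(p) for p in itertools.product("ACGT", repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        counts[seq[i:i+k]] += 1
    total = max(1, len(seq) - k + 1)
    return [counts[km] / total for km in kmers]

def train_logreg(X, y, lr=0.5, epochs=500):
    """Plain stochastic gradient descent on the logistic log-loss."""
    w = [0.0] * len(X[0]); b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))
            g = p - yi  # gradient of log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# Toy "promoter-like" (CG-rich) vs "background" (AT-rich) sequences.
pos = ["CGCGGCGCGC", "GCGCCGCGGG", "CCGCGGCCGC"]
neg = ["ATATTATAAT", "TTATAATATA", "AATATTTATA"]
X = [kmer_features(s) for s in pos + neg]
y = [1] * len(pos) + [0] * len(neg)
w, b = train_logreg(X, y)
print(predict(w, b, kmer_features("GCGCGGCGCC")) > 0.5)
```

    A deep model replaces the hand-chosen k-mer features with learned representations, which is the advance the abstract attributes to DECRES.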

  4. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its...... strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...... correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking...

  5. Using Automatic Code Generation in the Attitude Control Flight Software Engineering Process

    Science.gov (United States)

    McComas, David; O'Donnell, James R., Jr.; Andrews, Stephen F.

    1999-01-01

    This paper presents an overview of the attitude control subsystem flight software development process, identifies how the process has changed due to automatic code generation, analyzes each software development phase in detail, and concludes with a summary of our lessons learned.

  6. Self-complementary circular codes in coding theory.

    Science.gov (United States)

    Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz

    2018-04-01

    Self-complementary circular codes are involved in pairing genetic processes. A maximal [Formula: see text] self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16 2017, J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph [Formula: see text] associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of the graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size [Formula: see text] trinucleotides, in particular for maximal circular codes ([Formula: see text] trinucleotides). For codes of small-size [Formula: see text] trinucleotides, some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the reading (correct) frame, an important criterion for analyzing the X motifs in genes in the future.
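    The frame-retrieval property described above can be demonstrated mechanically: a circular code admits at most one frame in which every trinucleotide of a window belongs to the code. The sketch below uses a made-up three-word code chosen so the example works, not the maximal code X identified in genes.

```python
# Hedged sketch: reading-frame retrieval with a (toy) circular code.

def frames_consistent_with(code, seq):
    """Return the frame offsets (0, 1, 2) in which every complete
    trinucleotide of `seq` belongs to `code`."""
    hits = []
    for offset in range(3):
        codons = [seq[i:i+3] for i in range(offset, len(seq) - 2, 3)]
        if codons and all(c in code for c in codons):
            hits.append(offset)
    return hits

X = {"AAC", "GTC", "TTG"}            # toy stand-in for a circular code
message = "AACGTCTTGAAC"             # concatenation of X-words in frame 0
print(frames_consistent_with(X, message))       # → [0]
print(frames_consistent_with(X, "G" + message)) # → [1]  (frame shifted)
```

    The paper's stronger result is that for the real code X, any window of 15 nucleotides (5 consecutive trinucleotides) already suffices to single out the correct frame.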

  7. Diagonal Eigenvalue Unity (DEU) code for spectral amplitude coding-optical code division multiple access

    Science.gov (United States)

    Ahmed, Hassan Yousif; Nisar, K. S.

    2013-08-01

    Codes with ideal in-phase cross correlation (CC) and practical code length that support a high number of users are required in spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. SAC systems are becoming more attractive in the field of OCDMA because of their ability to eliminate the influence of multiple access interference (MAI) and also suppress the effect of phase-induced intensity noise (PIIN). In this paper, we have proposed new Diagonal Eigenvalue Unity (DEU) code families with ideal in-phase CC based on the Jordan block matrix, constructed by simple algebraic means. Four sets of DEU code families based on the code weight W and number of users N for the combinations (even, even), (even, odd), (odd, odd) and (odd, even) are constructed. This combination gives the DEU code more flexibility in the selection of code weight and number of users. These features make this code a compelling candidate for future optical communication systems. Numerical results show that the proposed DEU system outperforms reported codes. In addition, simulation results taken from a commercial optical systems simulator, Virtual Photonic Instrument (VPI™), show that, using point-to-multipoint transmission in a passive optical network (PON), DEU has better performance and could support long spans at high data rates.

  8. A Hebbian learning rule gives rise to mirror neurons and links them to control theoretic inverse models

    Directory of Open Access Journals (Sweden)

    Alexander Hanuschkin

    2013-06-01

    Full Text Available Mirror neurons are neurons whose responses to the observation of a motor act resemble responses measured during production of that act. Computationally, mirror neurons have been viewed as evidence for the existence of internal inverse models. Such models, rooted within control theory, map desired sensory targets onto the motor commands required to generate those targets. To jointly explore both the formation of mirrored responses and their functional contribution to inverse models, we develop a correlation-based theory of interactions between a sensory and a motor area. We show that a simple eligibility-weighted Hebbian learning rule, operating within a sensorimotor loop during motor explorations and stabilized by heterosynaptic competition, naturally gives rise to mirror neurons as well as control theoretic inverse models encoded in the synaptic weights from sensory to motor neurons. Crucially, we find that the correlational structure or stereotypy of the neural code underlying motor explorations determines the nature of the learned inverse model: Random motor codes lead to causal inverses that map sensory activity patterns to their motor causes; such inverses are maximally useful, as they allow for imitating arbitrary sensory target sequences. By contrast, stereotyped motor codes lead to less useful predictive inverses that map sensory activity to future motor actions. Our theory generalizes previous work on inverse models by showing that such models can be learned in a simple Hebbian framework without the need for error signals or backpropagation, and it makes new conceptual connections between the causal nature of inverse models, the statistical structure of motor variability, and the time-lag between sensory and motor responses of mirror neurons. Applied to bird song learning, our theory can account for puzzling aspects of the song system, including necessity of sensorimotor gating and selectivity of auditory responses to the bird’s own song.
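    The central claim — that Hebbian co-activation during random motor exploration yields a causal inverse of the world — can be reduced to a linear toy. In the sketch below, motor activity m drives sensory activity s = A·m, and the sensory-to-motor weights W accumulate the correlation of m and s. Because the exploration is uncorrelated and A is a rotation (so its inverse equals its transpose), W converges toward the inverse model. All dimensions and the learning setup are invented for illustration; the paper's model also includes eligibility weighting and heterosynaptic competition, omitted here.

```python
# Hedged sketch: correlation-based (Hebbian) learning of a linear inverse model.
import math, random

random.seed(1)
theta = 0.7
A = [[math.cos(theta), -math.sin(theta)],   # the "world": sensory = A @ motor
     [math.sin(theta),  math.cos(theta)]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Hebbian accumulation during random motor exploration: W[i][j] += m[i] * s[j].
W = [[0.0, 0.0], [0.0, 0.0]]
n_trials = 5000
for _ in range(n_trials):
    m = [random.gauss(0, 1), random.gauss(0, 1)]   # random (causal) motor code
    s = matvec(A, m)                               # sensory consequence
    for i in range(2):
        for j in range(2):
            W[i][j] += m[i] * s[j] / n_trials

# W ≈ E[m sᵀ] = A ᵀ = A⁻¹: feeding a sensory target through W recovers
# (up to sampling noise) the motor command that causes it.
target_m = [1.0, -0.5]
target_s = matvec(A, target_m)
recovered = matvec(W, target_s)
print([round(x, 2) for x in recovered])  # ≈ [1.0, -0.5]
```

    Replacing the random exploration with a stereotyped (repeated) motor sequence breaks the E[m mᵀ] = I step, which is the linear analogue of the paper's contrast between causal and merely predictive inverses.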

  9. A Hebbian learning rule gives rise to mirror neurons and links them to control theoretic inverse models.

    Science.gov (United States)

    Hanuschkin, A; Ganguli, S; Hahnloser, R H R

    2013-01-01

    Mirror neurons are neurons whose responses to the observation of a motor act resemble responses measured during production of that act. Computationally, mirror neurons have been viewed as evidence for the existence of internal inverse models. Such models, rooted within control theory, map desired sensory targets onto the motor commands required to generate those targets. To jointly explore both the formation of mirrored responses and their functional contribution to inverse models, we develop a correlation-based theory of interactions between a sensory and a motor area. We show that a simple eligibility-weighted Hebbian learning rule, operating within a sensorimotor loop during motor explorations and stabilized by heterosynaptic competition, naturally gives rise to mirror neurons as well as control theoretic inverse models encoded in the synaptic weights from sensory to motor neurons. Crucially, we find that the correlational structure or stereotypy of the neural code underlying motor explorations determines the nature of the learned inverse model: random motor codes lead to causal inverses that map sensory activity patterns to their motor causes; such inverses are maximally useful, by allowing the imitation of arbitrary sensory target sequences. By contrast, stereotyped motor codes lead to less useful predictive inverses that map sensory activity to future motor actions. Our theory generalizes previous work on inverse models by showing that such models can be learned in a simple Hebbian framework without the need for error signals or backpropagation, and it makes new conceptual connections between the causal nature of inverse models, the statistical structure of motor variability, and the time-lag between sensory and motor responses of mirror neurons. Applied to bird song learning, our theory can account for puzzling aspects of the song system, including necessity of sensorimotor gating and selectivity of auditory responses to bird's own song (BOS) stimuli.

  10. Generalized rank weights of reducible codes, optimal cases and related properties

    DEFF Research Database (Denmark)

    Martinez Peñas, Umberto

    2018-01-01

    in network coding. In this paper, we study their security behavior against information leakage on networks when applied as coset coding schemes, giving the following main results: 1) we give lower and upper bounds on their generalized rank weights (GRWs), which measure worst case information leakage...... to the wire tapper; 2) we find new parameters for which these codes are MRD (meaning that their first GRW is optimal) and use the previous bounds to estimate their higher GRWs; 3) we show that all linear (over the extension field) codes, whose GRWs are all optimal for fixed packet and code sizes but varying...... length are reducible codes up to rank equivalence; and 4) we show that the information leaked to a wire tapper when using reducible codes is often much less than the worst case given by their (optimal in some cases) GRWs. We conclude with some secondary related properties: conditions to be rank...

  11. On locality of Generalized Reed-Muller codes over the broadcast erasure channel

    KAUST Repository

    Alloum, Amira

    2016-07-28

    One-to-many communications are expected to be among the killer applications for the currently discussed 5G standard. The use of coding mechanisms impacts broadcasting standard quality, as coding is involved at several levels of the stack, and more specifically at the application layer, where Rateless, LDPC, Reed-Solomon codes and network coding schemes have been extensively studied, optimized and standardized in the past. Beyond reusing, extending or adapting existing application-layer packet coding mechanisms based on previous schemes and designed for the foregoing LTE or other broadcasting standards, our purpose is to investigate the use of Generalized Reed-Muller codes and the value of their locality property in their progressive decoding for broadcast/multicast communication schemes with real-time video delivery. Our results are meant to bring insight into the use of locally decodable codes in broadcasting. © 2016 IEEE.

  12. Chromatic Perceptual Learning but No Category Effects without Linguistic Input.

    Science.gov (United States)

    Grandison, Alexandra; Sowden, Paul T; Drivonikou, Vicky G; Notman, Leslie A; Alexander, Iona; Davies, Ian R L

    2016-01-01

    Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks but very little research has explored chromatic perceptual learning. Here, we use two low level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is category specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher level linguistic coding processes in order to manifest.

  13. List Decoding of Matrix-Product Codes from nested codes: an application to Quasi-Cyclic codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Høholdt, Tom; Ruano, Diego

    2012-01-01

    A list decoding algorithm for matrix-product codes is provided when $C_1,..., C_s$ are nested linear codes and $A$ is a non-singular by columns matrix. We estimate the probability of getting more than one codeword as output when the constituent codes are Reed-Solomon codes. We extend this list...... decoding algorithm for matrix-product codes with polynomial units, which are quasi-cyclic codes. Furthermore, it allows us to consider unique decoding for matrix-product codes with polynomial units....
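    The matrix-product construction underlying the abstract above combines codewords c1 ∈ C1, c2 ∈ C2 through a matrix A. A concrete instance over GF(2) with A = [[1, 1], [0, 1]] is the classic (u, u+v) Plotkin construction, sketched below; the toy words stand in for codewords of nested constituent codes, not the Reed-Solomon codes analyzed in the paper.

```python
# Hedged sketch: encoding of a matrix-product code [c1 c2] · A over GF(2).

def matrix_product_codeword(c1, c2, A):
    """Column k of A selects which constituent words are XORed into the
    k-th block of the codeword."""
    n = len(c1)
    assert len(c2) == n, "constituent codewords must have equal length"
    blocks = []
    for k in range(len(A[0])):
        block = [0] * n
        for i, ci in enumerate((c1, c2)):
            if A[i][k]:
                block = [b ^ x for b, x in zip(block, ci)]
        blocks.append(block)
    return [bit for block in blocks for bit in block]

A = [[1, 1],
     [0, 1]]                 # non-singular by columns → (u, u+v) construction
c1 = [1, 0, 1]               # toy word of C1
c2 = [1, 1, 0]               # toy word of C2 (nested: C2 ⊆ C1 in the paper)
print(matrix_product_codeword(c1, c2, A))  # → [1, 0, 1, 0, 1, 1]
```

    List decoding then works block-by-block: decoding the second block against C1 recovers candidates for c1 ⊕ c2, which together with the first block constrains c1 and c2 jointly.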

  14. Working research codes into fluid dynamics education: a science gateway approach

    Science.gov (United States)

    Mason, Lachlan; Hetherington, James; O'Reilly, Martin; Yong, May; Jersakova, Radka; Grieve, Stuart; Perez-Suarez, David; Klapaukh, Roman; Craster, Richard V.; Matar, Omar K.

    2017-11-01

    Research codes are effective for illustrating complex concepts in educational fluid dynamics courses: compared to textbook examples, an interactive three-dimensional visualisation can bring a problem to life! Various barriers, however, prevent the adoption of research codes in teaching: codes are typically created for highly specific `once-off' calculations and, as such, have no user interface and a steep learning curve. Moreover, a code may require access to high-performance computing resources that are not readily available in the classroom. This project allows academics to rapidly work research codes into their teaching via a minimalist `science gateway' framework. The gateway is a simple, yet flexible, web interface allowing students to construct and run simulations, as well as view and share their output. Behind the scenes, the common operations of job configuration, submission, monitoring and post-processing are customisable at the level of shell scripting. In this talk, we demonstrate the creation of an example teaching gateway connected to the Code BLUE fluid dynamics software. Student simulations can be run via a third-party cloud computing provider or a local high-performance cluster. EPSRC, UK, MEMPHIS program Grant (EP/K003976/1), RAEng Research Chair (OKM).

  15. Development of parallel Fokker-Planck code ALLAp

    International Nuclear Information System (INIS)

    Batishcheva, A.A.; Sigmar, D.J.; Koniges, A.E.

    1996-01-01

    We report on our ongoing development of the 3D Fokker-Planck code ALLA for a highly collisional scrape-off-layer (SOL) plasma. A SOL with strong gradients of density and temperature in the spatial dimension is modeled. Our method is based on a 3-D adaptive grid (in space, magnitude of the velocity, and cosine of the pitch angle) and a second order conservative scheme. Note that the grid size is typically 100 x 257 x 65 nodes. It was shown in our previous work that only these capabilities make it possible to benchmark a 3D code against a spatially-dependent self-similar solution of a kinetic equation with the Landau collision term. In the present work we show results of a more precise benchmarking against the exact solutions of the kinetic equation using a new parallel code ALLAp with an improved method of parallelization and a modified boundary condition at the plasma edge. We also report first results from the code parallelization using Message Passing Interface for a Massively Parallel CRI T3D platform. We evaluate the ALLAp code performance versus the number of T3D processors used and compare its efficiency against a Work/Data Sharing parallelization scheme and a workstation version

  16. How are learning physics and student beliefs about learning physics connected? Measuring epistemological self-reflection in an introductory course and investigating its relationship to conceptual learning

    Science.gov (United States)

    May, David B.

    2002-11-01

    To explore students' epistemological beliefs in a variety of conceptual domains in physics, and in a specific and novel context of measurement, this Dissertation makes use of Weekly Reports, a class assignment in which students reflect in writing on what they learn each week and how they learn it. Reports were assigned to students in the introductory physics course for honors engineering majors at The Ohio State University in two successive years. The Weekly Reports of several students from the first year were analyzed for the kinds of epistemological beliefs exhibited therein, called epistemological self-reflection, and a coding scheme was developed for categorizing and quantifying this reflection. The connection between epistemological self-reflection and conceptual learning in physics seen in a pilot study was replicated in a larger study, in which the coded reflections from the Weekly Reports of thirty students were correlated with their conceptual learning gains. Although the total amount of epistemological self-reflection was not found to be related to conceptual gain, different kinds of epistemological self-reflection were. Describing learning physics concepts in terms of logical reasoning and making personal connections were positively correlated with gains; describing learning from authority figures or by observing phenomena without making inferences were negatively correlated. Linear regression equations were determined in order to quantify the effects on conceptual gain of specific ways of describing learning. In an experimental test of this model, the regression equations and the Weekly Report coding scheme developed from the first year's data were used to predict the conceptual gains of thirty students from the second year. The prediction was unsuccessful, possibly because these students were not given as much feedback on their reflections as were the first-year students. These results show that epistemological beliefs are important factors affecting

  17. Huffman-based code compression techniques for embedded processors

    KAUST Repository

    Bonny, Mohamed Talal

    2010-09-01

    The size of embedded software is increasing at a rapid pace. It is often challenging and time consuming to fit an amount of required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in terms of code size. In this article we introduce a novel and efficient hardware-supported compression technique that is based on Huffman coding. Our technique reduces the size of the generated decoding table, which takes a large portion of the memory. It combines our previous techniques, the Instruction Splitting Technique and the Instruction Re-encoding Technique, into a new one called the Combined Compression Technique to improve the final compression ratio by taking advantage of both previous techniques. The Instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of re-encoding unused bits (we call them re-encodable bits) in the instruction format for a specific application to improve the compression ratio. Re-encoding those bits can reduce the size of decoding tables by up to 40%. Using this technique, we improve the final compression ratios in comparison to the first technique to 46% and 45% for ARM and MIPS, respectively (including all incurred overhead). The Combined Compression Technique improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications and we have applied each technique to two major embedded processor architectures
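
    The article's splitting and re-encoding techniques are hardware- and ISA-level refinements, but the underlying Huffman step can be sketched in a few lines. Below is a minimal symbol-level illustration of Huffman coding (an assumption-laden sketch, not the authors' instruction-pattern implementation):

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a Huffman code table {symbol: bitstring} for the symbols in data."""
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)  # unique counter so the dicts are never compared
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # Merge the two least frequent subtrees, prefixing "0"/"1" to their codes.
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

table = huffman_code("abracadabra")
encoded = "".join(table[s] for s in "abracadabra")
```

    Frequent symbols receive short codewords, which is exactly why the decoding table (rather than the coded stream) becomes the bottleneck the article targets.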

  18. The Implementation of Mobile Learning in Outdoor Education: Application of

    Science.gov (United States)

    Lai, Hsin-Chih; Chang, Chun-Yen; Li, Wen-Shiane; Fan, Yu-Lin; Wu, Ying-Tien

    2013-01-01

    This study presents an m-learning method that integrates Quick Response (QR) codes. This learning method not only achieves the objectives of outdoor education, but also extends the application of the Cognitive Theory of Multimedia Learning (CTML) (Mayer, 2001) in m-learning for practical use in a diverse range of outdoor locations. When…

  19. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
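
    Unique decipherability itself is decidable for finite codes via the classical Sardinas-Patterson test, which the coding-partition notion above generalizes. A small sketch of that standard algorithm (illustrative only, not taken from the paper):

```python
def dangling(xs, ys):
    """Nonempty suffixes w such that x = y + w for some x in xs, y in ys."""
    return {x[len(y):] for x in xs for y in ys
            if len(x) > len(y) and x.startswith(y)}

def is_ud(code):
    """Sardinas-Patterson test: True iff the code is uniquely decipherable."""
    code = set(code)
    s = dangling(code, code)  # suffixes left over when one codeword prefixes another
    seen = set()
    while s and frozenset(s) not in seen:
        if s & code:          # a dangling suffix is itself a codeword: ambiguity
            return False
        seen.add(frozenset(s))
        s = dangling(code, s) | dangling(s, code)
    return True
```

    For example, {0, 01, 10} fails the test because "010" parses both as 0·10 and as 01·0, while any prefix code such as {0, 10, 11} passes immediately.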

  20. Combinatorial neural codes from a mathematical coding theory perspective.

    Science.gov (United States)

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
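
    The error-correction comparison above rests, in its simplest form, on nearest-codeword decoding under Hamming distance. A toy sketch of that notion (illustrative only, not the authors' receptive field code construction):

```python
def hamming(u, v):
    """Number of positions in which two equal-length strings differ."""
    return sum(a != b for a, b in zip(u, v))

def nearest_codeword(word, codebook):
    """Decode a received word to the codeword at minimum Hamming distance."""
    return min(codebook, key=lambda c: hamming(word, c))

# A 3-bit repetition code corrects any single bit flip.
repetition = ["000", "111"]
decoded = nearest_codeword("010", repetition)
```

    Redundancy helps error correction only if codewords are spread far apart in Hamming distance; the paper's point is that RF codes trade some of this separation for a geometry that mirrors the stimulus space.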

  1. Development of self-learning Monte Carlo technique for more efficient modeling of nuclear logging measurements

    International Nuclear Information System (INIS)

    Zazula, J.M.

    1988-01-01

    The self-learning Monte Carlo technique has been implemented to the commonly used general purpose neutron transport code MORSE, in order to enhance sampling of the particle histories that contribute to a detector response. The parameters of all the biasing techniques available in MORSE, i.e. of splitting, Russian roulette, source and collision outgoing energy importance sampling, path length transformation and additional biasing of the source angular distribution are optimized. The learning process is iteratively performed after each batch of particles, by retrieving the data concerning the subset of histories that passed the detector region and energy range in the previous batches. This procedure has been tested on two sample problems in nuclear geophysics, where an unoptimized Monte Carlo calculation is particularly inefficient. The results are encouraging, although the presented method does not directly minimize the variance and the convergence of our algorithm is restricted by the statistics of successful histories from previous random walk. Further applications for modeling of the nuclear logging measurements seem to be promising. 11 refs., 2 figs., 3 tabs. (author)

  2. Probable relationship between partitions of the set of codons and the origin of the genetic code.

    Science.gov (United States)

    Salinas, Dino G; Gallardo, Mauricio O; Osorio, Manuel I

    2014-03-01

    Here we study the distribution of randomly generated partitions of the set of amino acid-coding codons. Some results are an application of a previous work on the Stirling numbers of the second kind and triplet codes, both to the case of triplet codes having four stop codons, as in the mammalian mitochondrial genetic code, and to hypothetical doublet codes. Extending previous results, in this work we find that the most probable number of blocks of synonymous codons in a genetic code is similar to the number of amino acids when there are four stop codons, as could also have been the case for a primigenial doublet code. We also study the integer partitions associated with patterns of synonymous codons and show, for the canonical code, that the standard deviation inside an integer partition is one of the most probable. We think that, in some early epoch, the genetic code might have had a maximum of disorder or entropy, independent of the assignment between codons and amino acids, reaching a state similar to the "code freeze" proposed by Francis Crick. In later stages, deterministic rules may have reassigned codons to amino acids, forming the natural codes, such as the canonical code, while keeping the numerical features describing the set partitions and the integer partitions, like "fossil numbers"; both kinds of partitions concern the set of amino acid-coding codons. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
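
    The "most probable number of blocks" argument rests on the Stirling numbers of the second kind, S(n, k), which count the partitions of an n-set into k blocks. A small sketch of the computation (illustrative only; the 60-codon figure assumes 64 codons minus 4 stops, as in the mammalian mitochondrial code, and whether the maximizing k lands near the 20 amino acids is the paper's empirical claim, not asserted here):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind: partitions of an n-set into k blocks."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    # An nth element either joins one of k existing blocks or starts its own.
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# 64 codons minus 4 stop codons leaves 60 amino-acid-coding codons; for a
# uniformly random partition the most probable block count is the k
# maximizing S(60, k).
best_k = max(range(1, 61), key=lambda k: stirling2(60, k))
```

    Summing S(n, k) over k gives the Bell number, the total number of partitions of the codon set.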

  3. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    Full Text Available We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM) codes for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.

  4. Maybe it's not Python that sucks, maybe it's my code

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Did you know that in Python, integers from -5 to 256 are preallocated? Reusing them 1000 times, instead of allocating memory for a bigger integer, saves a whopping 1 millisecond of the code's execution time! Isn't that thrilling? Well, before you get that crazy, learn some basic performance tricks that you can start using today.
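
    The cache the talk alludes to can be observed directly. Bear in mind that object identity for ints is a CPython implementation detail, not a language guarantee; the sketch below only makes the cache visible:

```python
# CPython preallocates the integers -5 through 256; any computation that
# produces one of these values reuses the cached object instead of
# allocating a new one.
small_a = 256
small_b = int("256")   # constructed at runtime, still the cached object
large_a = 257
large_b = int("257")   # outside the cache: a fresh object

cached = small_a is small_b    # True in CPython
uncached = large_a is large_b  # False in CPython
```

    The values 257 are constructed at runtime on purpose: two literal 257s in the same compiled unit may be folded into one constant, which would hide the effect.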

  5. Social Learning Network Analysis Model to Identify Learning Patterns Using Ontology Clustering Techniques and Meaningful Learning

    Science.gov (United States)

    Firdausiah Mansur, Andi Besse; Yusof, Norazah

    2013-01-01

    Clustering on Social Learning Networks has still not been widely explored, especially when the network focuses on an e-learning system. Conventional methods are not really suitable for e-learning data. SNA requires content analysis, which involves human intervention and needs to be carried out manually. Some of the previous clustering techniques need…

  6. Micromagnetic Code Development of Advanced Magnetic Structures Final Report CRADA No. TC-1561-98

    Energy Technology Data Exchange (ETDEWEB)

    Cerjan, Charles J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shi, Xizeng [Read-Rite Corporation, Fremont, CA (United States)

    2017-11-09

    The specific goals of this project were to: Further develop the previously written micromagnetic code DADIMAG (DOE code release number 980017); Validate the code. The resulting code was expected to be more realistic and useful for simulations of magnetic structures of specific interest to Read-Rite programs. We also planned to further develop the code for use in internal LLNL programs. This project complemented LLNL CRADA TC-840-94 between LLNL and Read-Rite, which allowed for simulations of the advanced magnetic head development completed under the CRADA. TC-1561-98 was effective concurrently with LLNL non-exclusive copyright license (TL-1552-98) to Read-Rite for the DADIMAG Version 2 executable code.

  7. New construction of quantum error-avoiding codes via group representation of quantum stabilizer codes

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Hailin [Wenzhou University, College of Physics and Electronic Information Engineering, Wenzhou (China); Southeast University, National Mobile Communications Research Laboratory, Nanjing (China); Guilin University of Electronic Technology, Ministry of Education, Key Laboratory of Cognitive Radio and Information Processing, Guilin (China); Zhang, Zhongshan [University of Science and Technology Beijing, Beijing Engineering and Technology Research Center for Convergence Networks and Ubiquitous Services, Beijing (China); Chronopoulos, Anthony Theodore [University of Texas at San Antonio, Department of Computer Science, San Antonio, TX (United States)

    2017-10-15

    In quantum computing, nice error bases, as a generalization of the Pauli basis, were introduced by Knill. These bases are known to be projective representations of finite groups. In this paper, we propose a group representation approach to the study of quantum stabilizer codes. We utilize this approach to define decoherence-free subspaces (DFSs). Unlike previous studies of DFSs, this type of DFS does not involve any spatial symmetry assumptions on the system-environment interaction. Thus, it can be used to construct quantum error-avoiding codes (QEACs) that are automatically fault tolerant. We also propose a new simple construction of QEACs and subsequently develop several classes of QEACs. Finally, we present numerical simulation results showing the logical error rate versus the physical error rate, characterizing the fidelity performance of these QEACs. Our study demonstrates that DFS-based QEACs are capable of providing a generalized and unified framework for error-avoiding methods. (orig.)

  8. Robot Grasp Learning by Demonstration without Predefined Rules

    Directory of Open Access Journals (Sweden)

    César Fernández

    2011-12-01

    Full Text Available A learning-based approach to autonomous robot grasping is presented. Pattern recognition techniques are used to measure the similarity between a set of previously stored example grasps and all the possible candidate grasps for a new object. Two sets of features are defined in order to characterize grasps: point attributes describe the surroundings of a contact point; point-set attributes describe the relationship between the set of n contact points (assuming an n-fingered robot gripper is used). In the experiments performed, the nearest neighbour classifier outperforms other approaches such as multilayer perceptrons, radial basis functions and decision trees in terms of classification accuracy, while the computational load is not excessive for a real-time application (a grasp is fully synthesized in 0.2 seconds). The results obtained on a synthetic database show that the proposed system is able to imitate the grasping behaviour of the user (e.g. the system learns to grasp a mug by its handle). All the code has been made available for testing purposes.
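
    The nearest-neighbour classifier that wins in these experiments is simple to sketch: label a candidate grasp with the label of the closest stored example in feature space. A minimal 1-NN over toy feature vectors (the vectors and labels below are hypothetical, not the paper's point or point-set attribute sets):

```python
import math

def nearest_neighbour(query, examples):
    """1-NN classification: return the label of the stored example
    closest (in Euclidean distance) to the query feature vector.

    examples: list of (feature_vector, label) pairs."""
    return min(examples, key=lambda ex: math.dist(ex[0], query))[1]

# Hypothetical stored example grasps: (feature vector, grasp label).
stored = [((0.0, 0.0), "mug-handle"), ((5.0, 5.0), "mug-rim")]
label = nearest_neighbour((1.0, 0.5), stored)
```

    With no training phase, all the cost is at query time, which is why the 0.2-second synthesis figure depends on keeping the example set and feature dimensionality modest.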

  9. Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures

    Science.gov (United States)

    Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314

  10. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  11. Learning Morse Code Alters Microstructural Properties in the Inferior Longitudinal Fasciculus: A DTI Study

    OpenAIRE

    Schlaffke, Lara; Leemans, Alexander; Schweizer, Lauren M.; Ocklenburg, Sebastian; Schmidt-Wilcke, Tobias

    2017-01-01

    Learning relies on neuroplasticity, which has mainly been studied in gray matter (GM). However, there is mounting evidence indicating a critical role of white matter changes involved in learning processes. One of the most important learning processes in human development is language acquisition. However, due to the length of this learning process, it has been notoriously difficult to investigate the underlying neuroplastic changes. Here, we report a novel learning paradigm to assess the role ...

  12. Predictive Coding Strategies for Developmental Neurorobotics

    Science.gov (United States)

    Park, Jun-Cheol; Lim, Jae Hyun; Choi, Hansol; Kim, Dae-Shik

    2012-01-01

    In recent years, predictive coding strategies have been proposed as a possible means by which the brain might make sense of the truly overwhelming amount of sensory data available to it at any given moment. Instead of the raw data, the brain is hypothesized to guide its actions by assigning causal beliefs to the observed error between what it expects to happen and what actually happens. In this paper, we present a variety of developmental neurorobotics experiments in which minimalist prediction-error-based encoding strategies are utilized to elucidate the emergence of infant-like behavior in humanoid robotic platforms. Our approaches will be first naively Piagetian, then move onto more Vygotskian ideas. More specifically, we will investigate how simple forms of infant learning, such as motor sequence generation, object permanence, and imitation learning, may arise if minimizing prediction errors is used as an objective function. PMID:22586416

  13. Predictive Coding Strategies for Developmental Neurorobotics

    Directory of Open Access Journals (Sweden)

    Jun-Cheol ePark

    2012-05-01

    Full Text Available In recent years, predictive coding strategies have been proposed as a possible way in which the brain might make sense of the truly overwhelming amount of sensory data available to it at any given moment. Instead of the raw data, the brain is hypothesized to guide its actions by assigning causal beliefs to the observed error between what it expected to happen and what actually happens. In this paper we present a potpourri of developmental neurorobotics experiments in which minimalist prediction-error-based encoding strategies are utilized to elucidate the emergence of infant-like behavior in humanoid robotic platforms. Our approaches will be first naively Piagetian, then move onto more Vygotskian ideas. More specifically, we will investigate how simple forms of infant learning, such as motor sequence generation, object permanence, and imitation learning, may arise if minimizing prediction errors is used as an objective function.

  14. Exploring the Relationships between Tutor Background, Tutor Training, and Student Learning: A Problem-Based Learning Meta-Analysis

    Science.gov (United States)

    Leary, Heather; Walker, Andrew; Shelton, Brett E.; Fitt, M. Harrison

    2013-01-01

    Despite years of primary research on problem-based learning and literature reviews, no systematic effort has been made to analyze the relationship between tutor characteristics and student learning outcomes. In an effort to fill that gap, the following meta-analysis coded 223 outcomes from 94 studies, finding small but positive gains for PBL students…

  15. Lesson learned from the application to LOBI tests of CATHARE and RELAP5 codes

    International Nuclear Information System (INIS)

    Ambrosini, W.; D'Auria, F.; Galassi, G.M.

    1992-01-01

    The Dipt. di Costruzioni Meccaniche e Nucleari has participated in the LOBI project since its very beginning, contributing to almost all the international activities in this field, such as task group meetings, International Standard Problems, Seminars, etc. System codes such as RELAP4/MOD6, RELAP5/MOD1, RELAP5/MOD1-EUR, RELAP5/MOD2, CATHARE 1 and CATHARE 2 were applied to the design and post-test evaluation of a wide series of both LOBI/MOD1 and LOBI/MOD2 experiments, including Large Break LOCAs, Small and Intermediate Break LOCAs, long-lasting transients and characterization tests. The LOBI data base demonstrated its usefulness in assessing capabilities and limitations of these codes and in qualifying a code use strategy. (author)

  16. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes, and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  17. Code domain steganography in video tracks

    Science.gov (United States)

    Rymaszewski, Sławomir

    2008-01-01

    This article deals with a practical method of hiding secret information in a video stream. The method is dedicated to MPEG-2 streams. The algorithm takes into consideration not only the MPEG video coding scheme described in the standard but also the PES-packet encapsulation in the MPEG-2 Program Stream (PS). This modification gives higher capacity and more effective bit-rate control for the output stream than previously proposed methods.

  18. Colors and geometric forms in the work process information coding

    Directory of Open Access Journals (Sweden)

    Čizmić Svetlana

    2006-01-01

    Full Text Available The aim of the research was to establish the meaning of colors and geometric shapes in transmitting information in the work process. A sample of 100 students connected 50 situations, which could be associated with regular tasks in the work process, with 12 colors and 4 geometric forms in a previously chosen color. Based on the chosen color-geometric shape-situation assignments, the idea of the research was to find regularities in the coding of information and to examine whether those regularities can provide meaningful data assigned to each individual code and explain which codes are better and more applicable representations of the examined situations.

  19. Morphology Independent Learning in Modular Robots

    DEFF Research Database (Denmark)

    Christensen, David Johan; Bordignon, Mirko; Schultz, Ulrik Pagh

    2009-01-01

    Hand-coding locomotion controllers for modular robots is difficult due to their polymorphic nature. Instead, we propose to use a simple and distributed reinforcement learning strategy. ATRON modules with identical controllers can be assembled in any configuration. To optimize the robot's locomotion speed, its modules independently and in parallel adjust their behavior based on a single global reward signal. In simulation, we study the learning strategy's performance on different robot configurations. On the physical platform, we perform learning experiments with ATRON robots learning to move as fast…

  20. Insect olfactory coding and memory at multiple timescales.

    Science.gov (United States)

    Gupta, Nitin; Stopfer, Mark

    2011-10-01

    Insects can learn, allowing them great flexibility for locating seasonal food sources and avoiding wily predators. Because insects are relatively simple and accessible to manipulation, they provide good experimental preparations for exploring mechanisms underlying sensory coding and memory. Here we review how the intertwining of memory with computation enables the coding, decoding, and storage of sensory experience at various stages of the insect olfactory system. Individual parts of this system are capable of multiplexing memories at different timescales, and conversely, memory on a given timescale can be distributed across different parts of the circuit. Our sampling of the olfactory system emphasizes the diversity of memories, and the importance of understanding these memories in the context of computations performed by different parts of a sensory system. Published by Elsevier Ltd.

  1. Orthopedics coding and funding.

    Science.gov (United States)

    Baron, S; Duclos, C; Thoreux, P

    2014-02-01

    The French tarification à l'activité (T2A) prospective payment system is a financial system in which a health-care institution's resources are based on performed activity. Activity is described via the PMSI medical information system (programme de médicalisation du système d'information). The PMSI classifies hospital cases by clinical and economic categories known as diagnosis-related groups (DRG), each with an associated price tag. Coding a hospital case involves giving as realistic a description as possible so as to categorize it in the right DRG and thus ensure appropriate payment. For this, it is essential to understand what determines the pricing of an inpatient stay: namely, the code for the surgical procedure, the patient's principal diagnosis (reason for admission), codes for comorbidities (everything that adds to the management burden), and the management of the length of inpatient stay. The PMSI is used to analyze the institution's activity and dynamism: change from the previous year, relation to target, and comparison with competing institutions based on indicators such as the mean length of stay performance indicator (MLS PI). The T2A system improves overall care efficiency. Quality of care, however, is not presently taken into account in the payment made to the institution, as there are no indicators for this; work needs to be done on this topic. Copyright © 2014. Published by Elsevier Masson SAS.

  2. Self-regulated learning processes of medical students during an academic learning task.

    Science.gov (United States)

    Gandomkar, Roghayeh; Mirzazadeh, Azim; Jalili, Mohammad; Yazdani, Kamran; Fata, Ladan; Sandars, John

    2016-10-01

    This study was designed to identify the self-regulated learning (SRL) processes of medical students during a biomedical science learning task and to examine the associations of the SRL processes with previous performance in biomedical science examinations and subsequent performance on a learning task. A sample of 76 Year 1 medical students were recruited based on their performance in biomedical science examinations and stratified into previous high and low performers. Participants were asked to complete a biomedical science learning task. Participants' SRL processes were assessed before (self-efficacy, goal setting and strategic planning), during (metacognitive monitoring) and after (causal attributions and adaptive inferences) their completion of the task using an SRL microanalytic interview. Descriptive statistics were used to analyse the means and frequencies of SRL processes. Univariate and multiple logistic regression analyses were conducted to examine the associations of SRL processes with previous examination performance and the learning task performance. Most participants (from 88.2% to 43.4%) reported task-specific processes for SRL measures. Students who exhibited higher self-efficacy (odds ratio [OR] 1.44, 95% confidence interval [CI] 1.09-1.90) and reported task-specific processes for metacognitive monitoring (OR 6.61, 95% CI 1.68-25.93) and causal attributions (OR 6.75, 95% CI 2.05-22.25) measures were more likely to be high previous performers. Multiple analysis revealed that similar SRL measures were associated with previous performance. The use of task-specific processes for causal attributions (OR 23.00, 95% CI 4.57-115.76) and adaptive inferences (OR 27.00, 95% CI 3.39-214.95) measures were associated with being a high learning task performer. In multiple analysis, only the causal attributions measure was associated with high learning task performance. Self-efficacy, metacognitive monitoring and causal attributions measures were associated

  3. Learning ActionScript 3.0

    CERN Document Server

    Shupe, Rich

    2010-01-01

    If you're new to ActionScript 3.0, or want to enhance your skill set, this bestselling book is the ideal guide. Designers, developers, and programmers alike will find Learning ActionScript 3.0 invaluable for navigating ActionScript 3.0's learning curve. You'll learn the language by getting a clear look at essential topics such as logic, event handling, displaying content, classes, and much more. Updated for Flash Professional CS5, this revised and expanded edition delivers hands-on exercises and full-color code samples to help you increase your abilities as you progress through the book. Top

  4. Improved lossless intra coding for H.264/MPEG-4 AVC.

    Science.gov (United States)

    Lee, Yung-Lyul; Han, Ki-Hun; Sullivan, Gary J

    2006-09-01

    A new lossless intra coding method based on sample-by-sample differential pulse code modulation (DPCM) is presented as an enhancement of the H.264/MPEG-4 AVC standard. The H.264/AVC design includes a multidirectional spatial prediction method to reduce spatial redundancy by using neighboring samples as a prediction for the samples in a block of data to be encoded. In the new lossless intra coding method, the spatial prediction is performed based on samplewise DPCM instead of in the block-based manner used in the current H.264/AVC standard, while the block structure is retained for the residual difference entropy coding process. We show that the new method, based on samplewise DPCM, does not have a major complexity penalty, despite its apparent pipeline dependencies. Experiments show that the new lossless intra coding method reduces the bit rate by approximately 12% in comparison with the lossless intra coding method previously included in the H.264/AVC standard. As a result, the new method is currently being adopted into the H.264/AVC standard in a new enhancement project.
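    The samplewise prediction idea can be illustrated with a minimal sketch. This is a toy version that predicts only from the left neighbour; the actual H.264/AVC method described above predicts along multiple spatial directions, so treat this purely as an illustration of why DPCM is lossless:

```python
def dpcm_encode(row):
    """Samplewise horizontal DPCM: each sample is predicted by its
    left neighbour, and only the prediction residual is stored."""
    residuals = [row[0]]  # the first sample has no left neighbour
    for i in range(1, len(row)):
        residuals.append(row[i] - row[i - 1])
    return residuals

def dpcm_decode(residuals):
    """Invert the prediction: accumulate residuals to recover the
    original samples exactly (hence 'lossless')."""
    row = [residuals[0]]
    for r in residuals[1:]:
        row.append(row[-1] + r)
    return row
```

    Because the decoder reproduces the same predictor the encoder used, the round trip is exact; the residuals are merely cheaper to entropy-code than the raw samples.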

  5. Development and validation of ALEPH Monte Carlo burn-up code

    International Nuclear Information System (INIS)

    Stankovskiy, A.; Van den Eynde, G.; Vidmar, T.

    2011-01-01

    The Monte Carlo burn-up code ALEPH has been under development at SCK-CEN since 2004. Belonging to the category of shells coupling Monte Carlo transport (MCNP or MCNPX) with 'deterministic' depletion codes (ORIGEN-2.2), ALEPH possesses some unique features that distinguish it from other codes. The most important is full data consistency between the steady-state Monte Carlo and the time-dependent depletion calculations. Recent improvements of ALEPH concern the full implementation of general-purpose nuclear data libraries (JEFF-3.1.1, ENDF/B-VII, JENDL-3.3). The upgraded version of the code can treat isomeric branching ratios, neutron-induced fission product yields, spontaneous fission yields and the energy release per fission recorded in ENDF-formatted data files. An alternative algorithm for the time evolution of nuclide concentrations has been added, and a predictor-corrector mechanism and the calculation of nuclear heating are available as well. The code has been validated against the results of the REBUS experimental programme. The upgraded version of ALEPH shows better agreement with the measured data than other codes, including the previous version of ALEPH. (authors)

  6. Examining the linguistic coding differences hypothesis to explain individual differences in foreign language learning.

    Science.gov (United States)

    Sparks, R L

    1995-01-01

    In this paper, it is suggested that foreign language learning problems result from difficulties with native language learning and hypothesized that difficulties with phonological processing may be the locus of foreign language learning difficulties for some poor foreign language learners. Evidence is described that supports these positions. It is argued that conceptualizing foreign language learning problems as a language problem allows researchers to more clearly specify deficits related to the learning of a foreign language. Research evidence which shows that good and poor foreign language learners exhibit significantly different levels of native language skill and phonological processing is summarized. Finally, potential challenges to my hypotheses as an explanation for foreign language learning problems are reviewed.

  7. Perceptual learning of acoustic noise generates memory-evoked potentials.

    Science.gov (United States)

    Andrillon, Thomas; Kouider, Sid; Agus, Trevor; Pressnitzer, Daniel

    2015-11-02

    Experience continuously imprints on the brain at all stages of life. The traces it leaves behind can produce perceptual learning [1], which drives adaptive behavior to previously encountered stimuli. Recently, it has been shown that even random noise, a type of sound devoid of acoustic structure, can trigger fast and robust perceptual learning after repeated exposure [2]. Here, by combining psychophysics, electroencephalography (EEG), and modeling, we show that the perceptual learning of noise is associated with evoked potentials, without any salient physical discontinuity or obvious acoustic landmark in the sound. Rather, the potentials appeared whenever a memory trace was observed behaviorally. Such memory-evoked potentials were characterized by early latencies and auditory topographies, consistent with a sensory origin. Furthermore, they were generated even under conditions of diverted attention. The EEG waveforms could be modeled as standard evoked responses to auditory events (N1-P2) [3], triggered by idiosyncratic perceptual features acquired through learning. Thus, we argue that the learning of noise is accompanied by the rapid formation of sharp neural selectivity to arbitrary and complex acoustic patterns, within sensory regions. Such a mechanism bridges the gap between the short-term and longer-term plasticity observed in the learning of noise [2, 4-6]. It could also be key to the processing of natural sounds within auditory cortices [7], suggesting that the neural code for sound source identification will be shaped by experience as well as by acoustics. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Building Program Vector Representations for Deep Learning

    OpenAIRE

    Mou, Lili; Li, Ge; Liu, Yuxuan; Peng, Hao; Jin, Zhi; Xu, Yan; Zhang, Lu

    2014-01-01

    Deep learning has made significant breakthroughs in various fields of artificial intelligence. Advantages of deep learning include the ability to capture highly complicated features, weak involvement of human engineering, etc. However, it is still virtually impossible to use deep learning to analyze programs since deep architectures cannot be trained effectively with pure back propagation. In this pioneering paper, we propose the "coding criterion" to build program vector representations, whi...

  9. Adaptive variable-length coding for efficient compression of spacecraft television data.

    Science.gov (United States)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
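    The per-block code selection described above can be sketched as follows. The candidate codes here are hypothetical Golomb-Rice parameters (not the actual concatenated code set of the Rice-Plaunt system): for each block of residuals, the encoder computes the coded length under each candidate and keeps the cheapest, adapting automatically to the local statistics.

```python
def zigzag(v):
    """Map a signed residual to a non-negative integer."""
    return 2 * v if v >= 0 else -2 * v - 1

def rice_code_length(n, k):
    """Bits for a Rice code with parameter k: unary quotient,
    a stop bit, and k remainder bits."""
    return (n >> k) + 1 + k

def choose_code(block, ks=(0, 1, 2)):
    """Per-block adaptation: pick the parameter that minimises the
    coded length of this block of residuals."""
    mapped = [zigzag(v) for v in block]
    costs = {k: sum(rice_code_length(n, k) for n in mapped) for k in ks}
    best = min(costs, key=costs.get)
    return best, costs[best]
```

    A block of near-zero residuals selects a small parameter, while a busy block selects a larger one; the selection index itself is the only side information that must be transmitted per block.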

  10. The semaphore codes attached to a Turing machine via resets and their various limits

    OpenAIRE

    Rhodes, John; Schilling, Anne; Silva, Pedro V.

    2016-01-01

    We introduce semaphore codes associated to a Turing machine via resets. Semaphore codes provide an approximation theory for resets. In this paper we generalize the set-up of our previous paper "Random walks on semaphore codes and delay de Bruijn semigroups" to the infinite case by taking the profinite limit of $k$-resets to obtain $(-\\omega)$-resets. We mention how this opens new avenues to attack the P versus NP problem.

  11. Instance-based Policy Learning by Real-coded Genetic Algorithms and Its Application to Control of Nonholonomic Systems

    Science.gov (United States)

    Miyamae, Atsushi; Sakuma, Jun; Ono, Isao; Kobayashi, Shigenobu

    The stabilization control of nonholonomic systems has been extensively studied because it is essential for nonholonomic robot control problems. The difficulty is that a theoretical derivation of the control policy is not guaranteed to be achievable. In this paper, we present a reinforcement learning (RL) method with instance-based policy (IBP) representation, in which control policies for this class are optimized with respect to user-defined cost functions. Direct policy search (DPS) is an approach to RL in which the policy is represented by a parametric model and the model parameters are searched directly by optimization techniques, including genetic algorithms (GAs). In IBP representation, an instance consists of a state-action pair, and a policy consists of a set of instances. Several DPSs with IBP have been proposed previously; however, these methods sometimes fail to obtain optimal control policies when the state-action variables are continuous. In this paper, we present a real-coded GA for DPSs with IBP, specifically designed for continuous domains. Optimization of an IBP involves three difficulties: high dimensionality, epistasis, and multi-modality, and our solution is designed to overcome them. Policy search with IBP representation appears to be a high-dimensional optimization; however, the instances that can improve the fitness are often limited to the active instances (those used in the evaluation), and the number of active instances is small. We therefore treat the search as a low-dimensional problem by restricting the search variables to active instances. It is well known that functions with epistasis can be optimized efficiently with crossovers that satisfy the inheritance of statistics. For efficient search of an IBP, we propose extended crossover-like mutation (extended XLM), which generates a new instance around an existing instance while satisfying the inheritance of statistics.
For overcoming multi-modality, we

  12. Numerical method improvement for a subchannel code

    Energy Technology Data Exchange (ETDEWEB)

    Ding, W.J.; Gou, J.L.; Shan, J.Q. [Xi' an Jiaotong Univ., Shaanxi (China). School of Nuclear Science and Technology

    2016-07-15

    Previous studies showed that subchannel codes spend most of their CPU time solving the matrix formed by the conservation equations. Traditional matrix-solving methods such as Gaussian elimination and Gauss-Seidel iteration cannot meet the requirement of computational efficiency. Therefore, a new algorithm for solving the block penta-diagonal matrix is designed based on Stone's incomplete LU (ILU) decomposition method. In the new algorithm, the original block penta-diagonal matrix is decomposed into a block upper triangular matrix and a block lower triangular matrix, together with a small nonzero remainder matrix. The LU algorithm is then applied iteratively until convergence. To compare computational efficiency, the new algorithm is applied to the ATHAS code in this paper. The calculation results show that more than 80 % of the total CPU time can be saved with the new ILU algorithm for a 324-channel PWR assembly problem, compared with the original ATHAS code.
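    The scheme can be sketched in scalar (unblocked) form: factor the matrix approximately on its own sparsity pattern, then use the factors as a preconditioner inside a simple iteration until the residual converges. The test matrix and the plain Richardson iteration below are assumptions for illustration, not the ATHAS implementation:

```python
def ilu0(A):
    """Incomplete LU with zero fill-in: eliminate as in Gaussian
    elimination, but update only entries that are already nonzero."""
    n = len(A)
    LU = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            if LU[i][k] != 0:
                LU[i][k] /= LU[k][k]
                for j in range(k + 1, n):
                    if LU[i][j] != 0:  # keep the original sparsity pattern
                        LU[i][j] -= LU[i][k] * LU[k][j]
    return LU

def lu_solve(LU, b):
    """Forward solve with the unit-lower factor, back solve with U."""
    n = len(b)
    y = b[:]
    for i in range(n):
        for j in range(i):
            y[i] -= LU[i][j] * y[j]
    x = y[:]
    for i in reversed(range(n)):
        for j in range(i + 1, n):
            x[i] -= LU[i][j] * x[j]
        x[i] /= LU[i][i]
    return x

def ilu_richardson(A, b, iters=50):
    """Iterate x <- x + M^{-1}(b - Ax) with M = ILU(0) of A."""
    n = len(A)
    LU = ilu0(A)
    x = [0.0] * n
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        dx = lu_solve(LU, r)
        x = [x[i] + dx[i] for i in range(n)]
    return x
```

    For a penta-diagonal pattern the zero-fill factorization happens to coincide with the exact banded LU, so the iteration converges almost immediately; for general block matrices the remainder term makes it a genuinely iterative scheme.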

  13. "It's like we're grasping at anything": caregivers' education needs and preferred learning methods.

    Science.gov (United States)

    Mastel-Smith, Beth; Stanley-Hermanns, Melinda

    2012-07-01

    In this qualitative descriptive study, we explored caregivers' educational needs and preferred methods of information delivery. Descriptions are based on five focus groups (N = 29) conducted with ethnically diverse, current and past family caregivers, including those who had previously attended a structured educational program. Themes were derived from verbatim transcriptions of the data, which were then coded. Four categories of educational needs were identified: (a) respite, (b) caregiving essentials, (c) self-care, and (d) the emotional aspects of caregiving. Advantages and disadvantages of learning methods are discussed, along with reasons for and outcomes of attending caregiver workshops. An informed caregiver model is proposed. Health care providers must assess educational needs and strive to provide appropriate information as dictated by the care recipient's condition and caregiver's expressed desires. Innovative methods of delivering information that are congruent with different caregiving circumstances and learning preferences must be developed and tested.

  14. Entanglement-assisted quantum MDS codes from negacyclic codes

    Science.gov (United States)

    Lu, Liangdong; Li, Ruihu; Guo, Luobin; Ma, Yuena; Liu, Yang

    2018-03-01

    The entanglement-assisted formalism generalizes the standard stabilizer formalism, which can transform arbitrary classical linear codes into entanglement-assisted quantum error-correcting codes (EAQECCs) by using pre-shared entanglement between the sender and the receiver. In this work, we construct six classes of q-ary entanglement-assisted quantum MDS (EAQMDS) codes based on classical negacyclic MDS codes by exploiting two or more pre-shared maximally entangled states. We show that two of these six classes of q-ary EAQMDS codes have minimum distance larger than q+1. Most of these q-ary EAQMDS codes are new in the sense that their parameters are not covered by the codes available in the literature.

  15. Visualizing code and coverage changes for code review

    NARCIS (Netherlands)

    Oosterwaal, Sebastiaan; van Deursen, A.; De Souza Coelho, R.; Sawant, A.A.; Bacchelli, A.

    2016-01-01

    One of the tasks of reviewers is to verify that code modifications are well tested. However, current tools offer little support in understanding precisely how changes to the code relate to changes to the tests. In particular, it is hard to see whether (modified) test code covers the changed code.

  16. Minimal-memory realization of pearl-necklace encoders of general quantum convolutional codes

    International Nuclear Information System (INIS)

    Houshmand, Monireh; Hosseini-Khayat, Saied

    2011-01-01

    Quantum convolutional codes, like their classical counterparts, promise to offer higher error correction performance than block codes of equivalent encoding complexity, and are expected to find important applications in reliable quantum communication where a continuous stream of qubits is transmitted. Grassl and Roetteler devised an algorithm to encode a quantum convolutional code with a ''pearl-necklace'' encoder. Despite their algorithm's theoretical significance as a neat way of representing quantum convolutional codes, it is not well suited to practical realization. In fact, there is no straightforward way to implement any given pearl-necklace structure. This paper closes the gap between theoretical representation and practical implementation. In our previous work, we presented an efficient algorithm to find a minimal-memory realization of a pearl-necklace encoder for Calderbank-Shor-Steane (CSS) convolutional codes. This work is an extension of our previous work and presents an algorithm for turning a pearl-necklace encoder for a general (non-CSS) quantum convolutional code into a realizable quantum convolutional encoder. We show that a minimal-memory realization depends on the commutativity relations between the gate strings in the pearl-necklace encoder. We find a realization by means of a weighted graph which details the noncommutative paths through the pearl necklace. The weight of the longest path in this graph is equal to the minimal amount of memory needed to implement the encoder. The algorithm has a polynomial-time complexity in the number of gate strings in the pearl-necklace encoder.
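    The final step described above, reading the minimal memory off as the weight of the longest path in the dependency graph, can be sketched generically. The graph below is a made-up weighted DAG, not one derived from an actual pearl-necklace encoder:

```python
def longest_path_weight(edges, nodes):
    """Maximum-weight path in a DAG. `edges` maps each node to a list
    of (successor, weight) pairs; in the construction described above,
    the weight of the longest path gives the minimal encoder memory."""
    memo = {}

    def best_from(u):
        # memoised DFS: longest path starting at u
        if u not in memo:
            memo[u] = max((w + best_from(v) for v, w in edges.get(u, [])),
                          default=0)
        return memo[u]

    return max(best_from(u) for u in nodes)
```

    Because the graph of noncommutative dependencies is acyclic, the longest path is computable in time linear in the number of edges, consistent with the polynomial-time complexity claimed for the algorithm.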

  17. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    Science.gov (United States)

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D micro-blocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.

  18. Homological stabilizer codes

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. - Highlights: We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. We find and classify all 2D homological stabilizer codes. We find optimal codes among the homological stabilizer codes.

  19. SPECTRAL AMPLITUDE CODING OCDMA SYSTEMS USING ENHANCED DOUBLE WEIGHT CODE

    Directory of Open Access Journals (Sweden)

    F.N. HASOON

    2006-12-01

    Full Text Available A new code structure for spectral amplitude coding optical code division multiple access systems based on double-weight (DW) code families is proposed. The DW code has a fixed weight of two. The enhanced double-weight (EDW) code is a variation of the DW code family that can have a variable weight greater than one. The EDW code possesses ideal cross-correlation properties and exists for every natural number n. Much better performance can be achieved with the EDW code than with existing codes such as the Hadamard and Modified Frequency-Hopping (MFH) codes; both theoretical analysis and simulation confirm this improvement.
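    The ideal cross-correlation property can be checked mechanically: for any two distinct code words in the set, the in-phase cross-correlation should not exceed one. A minimal sketch follows; the weight-2 code words in the test are hypothetical examples, not the actual EDW construction:

```python
def cross_correlation(a, b):
    """In-phase cross-correlation of two binary (0/1) code words:
    the number of chip positions where both are 1."""
    return sum(x & y for x, y in zip(a, b))

def max_cross_correlation(codebook):
    """Worst-case pairwise cross-correlation over a code set;
    a value of at most 1 is the 'ideal' property cited above."""
    worst = 0
    for i in range(len(codebook)):
        for j in range(i + 1, len(codebook)):
            worst = max(worst, cross_correlation(codebook[i], codebook[j]))
    return worst
```

    Low cross-correlation is what limits multiple-access interference between users sharing the spectral channel.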

  20. Feature selection and multi-kernel learning for sparse representation on a manifold

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-03-01

    Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. © 2013 Elsevier Ltd.

  1. Feature selection and multi-kernel learning for sparse representation on a manifold.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2014-03-01

    Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. Copyright © 2013 Elsevier Ltd. All rights reserved.
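    The core sparse-coding step, minimizing a reconstruction error plus an l1 penalty, can be sketched with plain iterative shrinkage-thresholding (ISTA). This is only the unregularized baseline such methods build on; the feature-selection, multiple-kernel, and graph-regularization terms of the paper are omitted, and the dictionary in the example is a toy assumption:

```python
def soft_threshold(v, t):
    """Shrink v toward zero by t (the proximal operator of the l1 norm)."""
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

def ista_sparse_code(D, x, lam=0.1, step=0.5, iters=200):
    """Minimise ||x - D s||^2 / 2 + lam * ||s||_1 by ISTA.
    D is a list of dictionary atoms (each a list the length of x)."""
    n_atoms, dim = len(D), len(x)
    s = [0.0] * n_atoms
    for _ in range(iters):
        # residual r = x - D s, using the codes from the previous step
        recon = [sum(D[k][i] * s[k] for k in range(n_atoms))
                 for i in range(dim)]
        r = [x[i] - recon[i] for i in range(dim)]
        # gradient step on each coefficient, then shrink
        for k in range(n_atoms):
            g = sum(D[k][i] * r[i] for i in range(dim))
            s[k] = soft_threshold(s[k] + step * g, step * lam)
    return s
```

    With an orthonormal dictionary the fixed point is the closed-form soft-thresholded projection, which makes the sketch easy to sanity-check; the step size must stay below the reciprocal of the largest eigenvalue of the dictionary Gram matrix for convergence.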

  2. Improved predictions of nuclear reaction rates with the TALYS reaction code for astrophysical applications

    International Nuclear Information System (INIS)

    Goriely, S.; Hilaire, S.; Koning, A.J

    2008-01-01

    Context. Nuclear reaction rates for astrophysical applications are traditionally determined on the basis of Hauser-Feshbach reaction codes. These codes adopt a number of approximations that have never been tested, such as a simplified width fluctuation correction, the neglect of delayed or multiple-particle emission during the electromagnetic decay cascade, or the absence of the pre-equilibrium contribution at increasing incident energies. Aims. The reaction code TALYS has been recently updated to estimate the Maxwellian-averaged reaction rates that are of astrophysical relevance. These new developments enable the reaction rates to be calculated with increased accuracy and reliability and the approximations of previous codes to be investigated. Methods. The TALYS predictions for the thermonuclear rates of relevance to astrophysics are detailed and compared with those derived by widely-used codes for the same nuclear ingredients. Results. It is shown that TALYS predictions may differ significantly from those of previous codes, in particular for nuclei for which little or no nuclear data is available. The pre-equilibrium process is shown to influence the astrophysics rates of exotic neutron-rich nuclei significantly. For the first time, the Maxwellian-averaged (n, 2n) reaction rate is calculated for all nuclei and its competition with the radiative capture rate is discussed. Conclusions. The TALYS code provides a new tool to estimate all nuclear reaction rates of relevance to astrophysics with improved accuracy and reliability. (authors)

  3. A Latin Functionalist Dictionary as a Self-Learning Language Device: Previous Experiences to Digitalization

    Science.gov (United States)

    Márquez, Manuel; Chaves, Beatriz

    2016-01-01

    The application of a methodology based on S.C. Dik's Functionalist Grammar linguistic principles, addressed to the teaching of Latin to secondary students, has resulted in a quantitative improvement in students' acquisition of knowledge. To do so, we have used a self-learning tool, an ad hoc dictionary, of which the use in…

  4. A Systematic Method for Verification and Validation of Gyrokinetic Microstability Codes

    Energy Technology Data Exchange (ETDEWEB)

    Bravenec, Ronald [Fourth State Research, Austin, TX (United States)

    2017-11-14

    My original proposal for the period Feb. 15, 2014 through Feb. 14, 2017 called for an integrated validation and verification effort carried out by myself with collaborators. The validation component would require experimental profile and power-balance analysis. In addition, it would require running the gyrokinetic codes varying the input profiles within experimental uncertainties to seek agreement with experiment before discounting a code as invalidated. Therefore, validation would require a major increase of effort over my previous grant periods which covered only code verification (code benchmarking). Consequently, I had requested full-time funding. Instead, I am being funded at somewhat less than half time (5 calendar months per year). As a consequence, I decided to forego the validation component and to only continue the verification efforts.

  5. LDPC Codes with Minimum Distance Proportional to Block Size

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. 
Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low
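    The minimum-distance property itself can be verified by brute force for tiny codes, since a codeword is any bit vector annihilated by the parity-check matrix mod 2. The search is exponential in block size, so this is illustration only, and the matrix in the test is the classic [7,4] Hamming code rather than one of the LDPC constructions described here:

```python
from itertools import product

def min_distance(H, n):
    """Brute-force minimum Hamming distance of the binary linear code
    with parity-check matrix H (a list of rows of n bits each)."""
    best = None
    for word in product((0, 1), repeat=n):
        if not any(word):
            continue  # skip the all-zero word
        # a codeword satisfies every parity check mod 2
        if all(sum(h[i] & word[i] for i in range(n)) % 2 == 0 for h in H):
            w = sum(word)
            best = w if best is None or w < best else best
    return best
```

    For research-scale block lengths one instead bounds the distance analytically, e.g. via the ensemble-average weight enumerators mentioned above.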

  6. FIRAC: a computer code to predict fire-accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Krause, F.R.; Tang, P.K.; Andrae, R.W.; Martin, R.A.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire

  7. Machine Learning an algorithmic perspective

    CERN Document Server

    Marsland, Stephen

    2009-01-01

    Traditional books on machine learning can be divided into two groups - those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text. Theory Backed up by Practical Examples: The book covers neural networks, graphical models, reinforcement le

  8. Short-lived non-coding transcripts (SLiTs): Clues to regulatory long non-coding RNA.

    Science.gov (United States)

    Tani, Hidenori

    2017-03-22

    Whole transcriptome analyses have revealed a large number of novel long non-coding RNAs (lncRNAs). Although the importance of lncRNAs has been documented in previous reports, the biological and physiological functions of lncRNAs remain largely unknown. The role of lncRNAs remains an elusive problem. Here, I propose a clue to the identification of regulatory lncRNAs. The key point is RNA half-life. RNAs with a long half-life (t1/2 > 4 h) contain a significant proportion of ncRNAs, as well as mRNAs involved in housekeeping functions, whereas RNAs with a short half-life (t1/2 < 4 h) include known regulatory ncRNAs and regulatory mRNAs. This novel class of ncRNAs with a short half-life can be categorized as Short-Lived non-coding Transcripts (SLiTs). I consider that SLiTs are likely to be rich in functionally uncharacterized regulatory RNAs. This review describes recent progress in research into SLiTs.

  9. Structure and operation of the ITS code system

    International Nuclear Information System (INIS)

    Halbleib, J.

    1988-01-01

    The TIGER series of time-independent coupled electron-photon Monte Carlo transport codes is a group of multimaterial and multidimensional codes designed to provide a state-of-the-art description of the production and transport of the electron-photon cascade by combining microscopic photon transport with a macroscopic random walk for electron transport. Major contributors to its evolution are listed. The author and his associates are primarily code users rather than code developers, and have borrowed freely from existing work wherever possible. Nevertheless, their efforts have resulted in various software packages for describing the production and transport of the electron-photon cascade that they found sufficiently useful to warrant dissemination through the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory. The ITS system (Integrated TIGER Series) represents the organization and integration of this combined software, along with much additional capability from previously unreleased work, into a single convenient package of exceptional user friendliness and portability. Emphasis is on simplicity and flexibility of application without sacrificing the rigor or sophistication of the physical model

  10. Geoethics: what can we learn from existing bio-, ecological, and engineering ethics codes?

    Science.gov (United States)

    Kieffer, Susan W.; Palka, John

    2014-05-01

    Many scientific disciplines are concerned about ethics, and codes of ethics for these professions exist, generally through the professional scientific societies such as the American Geophysical Union (AGU), American Geological Institute (AGI), American Association of Petroleum Geologists (AAPG), National Society of Professional Engineers (NSPE), Ecological Society of America (ESA), and many others worldwide. These vary considerably in depth and specificity. In this poster, we review existing codes with the goal of extracting fundamentals that should/can be broadly applied to all geo-disciplines. Most of these codes elucidate a set of principles that cover practical issues such as avoiding conflict of interest, avoiding plagiarism, not permitting illegitimate use of intellectual products, enhancing the prestige of the profession, acknowledging an obligation to perform services only in areas of competence, issuing public statements only in an objective manner, holding paramount the welfare of the public, and in general conducting oneself honorably, responsibly, and lawfully. It is striking that, given that the work of these societies and their members is relevant to the future of the earth, few discuss in any detail ethical obligations regarding our relation to the planet itself. The AGU code, for example, only states that "Members have an ethical obligation to weigh the societal benefits of their research against the costs and risks to human and animal welfare and impacts on the environment and society." The NSPE and AGI codes go somewhat further: "Engineers are encouraged to adhere to the principles of sustainable development in order to protect the environment for future generations," and "Geoscientists should strive to protect our natural environment. They should understand and anticipate the environmental consequences of their work and should disclose the consequences of recommended actions.
They should acknowledge that resource extraction and use are necessary

  11. Can Strategies Facilitate Learning from Illustrated Science Texts?

    Science.gov (United States)

    Iding, Marie K.

    2000-01-01

    Examines the effectiveness of schema training in illustration types and text-illustration relations for learning from college level physiology texts and discusses findings that are consistent with prior research on learning from illustrated materials and with dual coding theory. Considers future directions for strategy training research and…

  12. Summary of papers on current and anticipated uses of thermal-hydraulic codes

    Energy Technology Data Exchange (ETDEWEB)

    Caruso, R.

    1997-07-01

    The author reviews a range of recent papers which discuss possible uses and future development needs for thermal/hydraulic codes in the nuclear industry. From this review, eight common recommendations are extracted. They are: improve the user interface so that more people can use the code, so that models are easier and less expensive to prepare and maintain, and so that the results are scrutable; design the code so that it can easily be coupled to other codes, such as core physics, containment, fission product behaviour during severe accidents; improve the numerical methods to make the code more robust and especially faster running, particularly for low pressure transients; ensure that future code development includes assessment of code uncertainties as an integral part of code verification and validation; provide extensive user guidelines or structure the code so that the 'user effect' is minimized; include the capability to model multiple fluids (gas and liquid phase); design the code in a modular fashion so that new models can be added easily; provide the ability to include detailed or simplified component models; build on work previously done with other codes (RETRAN, RELAP, TRAC, CATHARE) and other code validation efforts (CSAU, CSNI SET and IET matrices).

  13. Summary of papers on current and anticipated uses of thermal-hydraulic codes

    International Nuclear Information System (INIS)

    Caruso, R.

    1997-01-01

    The author reviews a range of recent papers which discuss possible uses and future development needs for thermal/hydraulic codes in the nuclear industry. From this review, eight common recommendations are extracted. They are: improve the user interface so that more people can use the code, so that models are easier and less expensive to prepare and maintain, and so that the results are scrutable; design the code so that it can easily be coupled to other codes, such as core physics, containment, fission product behaviour during severe accidents; improve the numerical methods to make the code more robust and especially faster running, particularly for low pressure transients; ensure that future code development includes assessment of code uncertainties as an integral part of code verification and validation; provide extensive user guidelines or structure the code so that the 'user effect' is minimized; include the capability to model multiple fluids (gas and liquid phase); design the code in a modular fashion so that new models can be added easily; provide the ability to include detailed or simplified component models; build on work previously done with other codes (RETRAN, RELAP, TRAC, CATHARE) and other code validation efforts (CSAU, CSNI SET and IET matrices).

  14. Hello World Deep Learning in Medical Imaging.

    Science.gov (United States)

    Lakhani, Paras; Gray, Daniel L; Pett, Carl R; Nagy, Paul; Shih, George

    2018-05-03

    Machine learning has recently gained popularity in medical imaging, notably deep learning, which has achieved state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries that simplify its use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.
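A minimal stand-alone sketch in the spirit of the tutorial above, assuming nothing beyond the Python standard library: logistic regression separating bright from dark 2x2 "images". The data, dimensions, and hyperparameters are illustrative placeholders, not taken from the paper (which targets real medical images and deep networks); the sketch only shows the train/predict skeleton end to end.

```python
import math
import random

# Hypothetical "hello world" classifier: logistic regression telling
# bright 2x2 synthetic images from dark ones. All values illustrative.

random.seed(1)

def sample(label):
    """Synthetic 4-pixel image: bright (label 1) or dark (label 0)."""
    base = 0.8 if label == 1 else 0.2
    pixels = [min(1.0, max(0.0, base + random.gauss(0, 0.1))) for _ in range(4)]
    return pixels, label

data = [sample(k) for k in (0, 1) for _ in range(50)]

w, b = [0.0] * 4, 0.0

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))      # sigmoid probability of "bright"

for _ in range(200):                        # plain gradient descent on log loss
    for x, y in data:
        g = predict(x) - y                  # gradient of the loss w.r.t. the logit
        w = [wi - 0.1 * g * xi for wi, xi in zip(w, x)]
        b -= 0.1 * g

accuracy = sum((predict(x) > 0.5) == (y == 1) for x, y in data) / len(data)
```

With such cleanly separated classes the trained model classifies essentially all of the training set correctly; a real project would of course hold out a test set.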

  15. Students’ Perceptions of Learning Mathematics With Cellular Phones and Applets

    Directory of Open Access Journals (Sweden)

    Wajeeh M. Daher

    2009-03-01

    Full Text Available This paper describes the perceptions of middle school students regarding learning mathematics with cellular phones and web applets, their perceptions regarding the differences between these two electronic devices and their preferences regarding using the devices in learning mathematics. To analyze these perceptions I used the grounded theory approach which involves: open coding, axial coding, and selective coding, where the unit of analysis was the sentence in each of the interviews. The research findings imply that the participants perceived different aspects of both of the electronic devices: the availability of the device, the collaboration aspect, the communication aspect, the size of the device, and the swiftness of working with the device. These aspects influenced the participants’ decisions when, where and how to use each of the devices for the learning of mathematics. More participants preferred the cellular phone over the applet, primarily for its small size, which makes it easy to carry, as well as for its communication facilities.

  16. Influence of the workplace on learning physical examination skills.

    Science.gov (United States)

    Duvivier, Robbert; Stalmeijer, Renée; van Dalen, Jan; van der Vleuten, Cees; Scherpbier, Albert

    2014-03-28

    Hospital clerkships are considered crucial for acquiring competencies such as diagnostic reasoning and clinical skills. The actual learning process in the hospital remains poorly understood. This study investigates how students learn clinical skills in workplaces and the factors affecting this. Six focus group sessions were held with 32 students in an Internal Medicine rotation (4-9 students per group; sessions 80-90 minutes). Verbatim transcripts were analysed for emerging themes and coded independently by three researchers, followed by constant comparison and axial coding. Students report learning the systematics of the physical examination, gaining agility, and becoming able to recognise pathological signs. The learning process combines working alongside others and working independently with increasing responsibility for patient care. Helpful behaviours include making findings explicit through patient files or during observation, receiving feedback on abnormal findings, and taking initiative. Factors affecting the process negatively include lack of supervision, uncertainty about tasks and expectations, and social context such as the hierarchy of learners and the perceived learning environment. Although individual student experiences vary greatly between hospitals, proactivity and participation appear to be the central drivers of learning. These results can improve the quality of existing programmes and help design new ways to learn physical examination skills.

  17. Machine learning and medical imaging

    CERN Document Server

    Shen, Dinggang; Sabuncu, Mert

    2016-01-01

    Machine Learning and Medical Imaging presents state-of-the-art machine learning methods in medical image analysis. It first summarizes cutting-edge machine learning algorithms in medical imaging, including not only classical probabilistic modeling and learning methods, but also recent breakthroughs in deep learning, sparse representation/coding, and big data hashing. In the second part leading research groups around the world present a wide spectrum of machine learning methods with application to different medical imaging modalities, clinical domains, and organs. The biomedical imaging modalities include ultrasound, magnetic resonance imaging (MRI), computed tomography (CT), histology, and microscopy images. The targeted organs span the lung, liver, brain, and prostate, while there is also a treatment of examining genetic associations. Machine Learning and Medical Imaging is an ideal reference for medical imaging researchers, industry scientists and engineers, advanced undergraduate and graduate students, a...

  18. PIPE STRESS and VERPIP codes for stress analysis and verifications of PEC reactor piping

    International Nuclear Information System (INIS)

    Cesari, F.; Ferranti, P.; Gasparrini, M.; Labanti, L.

    1975-01-01

    To design LMFBR piping systems following ASME Sct. III requirements, unusually flexible computer codes that can handle both the piping and its guard tube must be adopted. For this purpose the PIPE STRESS code, previously prepared by Southern-Service, has been modified. Subroutines for detailed stress analysis and principal-stress calculations on all sections of the piping have been written and fitted into the code. A plotter can also be used. The VERPIP code for automatic verification of piping against Class 1 Sct. III prescriptions has also been prepared. The results of applying the PIPE STRESS and VERPIP codes to PEC piping are given in Section III of this report.

  19. Indiana secondary students' evolution learning experiences and demarcations of science from non-science

    Science.gov (United States)

    Donnelly, Lisa A.

    2007-12-01

    Previous research has documented students' conceptual difficulties learning evolution and how student learning may be related to students' views of evolution and science. This mixed methods study addressed how 74 high school biology students from six Indiana high schools viewed their evolution learning experiences, the demarcations of science from non-science, and evolution understanding and acceptance. Data collection entailed qualitative and quantitative methods including interviews, classroom observations, surveys, and assessments to address students' views of science and non-science, evolution learning experiences, and understanding and acceptance of evolution. Qualitative coding generated several demarcation and evolution learning experience codes that were subsequently used in quantitative comparisons of evolution understanding and acceptance. The majority of students viewed science as empirical, tentative but ultimately leading to certain truth, compatible with religion, the product of experimental work, and the product of human creativity. None of the students offered the consensus NOS view that scientific theories are substantiated explanations of phenomena while scientific laws state relationships or patterns between phenomena. About half the students indicated that scientific knowledge was subjectively and socio-culturally influenced. The majority of students also indicated that they had positive evolution learning experiences and thought evolution should be taught in secondary school. The quantitative comparisons revealed how students who viewed scientific knowledge as subjectively and socio-culturally influenced had higher understanding than their peers. Furthermore, students who maintained that science and religion were compatible did not differ with respect to understanding but had higher acceptance than their peers who viewed science and religion as conflicting. Furthermore, students who maintained that science must be consistent with their

  20. Learning sparse generative models of audiovisual signals

    OpenAIRE

    Monaci, Gianluca; Sommer, Friedrich T.; Vandergheynst, Pierre

    2008-01-01

    This paper presents a novel framework to learn sparse representations for audiovisual signals. An audiovisual signal is modeled as a sparse sum of audiovisual kernels. The kernels are bimodal functions made of synchronous audio and video components that can be positioned independently and arbitrarily in space and time. We design an algorithm capable of learning sets of such audiovisual, synchronous, shift-invariant functions by alternatingly solving a coding and a learning problem.

  1. Deep generative learning of location-invariant visual word recognition

    Science.gov (United States)

    Di Bono, Maria Grazia; Zorzi, Marco

    2013-01-01

    It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words—which was the model's learning objective

  2. Deep generative learning of location-invariant visual word recognition.

    Science.gov (United States)

    Di Bono, Maria Grazia; Zorzi, Marco

    2013-01-01

    It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words-which was the model's learning objective

  3. Deep generative learning of location-invariant visual word recognition

    Directory of Open Access Journals (Sweden)

    Maria Grazia eDi Bono

    2013-09-01

    Full Text Available It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centred (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words – which was the model’s learning objective – is largely based on letter-level information.

  4. Coarse-coded higher-order neural networks for PSRI object recognition. [position, scale, and rotation invariant

    Science.gov (United States)

    Spirkovska, Lilly; Reid, Max B.

    1993-01-01

    A higher-order neural network (HONN) can be designed to be invariant to changes in scale, translation, and inplane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Consequently, fewer training passes and a smaller training set are required to learn to distinguish between objects. The size of the input field is limited, however, because of the memory required for the large number of interconnections in a fully connected HONN. By coarse coding the input image, the input field size can be increased to allow the larger input scenes required for practical object recognition problems. We describe a coarse coding technique and present simulation results illustrating its usefulness and its limitations. Our simulations show that a third-order neural network can be trained to distinguish between two objects in a 4096 x 4096 pixel input field independent of transformations in translation, in-plane rotation, and scale in less than ten passes through the training set. Furthermore, we empirically determine the limits of the coarse coding technique in the object recognition domain.
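The coarse coding technique described above can be illustrated with a small sketch: a pixel in a large field activates one cell in each of several shifted low-resolution fields, and the combination of active coarse cells localizes it far more finely than any single field could. The field size, coarse factor, and offsets below are illustrative placeholders, not the paper's values.

```python
# Coarse coding sketch: one large field represented by several shifted
# low-resolution fields. Each coarse cell covers FACTOR x FACTOR pixels;
# the OFFSETS stagger the fields so that the joint code resolves
# position down to the offset spacing. All values are illustrative.

FACTOR = 8              # pixels per coarse cell, per axis
OFFSETS = [0, 2, 4, 6]  # per-field shift, spaced more finely than FACTOR

def coarse_code(x, y):
    """Map pixel (x, y) to the coarse cell it activates in each field."""
    return [((x + d) // FACTOR, (y + d) // FACTOR) for d in OFFSETS]

# Pixels closer together than the offset spacing share a joint code...
assert coarse_code(0, 0) == coarse_code(1, 1)
# ...while pixels two apart already activate a different set of cells.
assert coarse_code(0, 0) != coarse_code(2, 2)
```

Memory drops because each coarse field has (1/FACTOR)^2 as many units as the full field, while the joint code still resolves position to the offset spacing rather than to FACTOR pixels.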

  5. Assessing Assessment: In Pursuit of Meaningful Learning

    Science.gov (United States)

    Rootman-le Grange, Ilse; Blackie, Margaret A. L.

    2018-01-01

    The challenge of supporting the development of meaningful learning is prevalent in chemistry education research. One of the core activities used in the learning process is assessments. The aim of this paper is to illustrate how the semantics dimension of Legitimation Code Theory can be a helpful tool to critique the quality of assessments and…

  6. DLLExternalCode

    Energy Technology Data Exchange (ETDEWEB)

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
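The coupling pattern described above (write the caller's inputs to a file, run the external application, read its outputs back) can be sketched as follows. The sketch is in Python rather than a C DLL, and the "external code" is a stand-in script that squares its inputs, so all names here are illustrative, not part of DLLExternalCode itself.

```python
import os
import subprocess
import sys
import tempfile

# Sketch of the file-based coupling pattern: serialize inputs, run the
# external application as a subprocess, parse its output file.

def run_external(inputs):
    with tempfile.TemporaryDirectory() as d:
        in_path = os.path.join(d, "inputs.txt")
        out_path = os.path.join(d, "outputs.txt")
        with open(in_path, "w") as f:
            f.write("\n".join(str(v) for v in inputs))
        # Stand-in external application: reads the input file and writes
        # the square of each value to the output file.
        script = (
            "import sys\n"
            "vals = [float(line) for line in open(sys.argv[1])]\n"
            "open(sys.argv[2], 'w').write('\\n'.join(str(v * v) for v in vals))\n"
        )
        subprocess.run([sys.executable, "-c", script, in_path, out_path],
                       check=True)
        with open(out_path) as f:
            return [float(line) for line in f]

print(run_external([1.0, 2.0, 3.0]))  # -> [1.0, 4.0, 9.0]
```

In the real DLL the file formats and the command line would come from the instructions file rather than being hard-coded.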

  7. High efficiency video coding coding tools and specification

    CERN Document Server

    Wien, Mathias

    2015-01-01

    The video coding standard High Efficiency Video Coding (HEVC) targets improved compression performance for video resolutions of HD and beyond, providing Ultra HD video at similar compressed bit rates as HD video encoded with the well-established video coding standard H.264 | AVC. Based on known concepts, new coding structures and improved coding tools have been developed and specified in HEVC. The standard is expected to be taken up easily by established industry as well as new endeavors, answering the needs of today's connected and ever-evolving online world. This book presents the High Efficiency Video Coding standard and explains it in a clear and coherent language. It provides a comprehensive and consistently written description, all of a piece. The book targets both newcomers to video coding and experts in the field. While providing sections with introductory text for the beginner, it serves as a well-arranged reference book for the expert. The book provides a comprehensive reference for th...

  8. Multimedia Learning: Beyond Modality. Commentary.

    Science.gov (United States)

    Reimann, P.

    2003-01-01

    Identifies and summarizes instructional messages in the articles in this theme issue and also identifies central theoretical issues, focusing on: (1) external representations; (2) dual coding theory; and (3) the effects of animations on learning. (SLD)

  9. Development of 1D Liner Compression Code for IDL

    Science.gov (United States)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

    A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed via inductively driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed in sequence at each time step to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table-lookup approach. The commercial low-frequency electromagnetic field solver ANSYS Maxwell 3D is used to solve the magnetic field profile for a static liner at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with the results from the commercial explicit-dynamics solver ANSYS Explicit Dynamics and with a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.
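The correction-factor table lookup mentioned above amounts to 1D interpolation over a precomputed table of factors versus liner radius. A minimal sketch, with illustrative placeholder radii and factor values (not data from the IDL work):

```python
import bisect

# Fold 2D/3D field effects into a 1D calculation by interpolating a
# precomputed correction-factor table versus liner radius.
# Table values below are illustrative placeholders.

RADII = [0.01, 0.02, 0.05, 0.10]        # liner radius (m), ascending
FACTORS = [1.30, 1.18, 1.07, 1.00]      # field correction at each radius

def correction(r):
    """Linearly interpolate the correction factor at radius r (clamped)."""
    if r <= RADII[0]:
        return FACTORS[0]
    if r >= RADII[-1]:
        return FACTORS[-1]
    i = bisect.bisect_right(RADII, r)
    x0, x1 = RADII[i - 1], RADII[i]
    f0, f1 = FACTORS[i - 1], FACTORS[i]
    return f0 + (f1 - f0) * (r - x0) / (x1 - x0)

# Inside the time loop, the 1D field would then be scaled as e.g.
# b_field = correction(r) * ideal_1d_field(r)
```

Precomputing the table offline (here, with a 3D solver such as ANSYS Maxwell 3D) keeps the per-time-step cost of the 1D code to a cheap interpolation.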

  10. Decoding small surface codes with feedforward neural networks

    Science.gov (United States)

    Varsamopoulos, Savvas; Criger, Ben; Bertels, Koen

    2018-01-01

    Surface codes reach high error thresholds when decoded with known algorithms, but the decoding time will likely exceed the available time budget, especially for near-term implementations. To decrease the decoding time, we reduce the decoding problem to a classification problem that a feedforward neural network can solve. We investigate quantum error correction and fault tolerance at small code distances using neural network-based decoders, demonstrating that the neural network can generalize to inputs that were not provided during training and that it can reach decoding performance similar to or better than previous algorithms. We conclude by discussing the time required by a feedforward neural network decoder in hardware.
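The reduction of decoding to classification can be illustrated on a toy stand-in for a surface code: the distance-3 bit-flip repetition code, whose 2-bit syndrome identifies which single qubit (if any) was flipped. A minimal single-layer feedforward (softmax) network, trained here with plain gradient descent, learns the syndrome-to-correction map; the architecture and data are illustrative and far simpler than the paper's surface-code decoders.

```python
import math
import random

# Toy stand-in for surface-code decoding: distance-3 bit-flip repetition
# code with parity checks Z0Z1 and Z1Z2. Decoding = 4-way classification
# from the 2-bit syndrome to a correction class.

CHECKS = [(0, 1), (1, 2)]          # qubit pairs measured by each parity check
ERRORS = [(), (0,), (1,), (2,)]    # classes: no flip, flip q0, q1, q2

def syndrome(error):
    bits = [0, 0, 0]
    for q in error:
        bits[q] ^= 1
    return tuple(bits[a] ^ bits[b] for a, b in CHECKS)

# Training data: one (syndrome, class) pair per error pattern.
data = [(syndrome(e), k) for k, e in enumerate(ERRORS)]

random.seed(0)
W = [[random.uniform(-0.1, 0.1) for _ in range(2)] for _ in range(4)]
b = [0.0] * 4

def scores(s):
    return [sum(W[k][j] * s[j] for j in range(2)) + b[k] for k in range(4)]

def softmax(z):
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [v / total for v in exps]

for _ in range(2000):               # plain gradient descent, lr = 0.5
    for s, k in data:
        p = softmax(scores(s))
        for c in range(4):
            g = p[c] - (1.0 if c == k else 0.0)
            for j in range(2):
                W[c][j] -= 0.5 * g * s[j]
            b[c] -= 0.5 * g

def decode(s):
    """Classify a syndrome: index of the most likely error pattern."""
    z = scores(s)
    return z.index(max(z))
```

Once trained, `decode` is a handful of multiply-adds, which is the point of the approach: inference cost is fixed and small, unlike matching-based decoders whose runtime grows with the syndrome.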

  11. SU-A-210-01: Why Should We Learn Radiation Oncology Billing?

    International Nuclear Information System (INIS)

    Wu, H.

    2015-01-01

    The purpose of this student annual meeting is to address topics that are becoming more relevant to medical physicists, but are not frequently addressed, especially for students and trainees just entering the field. The talk is divided into two parts: medical billing and regulations. Hsinshun Wu – Why should we learn radiation oncology billing? Many medical physicists do not like to be involved with medical billing or coding during their career. They believe billing is not their responsibility and sometimes they even refuse to participate in the billing process if given the chance. This presentation will talk about a physicist’s long career and share his own experience that knowing medical billing is not only important and necessary for every young medical physicist, but that good billing knowledge could provide a valuable contribution to his/her medical physics development. Learning Objectives: The audience will learn the basic definition of Current Procedural Terminology (CPT) codes performed in a Radiation Oncology Department. Understand the differences between hospital coding and physician-based or freestanding coding. Apply proper CPT coding for each Radiation Oncology procedure. Each procedure with its specific CPT code will be discussed in detail. The talk will focus on the process of care and use of actual workflow to understand each CPT code. Example coding of a typical Radiation Oncology procedure. Special procedure coding such as brachytherapy, proton therapy, radiosurgery, and SBRT. Maryann Abogunde – Medical physics opportunities at the Nuclear Regulatory Commission (NRC) The NRC’s responsibilities include the regulation of medical uses of byproduct (radioactive) materials and oversight of medical use end-users (licensees) through a combination of regulatory requirements, licensing, safety oversight including inspection and enforcement, operational experience evaluation, and regulatory support activities. This presentation will explore the

  12. SU-A-210-01: Why Should We Learn Radiation Oncology Billing?

    Energy Technology Data Exchange (ETDEWEB)

    Wu, H. [Willis-Knighton Medical Center (United States)

    2015-06-15

    The purpose of this student annual meeting is to address topics that are becoming more relevant to medical physicists, but are not frequently addressed, especially for students and trainees just entering the field. The talk is divided into two parts: medical billing and regulations. Hsinshun Wu – Why should we learn radiation oncology billing? Many medical physicists do not like to be involved with medical billing or coding during their career. They believe billing is not their responsibility and sometimes they even refuse to participate in the billing process if given the chance. This presentation will talk about a physicist’s long career and share his own experience that knowing medical billing is not only important and necessary for every young medical physicist, but that good billing knowledge could provide a valuable contribution to his/her medical physics development. Learning Objectives: The audience will learn the basic definition of Current Procedural Terminology (CPT) codes performed in a Radiation Oncology Department. Understand the differences between hospital coding and physician-based or freestanding coding. Apply proper CPT coding for each Radiation Oncology procedure. Each procedure with its specific CPT code will be discussed in detail. The talk will focus on the process of care and use of actual workflow to understand each CPT code. Example coding of a typical Radiation Oncology procedure. Special procedure coding such as brachytherapy, proton therapy, radiosurgery, and SBRT. Maryann Abogunde – Medical physics opportunities at the Nuclear Regulatory Commission (NRC) The NRC’s responsibilities include the regulation of medical uses of byproduct (radioactive) materials and oversight of medical use end-users (licensees) through a combination of regulatory requirements, licensing, safety oversight including inspection and enforcement, operational experience evaluation, and regulatory support activities. This presentation will explore the

  13. Converter of a continuous code into the Gray code

    International Nuclear Information System (INIS)

    Gonchar, A.I.; Trubnikov, V.R.

    1979-01-01

    Described is a converter of a continuous code into the Gray code, used in a 12-bit precision amplitude-to-digital converter to decrease the digital component of the spectrometer differential nonlinearity to ±0.7% over 98% of the measured range. The conversion of the continuous code corresponding to the input signal amplitude into the Gray code exploits the regular alternation of ones and zeroes in each bit of the Gray code as the pulse count of the continuous code changes continuously. The converter is built from 155-series elements; the repetition rate of the continuous-code pulses at the converter input is 25 MHz
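    The regularity the abstract relies on — each bit of the Gray code toggling at a fixed, halved rate as the count increases, so that consecutive values differ in exactly one bit — follows from the standard reflected-binary construction. A minimal software sketch of that conversion (illustrative only; the original converter is hardware built from 155-series logic):

```python
def binary_to_gray(n: int) -> int:
    """Convert a non-negative integer to its reflected binary (Gray) code."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the Gray coding by XOR-folding successive right shifts."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

    Successive outputs of `binary_to_gray` differ in a single bit, which is exactly the property that lets a counter-driven converter avoid transient mis-codes at bit boundaries.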

  14. SCORCH - a zero dimensional plasma evolution and transport code for use in small and large tokamak systems

    International Nuclear Information System (INIS)

    Clancy, B.E.; Cook, J.L.

    1984-12-01

    The zero-dimensional code SCORCH determines number density and temperature evolution in plasmas using concepts derived from the Hinton and Hazeltine transport theory. The code uses the previously reported ADL-1 data library

  15. The conditions that promote fear learning: prediction error and Pavlovian fear conditioning.

    Science.gov (United States)

    Li, Susan Shi Yuan; McNally, Gavan P

    2014-02-01

    A key insight of associative learning theory is that learning depends on the actions of prediction error: a discrepancy between the actual and expected outcomes of a conditioning trial. When positive, such error causes increments in associative strength and, when negative, such error causes decrements in associative strength. Prediction error can act directly on fear learning by determining the effectiveness of the aversive unconditioned stimulus or indirectly by determining the effectiveness, or associability, of the conditioned stimulus. Evidence from a variety of experimental preparations in human and non-human animals suggest that discrete neural circuits code for these actions of prediction error during fear learning. Here we review the circuits and brain regions contributing to the neural coding of prediction error during fear learning and highlight areas of research (safety learning, extinction, and reconsolidation) that may profit from this approach to understanding learning. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
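    The increment/decrement logic described above is captured by the textbook Rescorla-Wagner delta rule. The following sketch illustrates that standard model, not code from the reviewed studies; parameter names and values are illustrative:

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Track associative strength V of a conditioned stimulus across trials.

    trials: sequence of booleans, True if the aversive US is delivered.
    alpha:  learning rate (associability of the CS).
    lam:    asymptote of learning supported by the US (0 when US omitted).
    """
    V = 0.0
    history = []
    for us_present in trials:
        outcome = lam if us_present else 0.0
        delta = outcome - V   # prediction error: actual minus expected outcome
        V += alpha * delta    # positive error increments V, negative error decrements it
        history.append(V)
    return history

# Acquisition (US on every trial) followed by extinction (US omitted):
curve = rescorla_wagner([True] * 10 + [False] * 10)
```

    During acquisition the positive prediction error shrinks as V approaches the asymptote; during extinction the error turns negative and associative strength declines, mirroring the increments and decrements described above.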

  16. An interplay of fusiform gyrus and hippocampus enables prototype- and exemplar-based category learning.

    Science.gov (United States)

    Lech, Robert K; Güntürkün, Onur; Suchan, Boris

    2016-09-15

    The aim of the present study was to examine the contributions of different brain structures to prototype- and exemplar-based category learning using functional magnetic resonance imaging (fMRI). Twenty-eight subjects performed a categorization task in which they had to assign prototypes and exceptions to two different families. This test procedure usually produces different learning curves for prototype and exception stimuli. Our behavioral data replicated these previous findings by showing an initially superior performance for prototypes and typical stimuli and a switch from a prototype-based to an exemplar-based categorization for exceptions in the later learning phases. Since performance varied, we divided participants into learners and non-learners. Analysis of the functional imaging data revealed that the interaction of group (learners vs. non-learners) and block (Block 5 vs. Block 1) yielded an activation of the left fusiform gyrus for the processing of prototypes, and an activation of the right hippocampus for exceptions after learning the categories. Thus, successful prototype- and exemplar-based category learning is associated with activations of complementary neural substrates that constitute object-based processes of the ventral visual stream and their interaction with unique-cue representations, possibly based on sparse coding within the hippocampus. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Underworld - Bringing a Research Code to the Classroom

    Science.gov (United States)

    Moresi, L. N.; Mansour, J.; Giordani, J.; Farrington, R.; Kaluza, O.; Quenette, S.; Woodcock, R.; Squire, G.

    2017-12-01

    While there are many reasons to celebrate the passing of punch card programming and flickering green screens, the loss of the sense of wonder at the very existence of computers and the calculations they make possible should not be numbered among them. Computers have become so familiar that students are often unaware that formal and careful design of algorithms and their implementations remains a valuable and important skill that has to be learned and practiced to achieve expertise and genuine understanding. In teaching geodynamics and geophysics at undergraduate level, we aimed to be able to bring our research tools into the classroom - even when those tools are advanced, parallel research codes that we typically deploy on hundreds or thousands of processors, and we wanted to teach not just the physical concepts that are modelled by these codes but a sense of familiarity with computational modelling and the ability to discriminate a reliable model from a poor one. The underworld code (www.underworldcode.org) was developed for modelling plate-scale fluid mechanics and studying problems in lithosphere dynamics. Though specialised for this task, underworld has a straightforward python user interface that allows it to run within the environment of jupyter notebooks on a laptop (at modest resolution, of course). The python interface was developed for adaptability in addressing new research problems, but also lends itself to integration into a python-driven learning environment. To manage the heavy demands of installing and running underworld in a teaching laboratory, we have developed a workflow in which we install docker containers in the cloud which support a number of students to run their own environment independently. We share our experience blending notebooks and static webpages into a single web environment, and we explain how we designed our graphics and analysis tools to allow notebook "scripts" to be queued and run on a supercomputer.

  18. New Zealand Summer of Code/Summer of Technology: an industry, student and tertiary engagement

    OpenAIRE

    Komisarczuk, Peter; Clegg, John; McDavitt, Ruth; Linton, Andy

    2011-01-01

    In 2006 the Wellington Summer of Code was brought to life, engaging ICT undergraduates with innovative Wellington employers; it has since developed into a thriving talent pipeline engaging all levels of tertiary students and industry in the Wellington region. Summer of Code engages students during term time through industry-led learning and a summer seminar and workshop series that are open to all. It has worked with the NZCS to integrate the Evening with Industry where undergraduates see young IT p...

  19. Linear-Time Non-Malleable Codes in the Bit-Wise Independent Tampering Model

    DEFF Research Database (Denmark)

    Cramer, Ronald; Damgård, Ivan Bjerre; Döttling, Nico

    Non-malleable codes were introduced by Dziembowski et al. (ICS 2010) as coding schemes that protect a message against tampering attacks. Roughly speaking, a code is non-malleable if decoding an adversarially tampered encoding of a message m produces the original message m or a value m' (eventually abort) completely unrelated with m. It is known that non-malleability is possible only for restricted classes of tampering functions. Since their introduction, a long line of works has established feasibility results of non-malleable codes against different families of tampering functions. The construction presented here builds on the non-malleable codes of Agrawal et al. (TCC 2015) and of Cheraghchi and Guruswami (TCC 2014) and improves the previous result in the bit-wise tampering model: it builds the first non-malleable codes with linear-time complexity and optimal rate (i.e. rate 1 - o(1)).

  20. A Coding Scheme to Analyse the Online Asynchronous Discussion Forums of University Students

    Science.gov (United States)

    Biasutti, Michele

    2017-01-01

    The current study describes the development of a content analysis coding scheme to examine transcripts of online asynchronous discussion groups in higher education. The theoretical framework comprises the theories regarding knowledge construction in computer-supported collaborative learning (CSCL) based on a sociocultural perspective. The coding…

  1. GRADSPMHD: A parallel MHD code based on the SPH formalism

    Science.gov (United States)

    Vanaverbeke, S.; Keppens, R.; Poedts, S.

    2014-03-01

    We present GRADSPMHD, a completely Lagrangian parallel magnetohydrodynamics code based on the SPH formalism. The implementation of the equations of SPMHD in the “GRAD-h” formalism assembles known results, including the derivation of the discretized MHD equations from a variational principle, the inclusion of time-dependent artificial viscosity, resistivity and conductivity terms, as well as the inclusion of a mixed hyperbolic/parabolic correction scheme for satisfying the ∇·B = 0 constraint on the magnetic field. The code uses a tree-based formalism for neighbor finding and can optionally use the tree code for computing the self-gravity of the plasma. The structure of the code closely follows the framework of our parallel GRADSPH FORTRAN 90 code which we added previously to the CPC program library. We demonstrate the capabilities of GRADSPMHD by running 1, 2, and 3 dimensional standard benchmark tests and we find good agreement with previous work done by other researchers. The code is also applied to the problem of simulating the magnetorotational instability in 2.5D shearing box tests as well as in global simulations of magnetized accretion disks. We find good agreement with available results on this subject in the literature. Finally, we discuss the performance of the code on a parallel supercomputer with distributed memory architecture. Catalogue identifier: AERP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERP_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 620503 No. of bytes in distributed program, including test data, etc.: 19837671 Distribution format: tar.gz Programming language: FORTRAN 90/MPI. Computer: HPC cluster. Operating system: Unix. Has the code been vectorized or parallelized?: Yes, parallelized using MPI. RAM: ~30 MB for a

  2. A proposed framework for computational fluid dynamics code calibration/validation

    International Nuclear Information System (INIS)

    Oberkampf, W.L.

    1993-01-01

    The paper reviews the terminology and methodology that have been introduced during the last several years for building confidence in the predictions from Computational Fluid Dynamics (CFD) codes. Code validation terminology developed for nuclear reactor analyses and aerospace applications is reviewed and evaluated. Currently used terminology such as "calibrated code," "validated code," and "validation experiment" is discussed along with the shortcomings and criticisms of these terms. A new framework is proposed for building confidence in CFD code predictions that overcomes some of the difficulties of past procedures and delineates the causes of uncertainty in CFD predictions. Building on previous work, new definitions of code verification and calibration are proposed. These definitions provide more specific requirements for the knowledge level of the flow physics involved and the solution accuracy of the given partial differential equations. As part of the proposed framework, categories are also proposed for flow physics research, flow modeling research, and the application of numerical predictions. The contributions of physical experiments, analytical solutions, and other numerical solutions are discussed, showing that each should be designed to achieve a distinctively separate purpose in building confidence in the accuracy of CFD predictions. A number of examples are given for each approach to suggest methods for obtaining the highest value for CFD code quality assurance

  3. Code comparison results for the loft LP-FP-2 experiment

    International Nuclear Information System (INIS)

    Merilo, M.; Mecham, D.C.

    1991-01-01

    Computer code calculations are compared with thermal-hydraulic and fission product release, transport, and deposition data obtained from the OECD-LOFT LP-FP-2 experiment. Except for the MAAP code, which is a fully integrated severe accident code, the thermal-hydraulic and fission product behavior were calculated with different codes. Six organizations participated in the thermal-hydraulic portion of the code comparison exercise. These calculations were performed with RELAP5, SCDAP/RELAP5, and MAAP. The comparisons show generally well-developed capabilities to determine the thermal-hydraulic conditions during the early stages of a severe core damage accident. Four participants submitted detailed fission product behavior calculations. Except for MAAP, as stated previously, the fission product inventory, core damage, fission product release, transport and deposition were calculated independently with different codes. Much larger differences than observed for the thermal-hydraulic comparison were evident. The fission product inventory calculations were generally in good agreement with each other. Large differences were observed for release fractions and amounts of deposition. Net release calculations from the primary system were generally accurate within a factor of two or three for the more important fission products

  4. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R&D division of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  5. CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.

    Science.gov (United States)

    Saegusa, Jun

    2008-01-01

    The representative point method for the efficiency calibration of volume samples has been previously proposed. For smoothly implementing the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of a representative point which is intrinsic to each shape of volume sample. The self-absorption correction factors are also given to make correction on the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.

  6. Entanglement-assisted quantum MDS codes constructed from negacyclic codes

    Science.gov (United States)

    Chen, Jianzhang; Huang, Yuanyuan; Feng, Chunhui; Chen, Riqing

    2017-12-01

    Recently, entanglement-assisted quantum codes have been constructed from cyclic codes by some scholars. However, determining the number of shared entangled pairs required to construct entanglement-assisted quantum codes is not an easy task. In this paper, we propose a decomposition of the defining set of negacyclic codes. Based on this method, the four families of entanglement-assisted quantum codes constructed in this paper satisfy the entanglement-assisted quantum Singleton bound, where the minimum distance satisfies q+1 ≤ d ≤ (n+2)/2. Furthermore, we construct two families of entanglement-assisted quantum codes with maximal entanglement.

  7. Learning Java

    CERN Document Server

    Niemeyer, Patrick

    2005-01-01

    Version 5.0 of the Java 2 Standard Edition SDK is the most important upgrade since Java first appeared a decade ago. With Java 5.0, you'll not only find substantial changes in the platform, but to the language itself - something that the developers of Java took five years to complete. The main goal of Java 5.0 is to make it easier for you to develop safe, powerful code, but none of these improvements makes Java any easier to learn, even if you've programmed with Java for years. And that means our bestselling hands-on tutorial takes on even greater significance. Learning Java is the most widely sou

  8. Codes and standards and other guidance cited in regulatory documents

    International Nuclear Information System (INIS)

    Nickolaus, J.R.; Bohlander, K.L.

    1996-08-01

    As part of the U.S. Nuclear Regulatory Commission (NRC) Standard Review Plan Update and Development Program (SRP-UDP), Pacific Northwest National Laboratory developed a listing of industry consensus codes and standards and other government and industry guidance referred to in regulatory documents. The SRP-UDP has been completed and the SRP-Maintenance Program (SRP-MP) is now maintaining this listing. Besides updating previous information, Revision 3 adds approximately 80 citations. This listing identifies the version of the code or standard cited in the regulatory document, the regulatory document, and the current version of the code or standard. It also provides a summary characterization of the nature of the citation. This listing was developed from electronic searches of the Code of Federal Regulations and the NRC's Bulletins, Information Notices, Circulars, Enforcement Manual, Generic Letters, Inspection Manual, Policy Statements, Regulatory Guides, Standard Technical Specifications and the Standard Review Plan (NUREG-0800)

  9. Codes and standards and other guidance cited in regulatory documents

    Energy Technology Data Exchange (ETDEWEB)

    Nickolaus, J.R.; Bohlander, K.L.

    1996-08-01

    As part of the U.S. Nuclear Regulatory Commission (NRC) Standard Review Plan Update and Development Program (SRP-UDP), Pacific Northwest National Laboratory developed a listing of industry consensus codes and standards and other government and industry guidance referred to in regulatory documents. The SRP-UDP has been completed and the SRP-Maintenance Program (SRP-MP) is now maintaining this listing. Besides updating previous information, Revision 3 adds approximately 80 citations. This listing identifies the version of the code or standard cited in the regulatory document, the regulatory document, and the current version of the code or standard. It also provides a summary characterization of the nature of the citation. This listing was developed from electronic searches of the Code of Federal Regulations and the NRC's Bulletins, Information Notices, Circulars, Enforcement Manual, Generic Letters, Inspection Manual, Policy Statements, Regulatory Guides, Standard Technical Specifications and the Standard Review Plan (NUREG-0800).

  10. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  11. Generalized instantly decodable network coding for relay-assisted networks

    KAUST Repository

    Elmahdy, Adel M.

    2013-09-01

    In this paper, we investigate the problem of minimizing the frame completion delay for Instantly Decodable Network Coding (IDNC) in relay-assisted wireless multicast networks. We first propose a packet recovery algorithm in the single relay topology which employs generalized IDNC instead of strict IDNC previously proposed in the literature for the same relay-assisted topology. This use of generalized IDNC is supported by showing that it is a super-set of the strict IDNC scheme, and thus can generate coding combinations that are at least as efficient as strict IDNC in reducing the average completion delay. We then extend our study to the multiple relay topology and propose a joint generalized IDNC and relay selection algorithm. This proposed algorithm benefits from the reception diversity of the multiple relays to further reduce the average completion delay in the network. Simulation results show that our proposed solutions achieve much better performance compared to previous solutions in the literature. © 2013 IEEE.
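    The core IDNC idea — XOR-combining packets so that every targeted receiver can decode a wanted packet instantly from a single transmission — can be sketched as follows. This is a generic illustration of instant decodability with invented names and a brute-force search, not the authors' relay-selection algorithm:

```python
from itertools import combinations

def instantly_decodable(coded_set, has, wants):
    """A XOR combination is instantly decodable for a receiver when at most
    one of its packets is unknown to that receiver (and that one is wanted)."""
    unknown = coded_set - has
    return len(unknown) == 0 or (len(unknown) == 1 and unknown <= wants)

def best_combination(packets, receivers):
    """Exhaustively pick the XOR combination serving the most receivers.

    receivers: list of (has, wants) frozenset pairs describing the packets
    each receiver already holds and still needs.
    """
    best, best_served = None, -1
    for r in range(1, len(packets) + 1):
        for combo in combinations(packets, r):
            coded = frozenset(combo)
            served = sum(
                1 for has, wants in receivers
                if instantly_decodable(coded, has, wants) and (coded - has)
            )
            if served > best_served:
                best, best_served = coded, served
    return best, best_served
```

    For example, if receiver A holds packet 1 and wants packet 2 while receiver B holds 2 and wants 1, transmitting 1 XOR 2 serves both at once; the generalized-IDNC point in the abstract is that relaxing the strict decodability constraint enlarges this search space of admissible combinations.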

  12. QR code based noise-free optical encryption and decryption of a gray scale image

    Science.gov (United States)

    Jiao, Shuming; Zou, Wenbin; Li, Xia

    2017-03-01

    In optical encryption systems, speckle noise is one major challenge in obtaining high quality decrypted images. This problem can be addressed by employing a QR code based noise-free scheme. Previous works have been conducted for optically encrypting a few characters or a short expression employing QR codes. This paper proposes a practical scheme for optically encrypting and decrypting a gray-scale image based on QR codes for the first time. The proposed scheme is compatible with common QR code generators and readers. Numerical simulation results reveal the proposed method can encrypt and decrypt an input image correctly.

  13. Inclusion of pressure and flow in the KITES MHD equilibrium code

    International Nuclear Information System (INIS)

    Raburn, Daniel; Fukuyama, Atsushi

    2013-01-01

    One of the simplest self-consistent models of a plasma is single-fluid magnetohydrodynamic (MHD) equilibrium with no bulk fluid flow under axisymmetry. However, both fluid flow and non-axisymmetric effects can significantly impact plasma equilibrium and confinement properties: in particular, fluid flow can produce profile pedestals, and non-axisymmetric effects can produce islands and stochastic regions. There exist a number of computational codes which are capable of calculating equilibria with arbitrary flow or with non-axisymmetric effects. Previously, a concept for a code to calculate MHD equilibria with flow in non-axisymmetric systems was presented, called the KITES (Kyoto ITerative Equilibrium Solver) code. Since then, many of the computational modules for the KITES code have been completed, and the work-in-progress KITES code has been used to calculate non-axisymmetric force-free equilibria. Additional computational modules are required to allow the KITES code to calculate equilibria with pressure and flow. Here, the authors report on the approaches used in developing these modules and provide a sample calculation with pressure. (author)

  14. Face learning and the emergence of view-independent face recognition: an event-related brain potential study.

    Science.gov (United States)

    Zimmermann, Friederike G S; Eimer, Martin

    2013-06-01

    Recognizing unfamiliar faces is more difficult than familiar face recognition, and this has been attributed to qualitative differences in the processing of familiar and unfamiliar faces. Familiar faces are assumed to be represented by view-independent codes, whereas unfamiliar face recognition depends mainly on view-dependent low-level pictorial representations. We employed an electrophysiological marker of visual face recognition processes in order to track the emergence of view-independence during the learning of previously unfamiliar faces. Two face images showing either the same or two different individuals in the same or two different views were presented in rapid succession, and participants had to perform an identity-matching task. On trials where both faces showed the same view, repeating the face of the same individual triggered an N250r component at occipito-temporal electrodes, reflecting the rapid activation of visual face memory. A reliable N250r component was also observed on view-change trials. Crucially, this view-independence emerged as a result of face learning. In the first half of the experiment, N250r components were present only on view-repetition trials but were absent on view-change trials, demonstrating that matching unfamiliar faces was initially based on strictly view-dependent codes. In the second half, the N250r was triggered not only on view-repetition trials but also on view-change trials, indicating that face recognition had now become more view-independent. This transition may be due to the acquisition of abstract structural codes of individual faces during face learning, but could also reflect the formation of associative links between sets of view-specific pictorial representations of individual faces. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Turbo-Gallager Codes: The Emergence of an Intelligent Coding ...

    African Journals Online (AJOL)

    Today, both turbo codes and low-density parity-check codes are largely superior to other code families and are being used in an increasing number of modern communication systems including 3G standards, satellite and deep space communications. However, the two codes have certain distinctive characteristics that ...

  16. TASS code topical report. V.1 TASS code technical manual

    International Nuclear Information System (INIS)

    Sim, Suk K.; Chang, W. P.; Kim, K. D.; Kim, H. C.; Yoon, H. Y.

    1997-02-01

    TASS 1.0 code has been developed at KAERI for the initial and reload non-LOCA safety analysis for the operating PWRs as well as the PWRs under construction in Korea. TASS code will replace various vendors' non-LOCA safety analysis codes currently used for the Westinghouse and ABB-CE type PWRs in Korea. This can be achieved through TASS code input modifications specific to each reactor type. The TASS code can be run interactively through keyboard operation. A semi-modular configuration used in developing the TASS code enables the user to easily implement new models. The TASS code has been programmed in FORTRAN77, which makes it easy to install and port to different computer environments. The TASS code can be utilized for steady state simulation as well as non-LOCA transient simulations such as power excursions, reactor coolant pump trips, load rejections, loss of feedwater, steam line breaks, steam generator tube ruptures, rod withdrawal and drop, and anticipated transients without scram (ATWS). The malfunctions of the control systems, components and operator actions, and the transients caused by these malfunctions, can be easily simulated using the TASS code. This technical report describes the TASS 1.0 code models, including the reactor thermal-hydraulic, reactor core and control models. This TASS code technical manual has been prepared as a part of the TASS code manual, which includes the TASS code user's manual and the TASS code validation report, and will be submitted to the regulatory body as a TASS code topical report for licensing non-LOCA safety analysis for the Westinghouse and ABB-CE type PWRs operating and under construction in Korea. (author). 42 refs., 29 tabs., 32 figs

  17. Incremental online object learning in a vehicular radar-vision fusion framework

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Zhengping [Los Alamos National Laboratory; Weng, Juyang [Los Alamos National Laboratory; Luciw, Matthew [IEEE; Zeng, Shuqing [IEEE

    2010-10-19

    In this paper, we propose an object learning system that incorporates sensory information from an automotive radar system and a video camera. The radar system provides coarse attention, focusing visual analysis on relatively small areas within the image plane. The attended visual areas are coded and learned by a 3-layer neural network utilizing what is called in-place learning, where every neuron is responsible for the learning of its own signal processing characteristics within its connected network environment, through inhibitory and excitatory connections with other neurons. The modeled bottom-up, lateral, and top-down connections in the network enable sensory sparse coding, unsupervised learning and supervised learning to occur concurrently. The presented system is applied to learning two types of objects encountered in multiple outdoor driving settings. Cross-validation results show overall recognition accuracy above 95% for the radar-attended window images. In comparison with the uncoded representation and purely unsupervised learning (without top-down connection), the proposed network improves the recognition rate by 15.93% and 6.35% respectively. The proposed system also compares favorably with other learning algorithms. The result indicates that our learning system is the only one to fit all the challenging criteria for the development of an incremental and online object learning system.

  18. [The QR code in society, economy and medicine--fields of application, options and chances].

    Science.gov (United States)

    Flaig, Benno; Parzeller, Markus

    2011-01-01

    2D codes like the QR Code ("Quick Response") are becoming more and more common in society and medicine. The application spectrum and benefits in medicine and other fields are described. 2D codes can be created free of charge on any computer with internet access without any previous knowledge. The codes can be easily used in publications, presentations, on business cards and posters. Editors choose between contact details, text or a hyperlink as information behind the code. At expert conferences, linkage by QR Code allows the audience to download presentations and posters quickly. The documents obtained can then be saved, printed, processed etc. Fast access to stored data in the internet makes it possible to integrate additional and explanatory multilingual videos into medical posters. In this context, a combination of different technologies (printed handout, QR Code and screen) may be reasonable.

  19. Bayesian decision support for coding occupational injury data.

    Science.gov (United States)

    Nanda, Gaurav; Grattan, Kathleen M; Chu, MyDzung T; Davis, Letitia K; Lehto, Mark R

    2016-06-01

    Studies on autocoding injury data have found that machine learning algorithms perform well for categories that occur frequently but often struggle with rare categories. Therefore, manual coding, although resource-intensive, cannot be eliminated. We propose a Bayesian decision support system to autocode a large portion of the data, filter cases for manual review, and assist human coders by presenting them top k prediction choices and a confusion matrix of predictions from Bayesian models. We studied the prediction performance of Single-Word (SW) and Two-Word-Sequence (TW) Naïve Bayes models on a sample of data from the 2011 Survey of Occupational Injury and Illness (SOII). We used the agreement in prediction results of SW and TW models, and various prediction strength thresholds for autocoding and filtering cases for manual review. We also studied the sensitivity of the top k predictions of the SW model, TW model, and SW-TW combination, and then compared the accuracy of the manually assigned codes to SOII data with that of the proposed system. The accuracy of the proposed system, assuming well-trained coders reviewing a subset of only 26% of cases flagged for review, was estimated to be comparable (86.5%) to the accuracy of the original coding of the data set (range: 73%-86.8%). Overall, the TW model had higher sensitivity than the SW model, and the accuracy of the prediction results increased when the two models agreed, and for higher prediction strength thresholds. The sensitivity of the top five predictions was 93%. The proposed system seems promising for coding injury data as it offers comparable accuracy and less manual coding. Accurate and timely coded occupational injury data is useful for surveillance as well as prevention activities that aim to make workplaces safer. Copyright © 2016 Elsevier Ltd and National Safety Council. All rights reserved.
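    The threshold-based filtering described above can be sketched with a minimal single-word Naïve Bayes classifier. The class, method names and toy records below are hypothetical illustrations, not the actual SOII models, which are more elaborate:

```python
from collections import Counter, defaultdict
import math

class NaiveBayesAutocoder:
    """Minimal single-word Naive Bayes autocoder sketch (hypothetical)."""

    def __init__(self):
        self.class_counts = Counter()
        self.word_counts = defaultdict(Counter)
        self.vocab = set()

    def train(self, records):
        """records: iterable of (injury narrative, assigned code) pairs."""
        for text, code in records:
            self.class_counts[code] += 1
            for w in text.lower().split():
                self.word_counts[code][w] += 1
                self.vocab.add(w)

    def scores(self, text):
        """Log joint probability of each code given the narrative words."""
        words = text.lower().split()
        total = sum(self.class_counts.values())
        result = {}
        for code, n in self.class_counts.items():
            logp = math.log(n / total)  # class prior
            denom = sum(self.word_counts[code].values()) + len(self.vocab)
            for w in words:
                # Laplace smoothing handles words unseen for this code
                logp += math.log((self.word_counts[code][w] + 1) / denom)
            result[code] = logp
        return result

    def top_k(self, text, k=5):
        """Top k prediction choices to present to a human coder."""
        s = self.scores(text)
        return sorted(s, key=s.get, reverse=True)[:k]

    def autocode(self, text, threshold=0.9):
        """Autocode only when normalized prediction strength meets the
        threshold; otherwise flag the case (None) for manual review."""
        s = self.scores(text)
        m = max(s.values())
        probs = {c: math.exp(v - m) for c, v in s.items()}
        z = sum(probs.values())
        best = max(probs, key=probs.get)
        strength = probs[best] / z
        return (best, strength) if strength >= threshold else (None, strength)
```

    Trained on a handful of toy narratives such as `("worker fell from ladder", "FALL")`, the coder autocodes high-strength cases and routes low-strength ones to manual review, mirroring the paper's split between autocoded and flagged cases.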

  20. The Italian Code of Medical Deontology: characterizing features of its 2014 edition.

    Science.gov (United States)

    Conti, Andrea Alberto

    2015-09-14

    The latest edition of the Italian Code of Medical Deontology was released by the Italian Federation of the Registers of Physicians and Dentists in May 2014 (1). The previous edition of the Italian Code dated back to 2006 (2), and it has been integrated and updated by a multi-professional and inter-disciplinary panel involving, besides physicians, representatives of scientific societies and trade unions, jurisconsults and experts in bioethics…

  1. Elastic creep-fatigue evaluation for ASME [American Society of Mechanical Engineers] code

    International Nuclear Information System (INIS)

    Severud, L.K.; Winkel, B.V.

    1987-02-01

    Reassessment of the past ASME N-47 creep-fatigue rules has been under way by committee members. The newly proposed elastic creep-fatigue methods are easier to apply than those previously in the code case. They also provide a wider range of practical application while still providing conservative assessments. It is expected that the new N-47 code rules for elastic creep-fatigue evaluation will be adopted in the near future.

  2. Repurposeable Learning Objects Linked to Teaching and Learning Styles

    Directory of Open Access Journals (Sweden)

    Jeremy Dunning

    2004-02-01

    Full Text Available Multimedia learning objects are an essential component of high quality, technology-mediated instruction. Learning objects allow the student to use the content learned in a particular part of a course and: (1) demonstrate mastery of the content, (2) apply that knowledge to solving a problem, and (3) use the content in a critical thinking exercise that both demonstrates mastery and allows the student to place the content within the context of the larger topic of the course. The difficulty associated with the use of learning objects on a broad scale is that they require programming skills most professors and instructors do not possess. Learning objects also tend to be custom productions and are defined in terms of programming and code terminology, further limiting the professor's ability to understand how they are created. Learning objects defined in terms of styles of learning and teaching allow professors and instructors to develop a deeper understanding of the learning objects and the design process. A set of learning objects has been created that are designed for some of the important styles of learning and teaching. They include visual learning, writing skills, critical thinking, time-revealed scenarios, case studies and empirical observation. The learning objects are designed and described in terms that the average instructor can readily understand, redesign and incorporate into their own courses. They are also designed in such a way that they can readily be repurposed for new applications in other courses and subject areas, with little or no additional programming.

  3. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Thommesen, Christian

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.
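    The interleaving itself (not Bleichenbacher's decoder) can be sketched as follows: L codewords are transmitted column-wise, so a burst of up to L consecutive channel errors corrupts at most one symbol in each codeword, which each (N, K) Reed-Solomon code can then correct individually. Function names here are illustrative:

```python
def interleave(codewords):
    """Column-wise interleave L equal-length codewords into one symbol stream:
    position i of every codeword is sent before position i+1 of any of them."""
    return [cw[i] for i in range(len(codewords[0])) for cw in codewords]

def deinterleave(stream, L):
    """Recover the L codewords from the interleaved symbol stream."""
    return [stream[j::L] for j in range(L)]
```

    With L = 2 and codewords `[1, 2, 3]` and `[4, 5, 6]`, the transmitted stream is `[1, 4, 2, 5, 3, 6]`; a 2-symbol burst hits each codeword at most once.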

  4. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    OpenAIRE

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time. A novel low-complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content ...

  5. Modification of PRETOR Code to Be Applied to Transport Simulation in Stellarators

    International Nuclear Information System (INIS)

    Fontanet, J.; Castejon, F.; Dies, J.; Fontdecaba, J.; Alejaldre, C.

    2001-01-01

    The 1.5D transport code PRETOR, which has previously been used to simulate tokamak plasmas, has been modified to perform transport analysis in stellarator geometry. The main modifications introduced in the code are related to the magnetic equilibrium and to the modelling of energy and particle transport. As a result, the PRETOR-Stellarator version has been achieved and the code is suitable for performing simulations of stellarator plasmas. As an example, PRETOR-Stellarator has been used in the transport analysis of several Heliac Flexible TJ-II shots, and the results are compared with those obtained using the PROCTR code. These results are also compared with those obtained using the tokamak version of PRETOR, to show the importance of the introduced changes. (Author) 18 refs

  6. Do Librarians Have a Shared Set of Values? A Comparative Study of 36 Codes of Ethics Based on Gorman's "Enduring Values"

    Science.gov (United States)

    Foster, Catherine; McMenemy, David

    2012-01-01

    Thirty-six ethical codes from national professional associations were studied, with the aim of testing whether librarians have global shared values or whether political and cultural contexts have significantly influenced the codes' content. Gorman's eight core values of stewardship, service, intellectual freedom, rationalism, literacy and learning, equity of…

  7. User Manual for the NASA Glenn Ice Accretion Code LEWICE. Version 2.2.2

    Science.gov (United States)

    Wright, William B.

    2002-01-01

    A research project is underway at NASA Glenn to produce a computer code which can accurately predict ice growth under a wide range of meteorological conditions for any aircraft surface. This report will present a description of the code inputs and outputs from version 2.2.2 of this code, which is called LEWICE. This version differs from release 2.0 due to the addition of advanced thermal analysis capabilities for de-icing and anti-icing applications using electrothermal heaters or bleed air applications. An extensive effort was also undertaken to compare the results against the database of electrothermal results which have been generated in the NASA Glenn Icing Research Tunnel (IRT) as was performed for the validation effort for version 2.0. This report will primarily describe the features of the software related to the use of the program. Appendix A of this report has been included to list some of the inner workings of the software or the physical models used. This information is also available in the form of several unpublished documents internal to NASA. This report is intended as a replacement for all previous user manuals of LEWICE. In addition to describing the changes and improvements made for this version, information from previous manuals may be duplicated so that the user will not need to consult previous manuals to use this code.

  8. Exploring the Engagement Effects of Visual Programming Language for Data Structure Courses

    Science.gov (United States)

    Chang, Chih-Kai; Yang, Ya-Fei; Tsai, Yu-Tzu

    2017-01-01

    Previous research indicates that understanding the state of learning motivation enables researchers to deeply understand students' learning processes. Studies have shown that visual programming languages use graphical code, enabling learners to learn effectively, improve learning effectiveness, increase learning fun, and offering various other…

  9. Lung volumes: measurement, clinical use, and coding.

    Science.gov (United States)

    Flesch, Judd D; Dine, C Jessica

    2012-08-01

    Measurement of lung volumes is an integral part of complete pulmonary function testing. Some lung volumes can be measured during spirometry; however, measurement of the residual volume (RV), functional residual capacity (FRC), and total lung capacity (TLC) requires special techniques. FRC is typically measured by one of three methods. Body plethysmography uses Boyle's Law to determine lung volumes, whereas inert gas dilution and nitrogen washout use dilution properties of gases. After determination of FRC, expiratory reserve volume and inspiratory vital capacity are measured, which allows the calculation of the RV and TLC. Lung volumes are commonly used for the diagnosis of restriction. In obstructive lung disease, they are used to assess for hyperinflation. Changes in lung volumes can also be seen in a number of other clinical conditions. Reimbursement for measurement of lung volumes requires knowledge of current procedural terminology (CPT) codes, relevant indications, and an appropriate level of physician supervision. Because of recent efforts to eliminate payment inefficiencies, the 10 previous CPT codes for lung volumes, airway resistance, and diffusing capacity have been bundled into four new CPT codes.
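    The derivation of RV and TLC from the measured quantities reduces to simple arithmetic; a minimal sketch, with illustrative (not normative) volumes in liters:

```python
def lung_volumes(frc, erv, ivc):
    """Derive RV and TLC from measured FRC, expiratory reserve volume (ERV)
    and inspiratory vital capacity (IVC); all volumes in liters."""
    rv = frc - erv   # residual volume: gas remaining after maximal exhalation
    tlc = rv + ivc   # total lung capacity: RV plus inspiratory vital capacity
    return rv, tlc
```

    For example, an FRC of 3.0 L, ERV of 1.2 L and IVC of 4.8 L give RV = 1.8 L and TLC = 6.6 L.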

  10. Revised SWAT. The integrated burnup calculation code system

    International Nuclear Information System (INIS)

    Suyama, Kenya; Mochizuki, Hiroki; Kiyosumi, Takehide

    2000-07-01

    SWAT is an integrated burnup code system developed for analysis of post irradiation examination, transmutation of radioactive waste, and burnup credit problem. This report shows an outline and a user's manual of revised SWAT. This revised SWAT includes expansion of functions, increasing supported machines, and correction of several bugs reported from users of previous SWAT. (author)

  11. Revised SWAT. The integrated burnup calculation code system

    Energy Technology Data Exchange (ETDEWEB)

    Suyama, Kenya; Mochizuki, Hiroki [Department of Fuel Cycle Safety Research, Nuclear Safety Research Center, Tokai Research Establishment, Japan Atomic Energy Research Institute, Tokai, Ibaraki (Japan); Kiyosumi, Takehide [The Japan Research Institute, Ltd., Tokyo (Japan)

    2000-07-01

    SWAT is an integrated burnup code system developed for analysis of post irradiation examination, transmutation of radioactive waste, and burnup credit problem. This report shows an outline and a user's manual of revised SWAT. This revised SWAT includes expansion of functions, increasing supported machines, and correction of several bugs reported from users of previous SWAT. (author)

  12. Codes Over Hyperfields

    Directory of Open Access Journals (Sweden)

    Atamewoue Surdive

    2017-12-01

    Full Text Available In this paper, we define linear codes and cyclic codes over a finite Krasner hyperfield and we characterize these codes by their generator matrices and parity check matrices. We also demonstrate that codes over finite Krasner hyperfields are more interesting for coding theory than codes over classical finite fields.

  13. Dictionary Pair Learning on Grassmann Manifolds for Image Denoising.

    Science.gov (United States)

    Zeng, Xianhua; Bian, Wei; Liu, Wei; Shen, Jialie; Tao, Dacheng

    2015-11-01

    Image denoising is a fundamental problem in computer vision and image processing that holds considerable practical importance for real-world applications. The traditional patch-based and sparse coding-driven image denoising methods convert 2D image patches into 1D vectors for further processing. Thus, these methods inevitably break down the inherent 2D geometric structure of natural images. To overcome this limitation pertaining to the previous image denoising methods, we propose a 2D image denoising model, namely, the dictionary pair learning (DPL) model, and we design a corresponding algorithm called the DPL on the Grassmann-manifold (DPLG) algorithm. The DPLG algorithm first learns an initial dictionary pair (i.e., the left and right dictionaries) by employing a subspace partition technique on the Grassmann manifold, wherein the refined dictionary pair is obtained through a sub-dictionary pair merging. The DPLG obtains a sparse representation by encoding each image patch only with the selected sub-dictionary pair. The non-zero elements of the sparse representation are further smoothed by the graph Laplacian operator to remove the noise. Consequently, the DPLG algorithm not only preserves the inherent 2D geometric structure of natural images but also performs manifold smoothing in the 2D sparse coding space. We demonstrate that the DPLG algorithm also improves the structural SIMilarity values of the perceptual visual quality for denoised images using the experimental evaluations on the benchmark images and Berkeley segmentation data sets. Moreover, the DPLG also produces the competitive peak signal-to-noise ratio values from popular image denoising algorithms.

  14. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    Science.gov (United States)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g., factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present…

  15. Influence of the workplace on learning physical examination skills

    Science.gov (United States)

    2014-01-01

    Background Hospital clerkships are considered crucial for acquiring competencies such as diagnostic reasoning and clinical skills. The actual learning process in the hospital remains poorly understood. This study investigates how students learn clinical skills in workplaces and factors affecting this. Methods Six focus group sessions with 32 students in Internal Medicine rotation (4–9 students per group; sessions 80–90 minutes). Verbatim transcripts were analysed by emerging themes and coded independently by three researchers followed by constant comparison and axial coding. Results Students report to learn the systematics of the physical examination, gain agility and become able to recognise pathological signs. The learning process combines working alongside others and working independently with increasing responsibility for patient care. Helpful behaviour includes making findings explicit through patient files or during observation, feedback by abnormal findings and taking initiative. Factors affecting the process negatively include lack of supervision, uncertainty about tasks and expectations, and social context such as hierarchy of learners and perceived learning environment. Conclusion Although individual student experiences vary greatly between different hospitals, it seems that proactivity and participation are central drivers for learning. These results can improve the quality of existing programmes and help design new ways to learn physical examination skills. PMID:24678562

  16. Influence of the Migration Process on the Learning Performances of Fuzzy Knowledge Bases

    DEFF Research Database (Denmark)

    Akrout, Khaled; Baron, Luc; Balazinski, Marek

    2007-01-01

    This paper presents the influence of the process of migration between populations in GENO-FLOU, an environment for learning fuzzy knowledge bases by genetic algorithms. Initially the algorithm did not use the process of migration. For the learning, the algorithm uses a hybrid coding, binary for the rule base and real for the data base. This hybrid coding, used with a set of specialized reproduction operators, proved to be an effective learning environment. Simulations were made in this environment by adding a process of migration. While varying the number of populations…
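    A migration step between genetic-algorithm populations can be sketched as a ring exchange of best individuals. This is an illustrative scheme only; GENO-FLOU's actual migration operator is not specified in the abstract:

```python
def migrate(populations, fitness, n_migrants=1):
    """Ring-topology migration: each population sends copies of its best
    individuals to the next population, replacing the receiver's worst.
    populations: list of populations, each a list of individuals."""
    k = len(populations)
    # Select emigrants from every population before any replacement happens
    migrants = [sorted(pop, key=fitness, reverse=True)[:n_migrants]
                for pop in populations]
    for i, pop in enumerate(populations):
        incoming = migrants[(i - 1) % k]   # receive from the previous population
        pop.sort(key=fitness)              # worst individuals first
        pop[:n_migrants] = [list(m) for m in incoming]  # copy, don't alias
    return populations
```

    With two populations `[[1], [5]]` and `[[2], [9]]` and fitness equal to the single gene, each population's best individual displaces the other's worst, spreading good genetic material between otherwise isolated populations.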

  17. ITER Dynamic Tritium Inventory Modeling Code

    International Nuclear Information System (INIS)

    Cristescu, Ioana-R.; Doerr, L.; Busigin, A.; Murdoch, D.

    2005-01-01

    A tool for tritium inventory evaluation within each sub-system of the Fuel Cycle of ITER is vital, with respect both to the process of licensing ITER and to its operation. It is very likely that measurements of total tritium inventories may not be possible for all sub-systems; however, tritium accounting may be achieved by modeling its hold-up within each sub-system and by validating these models in real time against the monitored flows and tritium streams between the systems. To get reliable results, accurate dynamic modeling of the tritium content in each sub-system is necessary. In order to optimize the configuration and operation of the ITER fuel cycle, a dynamic fuel cycle model was developed progressively in the decade up to 2000-2001. As the designs of some sub-systems of the fuel cycle (i.e. vacuum pumping, Neutral Beam Injectors (NBI)) have substantially progressed meanwhile, a new code has been developed under a different platform to incorporate these modifications. The new code takes over the models and algorithms for some subsystems, such as the Isotope Separation System (ISS); where simplified models had previously been considered, more detailed ones have been introduced, as for the Water Detritiation System (WDS). To reflect all these changes, the new code developed by the EU participating team was named TRIMO (Tritium Inventory Modeling), to emphasize the use of the code for assessing the tritium inventory within ITER.

  18. Algorithm Building and Learning Programming Languages Using a New Educational Paradigm

    Science.gov (United States)

    Jain, Anshul K.; Singhal, Manik; Gupta, Manu Sheel

    2011-08-01

    This research paper presents a new concept of using a single tool to associate syntax of various programming languages, algorithms and basic coding techniques. A simple framework has been programmed in Python that helps students learn skills to develop algorithms, and implement them in various programming languages. The tool provides an innovative and a unified graphical user interface for development of multimedia objects, educational games and applications. It also aids collaborative learning amongst students and teachers through an integrated mechanism based on Remote Procedure Calls. The paper also elucidates an innovative method for code generation to enable students to learn the basics of programming languages using drag-n-drop methods for image objects.

  19. International outage coding system for nuclear power plants. Results of a co-ordinated research project

    International Nuclear Information System (INIS)

    2004-05-01

    The experience obtained in each individual plant constitutes the most relevant source of information for improving its performance. However, experience at the level of the utility, country and worldwide is also extremely valuable, because there are limitations to what can be learned from in-house experience. But learning from the experience of others is admittedly difficult if the information is not harmonized. Therefore, such systems should be standardized and applicable to all types of reactors, satisfying the needs of the broad set of nuclear power plant operators worldwide and allowing experience to be shared internationally. To cope with the considerable amount of information gathered from nuclear power plants worldwide, it is necessary to codify the information, facilitating the identification of causes of outages and system or component failures. Therefore, the IAEA established a sponsored Co-ordinated Research Project (CRP) on the International Outage Coding System to develop a general, internationally applicable system for coding nuclear power plant outages, providing worldwide nuclear utilities with a standardized tool for reporting outage information. This TECDOC summarizes the results of this CRP and provides information for transformation of the historical outage data into the new coding system, taking into consideration the existing systems for coding nuclear power plant events (WANO, IAEA-IRS and IAEA PRIS) but avoiding duplication of efforts to the maximum possible extent.

  20. Designing and Integrating Purposeful Learning in Game Play: A Systematic Review

    Science.gov (United States)

    Ke, Fengfeng

    2016-01-01

    Via a systematic review of the literature on learning games, this article presents a systematic discussion on the design of intrinsic integration of domain-specific learning in game mechanics and game world design. A total of 69 articles ultimately met the inclusion criteria and were coded for the literature synthesis. Exemplary learning games…